Proof of Separability --- A Verification Technique for a Class of Security Kernels

John Rushby

Proceedings of the 5th International Symposium on Programming, Springer-Verlag LNCS Vol. 137, pp. 352-367, Turin, Italy, 1982

Abstract

A formal model of "secure isolation" between the users of a shared computer system is presented. It is then developed into a security verification technique called "Proof of Separability" whose basis is to prove that the behaviour perceived by each user of the shared system is indistinguishable from that which could be provided by an unshared machine dedicated to his private use. Proof of Separability is suitable for the verification of security kernels which enforce the policy of isolation; it explicitly addresses issues relating to the interpretation of instructions and the flow of control (including interrupts) which have been ignored by previous treatments.

INTRODUCTION

Systems with stringent security requirements such as KSOS [1], KVM/370 [2], and the various `Guards' [3, 4] are amongst the very first computer systems to be produced under commercial contracts that require formal specification and verification of certain aspects of their behaviour. However, doubts have been expressed concerning whether the techniques used to verify these systems really do provide compelling evidence for their security [5, 6, 7]. The purpose of this paper is to develop and justify a new and, it is argued, more appropriate technique for verifying one class of secure systems.

A secure system is one which enforces certain restrictions on access to the information which it contains, and on the communication of information between its users. A precise statement of the restrictions to be enforced by a particular system constitutes its security policy. Secure systems are needed by the military authorities and by other agencies and institutions which process information of great sensitivity or value. In these environments, it is possible that attempts will be made to gain unauthorized access to valuable information by penetrating the defences of any computer system that processes it. It must be assumed that these attacks may be mounted by skilled and determined users with legitimate access to the system, or even by those involved in its design and implementation. Experience has shown that conventional operating systems cannot withstand this kind of attack, nor can they be modified to do so [8, 9, 10, 11, 12].

Accordingly, attention has now turned to the construction of kernelized systems: the idea is to isolate all (and only) the functions which are essential to the security of the system within a security kernel. If the kernel is `correct' in some appropriate sense, then the security of the whole system is assured. The enthusiasm for this approach is due to the fact that a security kernel can be quite a small component of the total system (it is essentially an operating system nucleus) and there is, in consequence, some hope of getting it right. But since the security of the whole edifice rests on this one component, it is absolutely vital that it is right. Compelling evidence is therefore required to attest to the security provided by a kernel. Verification, that is, a formal mathematical proof that the kernel conforms to an appropriate and precise specification of `secure behaviour', is the evidence generally considered to be most convincing [13].

Two main approaches have been proposed for the verification of security kernels.
These are access control verification [14] and information flow analysis [15, 16]. The first of these is concerned to prove that all accesses by users to the stored representations of information are in accord with the security policy. For military systems, however, it is not sufficient merely to be sure that users cannot directly access information to which they are not authorized; it is necessary to be sure that information cannot be leaked to unauthorized users by any means whatsoever.

This is the confinement problem. It was first identified by Lampson [17], who enumerated three kinds of channel that can be used to leak information within a system. Storage channels are those that exploit system storage such as temporary files and state variables, while legitimate channels involve `piggybacking' illicit information onto a legal information channel - by modulating message length, for example. The third type of channel is the covert one (also called a timing channel), which achieves communication by modulating aspects of the system's performance (for example, its paging rate). Unlike access control verification, information flow analysis can establish the absence of storage and legitimate channels and for this reason it has been the verification technique preferred for certain military systems [18].

Although the properties established by access control verification and by information flow analysis are undoubtedly important ones, it is not clear that they amount to a complete guarantee of security. Both these verification techniques are applied to system descriptions from which certain `low level' aspects of system behaviour have been abstracted away. Thus autonomous input/output devices - which can modify the system state asynchronously with program execution and which, by raising interrupts, can drastically change the protection state and the sequence of program execution - are absent from the system descriptions whose security is verified by these methods. This is despite the fact that penetration exercises indicate that it is precisely in their handling of these low level details that many computer systems are most vulnerable to attack - and, consequently, that these are the areas where verification of appropriate behaviour is most to be desired.

In a companion paper to this [6], I have distinguished between high and low level considerations in secure system design and have proposed that different mechanisms and verification techniques should be employed for each level. At the high level, the system should be conceived as a distributed one where the significant issues are those of controlling access to information and the communication of information between conceptually separate single-user machines. The fact that all users actually happen to share the same physical machine should be masked at this level. It is precisely the task of the low level security mechanism to perform this masking, and I have proposed a primitive type of kernel called a separation kernel to serve this purpose. Its function is to simulate the distributed environment assumed at the higher level of conceptualization. To this end it provides each system component with a regime (or `virtual machine') whose behaviour is indistinguishable from that of a private machine dedicated to that component alone. The fundamental property to be proved of a separation kernel is that it completely isolates its regimes from one another: there must be absolutely no flow of information from one regime to another.
(In practice, controlled flow of information will be required between certain regimes. I will return to this point in the final section of this paper, but for the present I want to concentrate on the simplest case.)

For the reasons outlined earlier, neither of the established methods of security verification is adequate to the task of verifying a separation kernel: we really need a new method. The technique which I propose is a very natural one that is in accord with the intuition underlying the notion of a separation kernel. It is to prove that the system behaves as if it were composed of several totally separate machines - hence the name of this verification technique: `Proof of Separability'.

Although `separability' is a straightforward notion, its formal definition in terms of a realistic system model is fairly complicated. Possibly, therefore, its own definition may contain errors. The primary purpose of this paper is to convince the reader that this is not the case and that the definition given at the end of this paper is correct. My tactic will be to start off with a specification of `secure isolation' for a very simple system model and then to elaborate it until I arrive at the definition of Proof of Separability.

THE SPECIFICATION OF SECURE ISOLATION

To begin, we need some formal model of a `computer system'. In developing a verification technique based on information flow analysis, Feiertag and his co-workers used a conventional finite automaton for this purpose [19]. At each step, the automaton consumes an input token and changes its internal state in a manner determined by its previous state and by the value of the input token consumed. At the same time, it also emits an output token whose value is determined similarly. Each input and each output token is tagged with its security classification, and the specification of security (for the case of isolation) is that the production of outputs of each classification may depend only on the consumption of inputs of that same classification.

While this is an appropriate model for a computer system viewed at a fairly high level of abstraction, it is less realistic as a model for a security kernel. A kernel is essentially an interpreter - it acts as a hardware extension and executes operations on behalf of the regimes which it supports. The identity of the regime on whose behalf it is operating at each instant is not indicated by a tag affixed to the operation by some external agent, but is determined by the kernel's own state. Furthermore, this model does not capture the instruction sequencing mechanisms that seem, intuitively, to be of vital importance to the security of a kernel.

Accordingly, I shall adopt a slightly different model. I shall suppose that a system comprises a set S of states and progresses from one state to another under its own internal control. The transition from one state to the next will be determined by a NEXTSTATE function solely on the basis of the current state: if s is the current state, then its successor will be NEXTSTATE(s). Naturally, we can think of the value of NEXTSTATE(s) as being the result of `executing' some `operation' selected by a `control mechanism' - although I want to ignore these details for the moment and keep the initial model very austere and general.
Not only must a system have some means of making progress (modelled here by the NEXTSTATE function) but, in order to be interesting, it must interact with its environment in some way: it must consume inputs and produce outputs. For outputs, I shall suppose that certain aspects of the system's internal state are made continually visible to the outside world through a `window' - modelled here by an OUTPUT function: OUTPUT(s) is the visible aspect of state s. Real-world interpretations of this `window' are provided by the device registers of a PDP-11, for example. In contrast to outputs, which are continuously available, I shall suppose, initially, that inputs are presented to the system just once, right at the start. The INPUT function takes an input value, say i, as its argument and returns a system state as its result. The system, once given this externally determined initial state, thereafter proceeds from state to state under its own internal control. Because INPUT(i) is the initial state of the system, the value of i may be considered to comprise, not merely an input in the conventional sense, but also the `program' which determines the system's subsequent behaviour. The idea that all the input should be presented at one go is clearly artificial since it precludes genuine interaction between the system and its environment. Accordingly, I will extend the model later in order to overcome this objection - but readers should be aware that this extension causes something of a hiatus in the development.

Collecting the burden of the previous discussion together, and proceeding more formally, we may now say that a system or machine M is a 6-tuple

M = (S, I, O, NEXTSTATE, INPUT, OUTPUT)

where:

S is a finite, non-empty set of states,
I is a finite, non-empty set of inputs,
O is a finite, non-empty set of outputs, and
NEXTSTATE: S → S, INPUT: I → S, and OUTPUT: S → O are total functions.

(The reason for assuming total functions will be explained later.) The computation invoked by an input i ∈ I is the infinite sequence

COMPUTATION(i) = ⟨s_0, s_1, s_2, ..., s_n, ...⟩

where s_0 = INPUT(i) and s_{j+1} = NEXTSTATE(s_j), ∀ j ≥ 0. The result of a computation is simply the sequence of outputs visible to an observer, that is:

RESULT(i) = ⟨OUTPUT(s_0), OUTPUT(s_1), ..., OUTPUT(s_n), ...⟩.   (1)

As a notational convenience, I shall allow functions to take sequences as their arguments; the interpretation is that the function is to be applied pointwise to each element of the sequence. Thus, we may rewrite (1) as RESULT(i) = OUTPUT(COMPUTATION(i)).

Now, in order to be able to discuss security, we must assume that our machine is shared by more than one user (or else there is no problem to discuss). I shall identify users with the members of a set C = {1, 2, ..., m} of `colours'. Each user must be able to make his own, personal, contribution to the machine's input and be able to observe some part of the output that is private to himself. We can model this formally by supposing that both the sets I and O (of inputs and outputs) are Cartesian products of C-indexed families of sets. That is:

I = I^1 × I^2 × ... × I^m  and  O = O^1 × O^2 × ... × O^m.

A machine whose input and output sets are of this form will be said to be C-shared. Notice that the components of these products are distinguished by superscripted elements of C.
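The following small sketch (in Python, and purely illustrative - the class and parameter names are mine, and the truncation to a finite prefix is an artifact of the sketch, since computations in the model are infinite) may help fix the definitions of COMPUTATION and RESULT. A C-shared machine is then obtained simply by letting inputs and outputs be tuples indexed by colour.

    # A minimal sketch of the machine model M = (S, I, O, NEXTSTATE, INPUT, OUTPUT).
    # The three total functions are supplied as ordinary Python callables; since
    # COMPUTATION(i) is an infinite sequence, it is exposed as a generator and
    # RESULT is truncated to a finite prefix for inspection.

    from itertools import islice

    class Machine:
        def __init__(self, nextstate, input_fn, output):
            self.nextstate = nextstate   # NEXTSTATE: S -> S
            self.input_fn = input_fn     # INPUT:     I -> S
            self.output = output         # OUTPUT:    S -> O

        def computation(self, i):
            # COMPUTATION(i) = <s0, s1, s2, ...> with s0 = INPUT(i)
            s = self.input_fn(i)
            while True:
                yield s
                s = self.nextstate(s)

        def result_prefix(self, i, n):
            # the first n elements of RESULT(i) = OUTPUT(COMPUTATION(i))
            return [self.output(s) for s in islice(self.computation(i), n)]

    # Example: a trivial machine whose state is a number, which counts upwards
    # from its input and makes the whole state visible through the `window'.
    counter = Machine(nextstate=lambda s: s + 1,
                      input_fn=lambda i: i,
                      output=lambda s: s)
    assert counter.result_prefix(3, 4) == [3, 4, 5, 6]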
I shall use superscripts consistently for this purpose of distinguishing colour components; subscripts, on the other hand, will be used exclusively to identify the components of sequences. It is convenient to introduce a projection function, called EXTRACT, to pick out the individual components of members of Cartesian products of C-indexed sets. Thus, when c ∈ C, i ∈ I, and o ∈ O, EXTRACT(c,i) and EXTRACT(c,o) denote the c-coloured components of the input i and the output o, respectively. When a C-shared machine operates on an input i ∈ I, each user sees only his own component of the result. By the convention introduced previously, EXTRACT can be applied to sequences as well as to individuals, and so the component of the output visible to user c ∈ C is the sequence EXTRACT(c,RESULT(i)).

Now the simplest and most natural definition of secure isolation is surely that the results seen by each user should depend only on his own contribution to the input. Thus, we require, ∀ c ∈ C, ∀ i, j ∈ I:

EXTRACT(c,i) = EXTRACT(c,j) ⇒ EXTRACT(c,RESULT(i)) = EXTRACT(c,RESULT(j)).   (2)

Further consideration, however, indicates that this requirement is too strong for our purposes. The real systems, whose salient characteristics we are trying to capture in these definitions, work in a `time-sharing' fashion. That is, they first perform a few operations on behalf of one user, then a few more on behalf of another, and so on. While the system is operating on behalf of other users, user c should see no change in the outputs visible to him, but the length of time which he must wait before processing resumes on his behalf may well depend upon the activity of those other users. If user c' can influence the rate at which user c is serviced, and user c can sense his rate of service, then a communication channel exists between c' and c. This is one of Lampson's `covert channels' and, while these channels constitute a security flaw and should be countered, their complete exclusion is beyond the scope of simple security kernel technology. Covert channels are typically noisy and of low bandwidth and are normally countered by ad-hoc techniques intended to increase noise and lower bandwidth still further. The threat which I want to exclude completely is that of storage channels, and the trouble with (2) as a definition of security is that it requires the absence of covert as well as storage channels. There is little point in demanding the absence of covert channels when this cannot be achieved by the techniques under consideration. We should therefore weaken (2) so that only storage channels are forbidden, and this can be done by restricting attention to the sequences of changes in the values of the outputs, rather than the output sequences themselves. To this end, I shall introduce a CONDENSE function on sequences. The CONDENSEd version of a sequence is just the sequence with every subsequence of repeated values replaced by a single copy of that value. For example:

CONDENSE(⟨1,1,2,2,2,3,4,4,3,3,6,6,6⟩) = ⟨1,2,3,4,3,6⟩.

I shall give a precise definition of CONDENSE shortly, but first I want to press on with the definition of security. Using CONDENSE, the revised definition of secure isolation, suggested above, becomes: ∀ c ∈ C, ∀ i, j ∈ I,

EXTRACT(c,i) = EXTRACT(c,j) ⇒ CONDENSE(EXTRACT(c,RESULT(i))) = CONDENSE(EXTRACT(c,RESULT(j))).   (3)

Unfortunately, this definition is still too strong for our purposes.
Suppose that certain inputs cause one of the users to crash the machine, or to loop endlessly when he gets control. This constitutes a type of security breach called `denial of service'. Definition (3) requires the absence of such breaches. Like covert channels, denial of service should be excluded from a secure system - but, again like covert channels, the control of this type of security flaw is beyond the scope of the mechanisms being investigated here, for it concerns the fairness of scheduling procedures and the guaranteed termination of processes. Notice, however, that no attempt to defeat denial of service threats will succeed if any of the basic machine operations can fail to terminate. It is for this reason that I have required all the functions that comprise my machine definitions to be total. Clearly, this requirement must be verified during any practical application of techniques derived from these definitions. Cristian [20] discusses these issues in a wider context.

We can weaken the definition given by (3) so that it admits denial of service while still excluding storage channels if we require only that the condensed results should be equal as far as they go: we don't mind if one of them stops while the other carries on, so long as they are equal while they both survive. I shall, therefore, define a weaker form of equivalence on sequences, denoted by ≈, and defined by: X ≈ Y if and only if either X = Y or the shorter of X and Y is an initial subsequence of the other. My final definition of security is then: ∀ c ∈ C, ∀ i, j ∈ I,

EXTRACT(c,i) = EXTRACT(c,j) ⇒ CONDENSE(EXTRACT(c,RESULT(i))) ≈ CONDENSE(EXTRACT(c,RESULT(j))).   (4)

Given that we can accept (4) as a precise specification of secure isolation, our task now is to derive a series of testable conditions that are sufficient to ensure this property.

THE VERIFICATION OF SECURE ISOLATION

Before embarking on the main development, we need a precise definition of the CONDENSE function. Let X = ⟨x_0, x_1, x_2, ...⟩ be a sequence (either finite or infinite) and let INDICES(X) denote the set of indices appearing in X. Thus INDICES(X) = ℕ if X is infinite, and INDICES(X) = {0, 1, 2, ..., n} if X is finite and ends with x_n. Now define the total function ψ: INDICES(X) → ℕ by ψ(0) = 0 and, for j ≥ 0,

ψ(j+1) = ψ(j)      if x_{j+1} = x_j,
ψ(j+1) = ψ(j) + 1  otherwise.

ψ(j) is the number of changes of value in the sequence X prior to x_j; it is called the condenser function for X. The range of ψ is either the whole of ℕ, or else it is a finite set {0, 1, 2, ..., p} for some p ≥ 0. In either case, let R denote the range of ψ and let ψ̄ be a right inverse for ψ: that is, any function such that ψ(ψ̄(j)) = j, ∀ j ∈ R. (At least one such ψ̄ exists - ψ̄(j) is the index of an element of X that is preceded by j changes of value.) Then define CONDENSE(X) to be the sequence

CONDENSE(X) = ⟨x_{ψ̄(0)}, x_{ψ̄(1)}, x_{ψ̄(2)}, ...⟩.

Notice that this sequence is independent of the choice of ψ̄, for if φ and μ are any two functions such that ψ(φ(j)) = ψ(μ(j)) = j, then x_{φ(j)} = x_{μ(j)} even though it is not necessarily true that φ(j) = μ(j). (Proof by contradiction.)
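For finite sequences, CONDENSE and the equivalence ≈ are easily rendered executable. The following Python sketch is illustrative only (the infinite case is handled in the text via the condenser function ψ; here runs of repeated values are simply collapsed directly):

    from itertools import groupby

    def condense(xs):
        # replace every run of repeated values by a single copy of that value
        return [value for value, _run in groupby(xs)]

    def weakly_equal(xs, ys):
        # X is equivalent to Y iff X = Y or the shorter is an initial
        # subsequence of the other
        n = min(len(xs), len(ys))
        return xs[:n] == ys[:n]

    assert condense([1, 1, 2, 2, 2, 3, 4, 4, 3, 3, 6, 6, 6]) == [1, 2, 3, 4, 3, 6]
    assert weakly_equal([1, 2, 3], [1, 2, 3, 4])      # a prefix, so equivalent
    assert not weakly_equal([1, 2, 3], [1, 2, 4])

With these two functions, definition (4) can be tested directly on finite prefixes of RESULT sequences, which is convenient for experimenting with small examples.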
We shall need the following result concerning condensed sequences:

Lemma 1  Let X = ⟨x_0, x_1, x_2, ...⟩ and Y = ⟨y_0, y_1, y_2, ...⟩ be sequences (each either finite or infinite) and let ρ: INDICES(X) → INDICES(Y) be a total function such that ρ(0) = 0 and, ∀ j ≥ 0, x_j = y_{ρ(j)} and either ρ(j+1) = ρ(j) or ρ(j+1) = ρ(j) + 1. Then CONDENSE(X) ≈ CONDENSE(Y).

(Intuitively, this states that if Y is a `slightly' condensed version of X, then X and Y both condense to the same value.)

Proof:  Let ψ and η be condenser functions for X and Y respectively. (Pictorially, ρ carries each index of X to an index of Y holding the same value, while ψ and η condense X and Y to CONDENSE(X) and CONDENSE(Y) respectively.) The result is basically a consequence of the fact that ψ(j) = η(ρ(j)). Let ψ̄, η̄, ρ̄ be any three functions such that:

ψ(ψ̄(j)) = j, ∀ j in the range of ψ,
η(η̄(j)) = j, ∀ j in the range of η,
ρ(ρ̄(j)) = j, ∀ j in the range of ρ.

Notice that for any j in the range of ψ, the value of x_{ψ̄(j)} is independent of the choice of ψ̄. Similarly, the values of x_{ρ̄(l)} and y_{η̄(k)} are independent of the precise choice of ρ̄ and η̄. Now define μ(k) = ρ(ψ̄(k)), ∀ k in the range of ψ. A straightforward induction on the value of j gives ψ(j) = η(ρ(j)), ∀ j ∈ INDICES(X), and consequently

η(μ(k)) = η(ρ(ψ̄(k))) = ψ(ψ̄(k)) = k   (∀ k in the range of ψ).

It also follows from the identity ψ(j) = η(ρ(j)) that the range of ψ is contained in that of η and hence, for k in the range of ψ, η̄(k) is defined and we have η(μ(k)) = η(η̄(k)) = k. Thus y_{μ(k)} = y_{η̄(k)}. But y_{μ(k)} = y_{ρ(ψ̄(k))} = x_{ψ̄(k)}, and so x_{ψ̄(k)} = y_{η̄(k)} and the result follows (since these are the k'th elements of CONDENSE(X) and CONDENSE(Y), respectively). □

We now return to the main thread of the argument. Given that we accept (4) as a definition of security, how might we establish the presence of this property? Essentially, (4) stipulates that each user of a C-shared machine must be unaware of the activity, or even the existence, of any other user: it must appear to him that he has the machine to himself. It is a natural and attractive idea, then, to postulate a `private' machine which he does have to himself and to establish (4) by proving that user c is unable to distinguish the behaviour of the C-shared machine from that which could be provided by a private machine. I shall now make these ideas precise.

Let M = (S, I, O, NEXTSTATE, INPUT, OUTPUT) be a C-shared machine where I = I^1 × I^2 × ... × I^m and O = O^1 × O^2 × ... × O^m, and let c ∈ C. A private machine for c ∈ C is one with input set I^c and output set O^c, say M^c = (S^c, I^c, O^c, NEXTSTATE^c, INPUT^c, OUTPUT^c). Denote the computation and result functions of M^c by COMPUTATION^c and RESULT^c, respectively.
Then M^c is an M-compatible private machine for c if:

∀ i ∈ I, CONDENSE(EXTRACT(c,RESULT(i))) ≈ CONDENSE(RESULT^c(EXTRACT(c,i))).   (5)

That is (roughly speaking), the result obtained when an M-compatible private machine is applied to the c-component of a C-shared input must equal the c-component of the result produced by the C-shared machine M applied to the whole input. Obviously, we have:

Theorem 1  A C-shared machine M is secure if, for each c ∈ C, there exists an M-compatible private machine for c.

Proof:  Immediate from (4) and (5). □

Let us now consider how we might prove that a given private machine for c is M-compatible. Direct appeal to the definition (5) is unattractive since this involves a property of (possibly infinite) sequences and will almost certainly require a proof by induction. At the cost of restricting the class of machines that we are willing to consider, we can perform the induction once and for all at a meta-level and provide a more convenient set of properties that are sufficient to ensure M-compatibility. The restriction on the class of machines considered is a perfectly natural one: I shall consider only those C-shared machines which `time share' their activity between their different users. That is, each operation carried out by the C-shared machine performs some service for just one user. In particular, it simulates one of the operations of that user's private machine. The identity of the user being serviced at any instant is a function of the current state. Consequently I shall require a function COLOUR which takes a state as argument and returns the identity (colour) of the user being serviced (i.e. the user on whose behalf the next state transition is performed). I shall also require the notion of an `abstraction function' between the states of a C-shared machine and those of a private one.

Theorem 2  Let M = (S, I, O, NEXTSTATE, INPUT, OUTPUT) be a C-shared machine and COLOUR: S → C a total function. Let M^c = (S^c, I^c, O^c, NEXTSTATE^c, INPUT^c, OUTPUT^c) be a private machine for c ∈ C and Φ^c: S → S^c a total function such that, ∀ s ∈ S, ∀ i ∈ I:

1) COLOUR(s) = c ⇒ Φ^c(NEXTSTATE(s)) = NEXTSTATE^c(Φ^c(s)),
2) COLOUR(s) ≠ c ⇒ Φ^c(NEXTSTATE(s)) = Φ^c(s),
3) Φ^c(INPUT(i)) = INPUT^c(EXTRACT(c,i)), and
4) OUTPUT^c(Φ^c(s)) = EXTRACT(c,OUTPUT(s)).

Then M^c is M-compatible.

Proof:  Denote COMPUTATION(i) by P and COMPUTATION^c(EXTRACT(c,i)) by Q, where P = ⟨p_0, p_1, p_2, ...⟩
and Q = ⟨q_0, q_1, q_2, ...⟩. Next, denote EXTRACT(c,RESULT(i)) by X and RESULT^c(EXTRACT(c,i)) by Y, where X = ⟨x_0, x_1, x_2, ...⟩ and Y = ⟨y_0, y_1, y_2, ...⟩. By definition, x_j = EXTRACT(c,OUTPUT(p_j)) and y_j = OUTPUT^c(q_j), and we need to prove CONDENSE(X) ≈ CONDENSE(Y). Define ρ: INDICES(P) → INDICES(Q) by ρ(0) = 0 and, ∀ j ≥ 0,

ρ(j+1) = ρ(j)      if COLOUR(p_j) ≠ c,
ρ(j+1) = ρ(j) + 1  if COLOUR(p_j) = c.

By an elementary induction on j, it follows from parts 1), 2) and 3) of the statement of the theorem that Φ^c(p_j) = q_{ρ(j)} and hence, by part 4) of the statement, that OUTPUT^c(q_{ρ(j)}) = EXTRACT(c,OUTPUT(p_j)). That is, y_{ρ(j)} = x_j. Thus ρ (regarded now as a function from INDICES(X) to INDICES(Y)) satisfies the premises of Lemma 1 and so we conclude CONDENSE(X) ≈ CONDENSE(Y), and thereby the theorem. □

Let us take stock of our present position. We can prove that a C-shared machine M is secure by demonstrating the existence of an M-compatible private machine for each of its users. How might we demonstrate the existence of such M-compatible private machines? A highly `constructive' approach would be to actually exhibit a private machine for each user and to prove its M-compatibility using Theorem 2. The conditions of Theorem 2 are straightforward and easily checked - it may even be possible to automate much of this checking. On the other hand, the `constructive' aspect of the approach appears rather laborious: the construction of each private machine must be spelled out to the last detail.

A totally different approach would be a `pure' existence proof. We could seek conditions on M which are sufficient to guarantee, a priori, the existence of M-compatible private machines - without ever needing to actually construct these machines at all. The problem here is to find a suitable set of conditions: conditions which can be easily checked without being overly restrictive on the class of machines that can be admitted. I doubt that these incompatible requirements can be reconciled in any single set of conditions and so conclude that the search for a `pure' existence proof is not worthwhile.

Since both extreme positions (the fully constructive approach and the pure existence proof) have their drawbacks, it may prove fruitful to examine the middle ground. The idea will be to specify just the `operations' of the private machine constructively and to constrain the behaviour of the C-shared machine so that we can guarantee that the construction of the private machine could be completed. To do this, we shall need to elaborate our model once more.

The machines we have considered until now, though very general, are rather unstructured. I now want to constrain them a little by adding more detail to the method by which a machine proceeds from one state to the next. At present, this happens as an indivisible step, modelled by the NEXTSTATE function. In any real machine, the process is more structured than this: first an `operation' is selected by some `control mechanism' and then it is `executed' to yield the next state. We can model this by supposing the machine M to be equipped with some set OPS of `operations', where each operation is a total function on states; that is, OPS ⊆ S → S. Next we suppose the existence of a total function NEXTOP: S → OPS which corresponds to the `control mechanism'.
In each state s, NEXTOP(s) is the operation which is applied to s to yield the next state. Thus NEXTSTATE(s) = NEXTOP(s)(s). If machines are constrained to have this (more realistic) form, then the set OPS and the function NEXTOP may replace the monolithic NEXTSTATE function in their definition. We then have the following result (which guarantees the existence of a complete private machine, given a specification of only its operations and abstraction functions):

Theorem 3  Let M = (S, I, O, OPS, NEXTOP, INPUT, OUTPUT) be a (new style) C-shared machine and COLOUR: S → C a total function. Let c ∈ C and suppose there exist a set S^c of states and a set OPS^c ⊆ S^c → S^c of (total) operations on S^c, together with (total) abstraction functions Φ^c: S → S^c and ABOP^c: OPS → OPS^c, which satisfy, ∀ s, s' ∈ S, ∀ op ∈ OPS, ∀ i, i' ∈ I:

1) COLOUR(s) = c ⇒ Φ^c(op(s)) = ABOP^c(op)(Φ^c(s)),
2) COLOUR(s) ≠ c ⇒ Φ^c(op(s)) = Φ^c(s),
3) EXTRACT(c,i) = EXTRACT(c,i') ⇒ Φ^c(INPUT(i)) = Φ^c(INPUT(i')),
4) Φ^c(s) = Φ^c(s') ⇒ EXTRACT(c,OUTPUT(s)) = EXTRACT(c,OUTPUT(s')), and
5) COLOUR(s) = COLOUR(s') = c and Φ^c(s) = Φ^c(s') ⇒ NEXTOP(s) = NEXTOP(s').

Then there exists an M-compatible private machine for c.

Proof:  Define the function NEXTOP^c: S^c → OPS^c by NEXTOP^c(Φ^c(s)) = ABOP^c(NEXTOP(s)), ∀ s ∈ S such that COLOUR(s) = c (and arbitrarily elsewhere on S^c). Condition 5) ensures that NEXTOP^c is truly a (single-valued) function. If we define NEXTSTATE: S → S and NEXTSTATE^c: S^c → S^c by NEXTSTATE(s) = NEXTOP(s)(s), ∀ s ∈ S, and NEXTSTATE^c(t) = NEXTOP^c(t)(t), ∀ t ∈ S^c, then conditions 1) and 2) above ensure these definitions satisfy conditions 1) and 2) of Theorem 2. Next, define INPUT^c: I^c → S^c to be any total function which satisfies INPUT^c(EXTRACT(c,i)) = Φ^c(INPUT(i)), ∀ i ∈ I. Condition 3) above ensures that such a function exists and that it satisfies condition 3) of Theorem 2. Finally, define OUTPUT^c: S^c → O^c to be any total function satisfying OUTPUT^c(Φ^c(s)) = EXTRACT(c,OUTPUT(s)), ∀ s ∈ S. Condition 4) above ensures the existence of such a function and also that it satisfies condition 4) of Theorem 2. We have now constructed a private machine M^c = (S^c, I^c, O^c, NEXTSTATE^c, INPUT^c, OUTPUT^c) which satisfies all the conditions of Theorem 2 and so conclude that M^c is M-compatible. □

We now need to make a final adjustment to the model. The present model accepts input only once and we really want something more realistic than this. Real I/O devices do not initialize the system state, they modify it (by loading values into device registers, or by raising interrupts, for example). It is natural, therefore, that the functionality of INPUT should be changed to INPUT: S × I → S. We now need to decide when input occurs. On real machines, the state changes caused by I/O devices occur asynchronously, but not concurrently, with the execution of instructions. We could model this by supposing that the INPUT function is applied just prior to the NEXTSTATE function at each step.
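Before pursuing the treatment of input further, it may help to see conditions 1) to 5) of Theorem 3 in executable form. The following Python sketch is illustrative only - the function and parameter names are mine, it applies to the once-only-input model just described, and it is practical only for machines whose state, operation and input sets are small enough to enumerate. It simply checks each condition by brute force:

    from itertools import product

    def satisfies_theorem3(S, OPS, INPUTS, colour, nextop, input_fn, output,
                           extract, c, phi, abop):
        # 1) and 2): the effect of each operation on the colour-c abstraction
        for s, op in product(S, OPS):
            if colour(s) == c:
                if phi(op(s)) != abop(op)(phi(s)):
                    return False              # condition 1) violated
            elif phi(op(s)) != phi(s):
                return False                  # condition 2) violated
        # 3): only the c-component of the input may affect c's abstract state
        for i, i2 in product(INPUTS, INPUTS):
            if extract(c, i) == extract(c, i2) and phi(input_fn(i)) != phi(input_fn(i2)):
                return False
        for s, s2 in product(S, S):
            # 4): c's visible output is determined by c's abstract state
            if phi(s) == phi(s2) and extract(c, output(s)) != extract(c, output(s2)):
                return False
            # 5): the operation selected on c's behalf depends only on c's
            #     abstract state (operations are compared by identity here)
            if colour(s) == c == colour(s2) and phi(s) == phi(s2) \
                    and nextop(s) is not nextop(s2):
                return False
        return True

A check of this kind establishes, by Theorem 3, that an M-compatible private machine for c exists without that machine ever being constructed. To return now to the question of when input should occur: the simplest proposal, just made, was to apply INPUT immediately before each step.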
But with real machines, input does not always occur at every step: whether a device is able to deliver input at some particular instant may depend partly on its own state (whether it has any input available), partly on that of other devices (which may affect whether it can become the `bus master'), and partly on that of the CPU (which may lock out interrupts). We can model this by allowing the machine to make a non-deterministic choice whether or not to apply the INPUT function at each stage. (Actually, this non-determinism does not influence the choice of security conditions given below.) Thus, the machine is now understood to start off in some arbitrary initial state s_0 and to proceed from state to state by: first, possibly accepting input from its environment and, second, executing an operation. That is, if the current value of the input available from the environment is i and the current state of the machine is s, then its next state will be NEXTOP(s̄)(s̄), where s̄ = INPUT(s,i) if the input is accepted, and s̄ = s if it is not.

The problem with these changes is that the behaviour of the new model is not a simple variation on that of the old - it really is a new model altogether. For this reason, it is not possible to derive the conditions that ensure secure behaviour of the new model from those that have gone before; we have to assert them. This is the hiatus in our orderly progress which I hinted at earlier. However, because the new model is similar to its predecessor, and because we have now gained considerable experience in formulating conditions of this sort, I believe that we can be confident of asserting the correct properties. The conditions that I propose are just those of the statement of Theorem 3, but with its condition 3) replaced by the following pair of conditions which reflect the changed interpretation of the INPUT function (condition 3a is similar to the previous condition 3; condition 3b is new):

3a) EXTRACT(c,i) = EXTRACT(c,i') ⇒ Φ^c(INPUT(s,i)) = Φ^c(INPUT(s,i')),
3b) Φ^c(s) = Φ^c(s') ⇒ Φ^c(INPUT(s,i)) = Φ^c(INPUT(s',i)).

The reader may wonder why I did not use a model with realistic I/O behaviour right from the start. The reason is that I can find no transparently simple specifications of security (corresponding, for example, to equations (2) to (4)) for such a model. The definition of Proof of Separability would have to be asserted `out of the blue' and the goal of arguing its correctness would have been worse, rather than better, served.

PROOF OF SEPARABILITY

We have now derived the formal statement of the six conditions that constitute the security verification technique which I call `Proof of Separability'. Using `RED' as a more vivid name for the quantified colour c, these conditions may be expressed informally as follows:

1) When an operation is executed on behalf of the RED user, the effects which that user perceives must be capable of complete description in terms of the objects known to him.

2) When an operation is executed on behalf of the RED user, other users should perceive no effects at all.

3a) Only RED I/O devices may affect the state perceived by the RED user.

3b) I/O devices must not be able to cause dissimilar behaviour to be exhibited by states which the RED user perceives as identical.

4) RED I/O devices must not be able to perceive differences between states which the RED user perceives as identical.
5) The selection of the next operation to be executed on behalf of the RED user must depend only on the state of his regime.

Interpreted thus, I believe these six conditions have considerable intuitive appeal as a comprehensive statement of what must be proved in order to establish secure isolation between a number of users sharing a single machine. I hope the development that preceded their formulation has convinced the reader that they are the right conditions.

Of course, even the right conditions will be of no practical use if they are so strong that real systems cannot satisfy them. From this point of view, Proof of Separability suffers from a serious drawback: it is specific to the highly restrictive policy of isolation. Most real systems must allow some communication between their users, and the aim of security verification is then to prove that communication only takes place in accordance with a stated policy. It is actually rather easy to modify Proof of Separability so that it does permit some forms of inter-user communication: we simply relax its second condition in a controlled manner. For example, if the RED user is to be allowed to communicate information to the BLACK user through use of the WRITE operation, we just delete requirement 2) of Theorem 3 for the single case where COLOUR(s) = RED, c = BLACK, and op = WRITE. Recent work by Goguen and Meseguer [21], which allows the precise description of a very general class of security policies, may allow this ad-hoc technique to be given a formal basis.

An elementary example of the application of this verification technique (and a comparison with some others) may be found in [7]. Present work is aimed at the verification of a complete security kernel described by Barnes [22]. The work described here actually grew out of an attempt to formalize the informal arguments used to claim security for this kernel.

References

1. T.A. Berson and G.L. Barksdale, Jr., "KSOS - Development Methodology for a Secure Operating System," AFIPS Conference Proceedings, Vol. 48 (National Computer Conference), pp. 365-371 (1979).

2. B.D. Gold et al., "A Security Retrofit of VM/370," AFIPS Conference Proceedings, Vol. 48 (National Computer Conference), pp. 335-344 (1979).

3. A. Hathaway, "LSI Guard System Specification (Type A)," Draft, MITRE Corporation, Bedford, MA (July 1980).

4. J.P.L. Woodward, "Applications for Multilevel Secure Operating Systems," AFIPS Conference Proceedings, Vol. 48 (National Computer Conference), pp. 319-328 (1979).

5. S.R. Ames, Jr., "Security Kernels: Are they the Answer to the Computer Security Problem?," presented at the 1979 WESCON Professional Program, San Francisco, CA (September 1979).

6. J.M. Rushby, "The Design and Verification of Secure Systems," Proc. 8th ACM Symposium on Operating System Principles, pp. 12-21, Asilomar, CA (December 1981). (ACM Operating Systems Review, Vol. 15, No. 5).

7. J.M. Rushby, "Verification of Secure Systems," Internal Report SSM/9 (August 1981).

8. J.P. Anderson, "Computer Security Technology Planning Study," ESD-TR-73-51 (October 1972). (Two volumes).

9. C.R. Attanasio, P.W. Markstein, and R.J. Phillips, "Penetrating an Operating System: a Study of VM/370 Integrity," IBM Systems Journal, Vol. 15, No. 1, pp. 102-116 (1976).

10. B. Hebbard et al., "A Penetration Analysis of the Michigan Terminal System," ACM Operating Systems Review, Vol. 14, No. 1, pp. 7-20 (January 1980).

11. R.R. Linde, "Operating System Penetration," AFIPS Conference Proceedings, Vol. 44 (National Computer Conference), pp. 361-368 (1975).

12. A.L. Wilkinson et al., "A Penetration Study of a Burroughs Large System," ACM Operating Systems Review, Vol. 15, No. 1, pp. 14-25 (January 1981).
13. G.H. Nibaldi, "Proposed Technical Evaluation Criteria for Trusted Computer Systems," M79-225, MITRE Corporation, Bedford, MA (1979).

14. G.J. Popek and D.A. Farber, "A Model for Verification of Data Security in Operating Systems," Communications of the ACM, Vol. 21, No. 9, pp. 737-749 (September 1978).

15. R.J. Feiertag, "A Technique for Proving Specifications are Multilevel Secure," CSL-109, SRI International, Menlo Park, CA (January 1980).

16. J.K. Millen, "Security Kernel Validation in Practice," Communications of the ACM, Vol. 19, No. 5, pp. 243-250 (May 1976).

17. B.W. Lampson, "A Note on the Confinement Problem," Communications of the ACM, Vol. 16, No. 10, pp. 613-615 (October 1973).

18. Ford Aerospace and Communications Corporation, "KSOS Verification Plan," WDL-TR-7809, Palo Alto, CA (March 1978).

19. R.J. Feiertag, K.N. Levitt, and L. Robinson, "Proving Multilevel Security of a System Design," Proc. 6th ACM Symposium on Operating System Principles, pp. 57-65 (1977).

20. F. Cristian, "Robust Data Types," Technical Report 170 (1981). (To appear in Acta Informatica).

21. J.A. Goguen and J. Meseguer, "Security Policies and Security Models," Proc. 1982 IEEE Symposium on Security and Privacy, pp. 11-20, IEEE Computer Society, Oakland, CA (April 1982).

22. D.H. Barnes, "Computer Security in the RSRE PPSN," pp. 605-620, Online Conferences (June 1980).