The Challenges of Insider Misuse

Peter G. Neumann, SRI Computer Science Lab

Post-workshop version, 23 August 1999

Prepared for the Workshop on Preventing, Detecting, and Responding to Malicious Insider Misuse
16-18 August 1999, at RAND, Santa Monica, CA

This document is concerned with insider misuse. In particular, it characterizes the challenges of detecting and responding to insider misuse and analyzes how insider misuse differs from penetrations and other forms of misuse by outsiders. It considers primarily malicious insider misuse, but also examines what can correspondingly be done to address accidental insider misuse. It also considers the role of prevention insofar as it affects detection and response capabilities. It incidentally challenges a few of the common default assumptions, although that is only a byproduct of a secondary attempt to put the insider-misuse detection problem into a broader context in order to decompose the overall problem.

This document is organized according to the following questions that it addresses:

1. How does insider misuse differ from outsider misuse?  
    This question is considered with respect to
      1.1. Classes of insiders
      1.2. Classes of insider misuse
      1.3. Threats, vulnerabilities, and risks
      1.4. Exploitations of vulnerabilities
      1.5. Specification of policy for data gathering and monitoring
      1.6. Prevention of insider misuse
      1.7. Detection, analysis, and identification of misuse
      1.8. Desired responses

2. What is a sensible decomposition of the problem of effective
    insider detection and response?
      2.1. Development stages
      2.2. Operational aspects
      2.3. Security implications
      2.4. Anomaly and misuse detection in context
      2.5. Extended profiling including psychological and other factors
      2.6. Responses to detected insider misuse
      2.7. Important observations

3. What needs to be done that is not already being done, in terms of
    short-term and long-term R&D needs?
      3.1. General research and development directions
      3.2. Specific short-term research and development directions
      3.3. Specific long-term research and development directions
      3.4. Hierarchical and distributed correlation in particular

4. What overall conclusions are relevant to insider misuse?


This document does not need to justify the enormous importance of the insider threat. That should be axiomatic for this audience. Suffice it to say that there are many applications in which there are essentially no outsiders, and all threats necessarily come from people who must be considered to be trusted insiders of one form or another. The recent Los Alamos case, moles, bank fraud, and a long list of abuses by authorized users -- e.g., of IRS, law enforcement, and DMV databases -- speak to applications in which insiders have historically been important concerns in some organizations. Even in applications where outsiders are a problem, insiders may still represent huge risks. It must be remembered that today's COTS operating systems, DBMSs, end-to-end crypto systems, firewalls, and so on are generally not oriented toward preventing insider misuse, and that in many cases differential access controls are either nonexistent or are not used. If commercial multilevel-secure systems existed in the real world with any reasonable assurance, they might retard insider misuse; instead, many systems with sensitive information operate with all users and all information at a single level (``system-high''). Similarly, anomaly and misuse detection cannot prevent insider misuse, and commercial systems typically cannot even detect it. However, that technology needs to be able to detect insider misuse better than what can be done today, and also needs radically improved capabilities for responding accordingly.


1. How Does Insider Misuse Differ from Outsider Misuse?

1.1. Classes of Insiders

The differences among users may involve physical presence and logical presence. For example, there may be logical insiders who are physically outside, and physical insiders who are logically outside. For present purposes, we focus primarily on logical insiders and physical insiders.

Clearly there are different degrees of logical insiders, relative to the nature of the systems and networks involved, the extent to which authentication is enforced, and the exact environment in which a user is operating at the moment. A user may be an insider at one moment and an outsider at another. A user may also be an insider within one operational frame of reference and an outsider in another.

For example, if a system supports multilevel security (or multilevel integrity, or even some form of multilevel availability or multilevel survivability, as defined in my ARL survivability report), then the presence of compartments suggests that a user can be an insider in one compartment but an outsider in another compartment, or an insider at Top Secret but an outsider with respect to all compartments. In that a user may operate at different levels and compartments at different times, the concept of insider is both temporal and spatial. In some sense, all users of a Top Secret system could be called insiders, although they would appear to be outsiders relative to those others who were cleared into a particular compartment. Thus, everything is relative to the frame of reference -- what the user is trusted to be able to do, what privileges are required, what data or programs are being referenced, and whether the user authentication is strong enough to add credibility to user identities.

With respect to conventional operating systems, database management systems, and applications functioning as single-level systems (even if system high), at one extreme there are ordinary insiders who have passed the login authentication requirements; at the other extreme, there are users who are authorized to become superuser or an equivalent holder of ultraprivileges. However, consider a system in which the superuser privileges have been partitioned starkly and no one user holds all of the privileges, and where the granted privileges are insufficient to gain possession of all other privileges. (The iterative closure of static privileges augmented by privilege-changing privileges must be considered whenever we consider what privileges are actually attainable by a given user or group of collaborating users.) In that rather ideal case, we would have no complete insiders, but many different types of relative insiders. Unfortunately, in the absence of meaningfully secure systems and differential access controls that are properly defined, properly implemented, and properly administered, that ideal must remain more or less of a fantasy.
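
The iterative-closure computation mentioned above can be sketched as a simple reachability pass. This is only an illustrative model, not an implementation of any real privilege system; the privilege names and the ``confers'' relation are hypothetical:

```python
def attainable(static_grants, confers):
    """Compute the full set of privileges actually attainable by a user.

    static_grants: set of privileges held directly.
    confers: dict mapping a privilege to the set of further privileges
    its holder can acquire (the privilege-changing privileges).
    """
    closure = set(static_grants)
    frontier = list(closure)
    while frontier:
        p = frontier.pop()
        for q in confers.get(p, ()):   # privileges reachable via p
            if q not in closure:
                closure.add(q)
                frontier.append(q)
    return closure
```

In the partitioned-superuser ideal described above, no single user's closure would cover the entire privilege set; checking that property amounts to running this closure for each user (and for each plausible coalition of colluding users) and comparing against the full set.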

Thus, we are confronted with a wide potential range of insiders, and conclude that the notion of ``insider'' is necessarily multidimensional. For the rest of Section 1, we consider insiders generically, and do not attempt to draw fine distinctions among different kinds of insiders. We assume that, relative to a particular computational framework, insiders are users who have been authenticated to operate within that framework; where necessary, we qualify that to include reference to the authorized privileges that are associated with a particular authentication. However, in several cases (particularly in Sections 1.2.2, 1.4.2, and 1.7) we make a distinction among insiders -- for example, between ordinary users on one hand and system administrators or other users with extreme privileges on the other hand.

1.2. Classes of Insider Misuse

Associated with the range of meanings of the term ``insider'' is a corresponding variety of classes of insider misuse.

One immediate categorization of insider misuse involves intent, as in intentional versus accidental misuse. Even among intentional misuse, there is a wide range of possible actions -- from outright malice to relatively benign annoyance, with many degrees in between.

A second categorization involves the evidential nature of the misuse, that is, whether the misuse is intended to be detected or hidden. System and network denials of service are likely to be overt, in that they are readily obvious once they are enabled. On the other hand, insertion of stealthy Trojan horses that act as sniffers or that quietly leak information are typically intended to be covert, and their desired purpose may be to remain undetected for long periods.

Although the focus of the Insider Misuse workshop is primarily on intentionally malicious misuse, it is generally unwise to ignore accidental misuse. For example, the apparent success of what might be considered accidental misuse can easily inspire subsequent malicious misuse. Furthermore, it is generally unwise to ignore stealthy forms of misuse. To the extent that detecting accidental misuse can be dealt with by the same mechanisms that are used for intentional misuse, accidental misuse need not be treated separately. To the extent that stealthy misuse can be dealt with by the same mechanisms that are used for more obvious misuse, stealthy misuse need not be treated separately. However, remember that seemingly accidental misuse may in fact be intentional misuse in disguise, and stealthy misuse may be extremely dangerous; thus, it is potentially risky to ignore them altogether.

1.3. Threats, Vulnerabilities, and Risks

There are clearly differences in the nature of the threats. However, it must be remembered throughout that once an apparent outsider succeeds in penetrating a system boundary, he/she effectively becomes an insider from a mechanistic point of view. Although an insider might conceivably have greater knowledge of the environment, and may thereby present greater threats (see Section 1.3.2), the differences between insider threats and outsider threats are not stereotypically characterizable. Because of the very flaky operating system and networking infrastructures today, outsiders have little difficulty in carrying out nasty denial of service attacks and destructive integrity attacks, and in penetrating many systems. Virtual private networks may tend to complicate the situation, and demand special attention. Nevertheless, if a system complex has meaningful authentication, many of the outsider threats can be made much less riskful, whereas most of the insider threats clearly remain. Also, firewalls that are well-designed, well-implemented, and well-configured can help somewhat, but today are also largely vulnerable to many attacks (such as active pass-through attacks using http, Java, Active-X, PostScript, etc.). The presence of meaningful additional authentication for insiders could be useful in inhibiting masquerading (although the use of biometrics is probably premature in noncritical applications). In the presence of extensive monitoring, robust authentication may also help discourage misuse -- especially if the identity of the perpetrator can be established and traced reliably. This may be especially relevant to insider misuse, if the true identity of the apparent user can be unequivocally determined (subject to exploitations of operating-system vulnerabilities).

Insider threats are conceptually different, and it is useful to consider them in their own right. In today's systems, insider vulnerabilities and outsider vulnerabilities are both out of control. Serious efforts are needed to improve the security and reliability of systems and networks, and indeed to improve their overall survivability in the face of a wide range of adversities. In the presence of good external security, insider risks may be much more serious than outsider risks. In the presence of meaningfully precise access-control policies and meaningfully secure differential fine-grained access controls, the nature of the insider misuse threats would change. This document assesses the situation as it exists today, how that situation might change, and what future research is essential to improve both detectability and response.

1.3.1. Threats

The following table itemizes some of the threats that appear to differ from outsiders to insiders. It ignores threats that are common to both outsider and insider perpetrators, such as carrying out personal attacks on individuals or corporations through an anonymous e-mail remailer, sending spams, creating monster viruses from a toolkit, creating risky mobile code, tampering with existing mobile code, intentionally crashing a system or component (although there are potentially mechanistic differences in causing crashes between insiders and outsiders), etc. Once again, remember that an outsider who has successfully penetrated a system has effectively become an insider, although the personal knowledge base may be different. Nevertheless, to simplify the table, penetrating outsiders are logically considered as outsiders unless they are knowledgeable enough to appear indistinguishable in a Turing-test sense from the insiders as whom they are masquerading -- as might be the case with disgruntled recent ex-employees.

Threats to Various Attributes of Security, According to Misuser Type
Attribute         Outsiders                 Insiders
---------------   -----------------------   -------------------------
Confidentiality   Unencrypted password      National security leaks
                  capture or compromise     and other disclosures;
                  of encrypted passwords    access to crypto keys
Integrity         Creating Trojan horses    Putting Trojan horses 
                  in untrusted components,  or trapdoors in trusted
                  Word macro viruses, and   and untrusted components;
                  untrustworthy Web code    x-in-the-middle attacks

Denials of        External net attacks,     Disabling of protected
Service           flooding, physical harm   components, exhaustion of 
                  to exposed equipment      protected resources

Authentication    Penetrations, attacks     Misuse of intended authority
                  on PKI/authentication     by over-authorized users,
                  infrastructures,          usurpation of Superuser,
                  war dialing               access to root keys

Accountability    Masquerading, attacks     Hacking beneath the audit
                  on accounting             trails, altering the logs,
                  infrastructures           compromising misuse
                                            detection

Other misuses     Planting pirated          Pornography, playing games,
                  software on the Web       running a covert business,
                                            insider trading

In systems with weak authentication, there may be very naive outsiders who, when they subsequently appear as insiders, are very obviously distinguishable from experienced insiders. On the other hand, there may also be very naive insiders. Nevertheless, the knowledge used to perpetrate misuse may be of value to the analysis associated with detected misuses.

1.3.2. Knowledge Required and Knowledge Used

At least superficially, some differences typically exist in the knowledge available, the knowledge required for various types of misuse (or already at hand without further study, experimentation, or effort), and the knowledge actually used in perpetrating insider misuse. Once again, those outsiders who are Turing-equivalent to insiders (as noted above) are considered as insiders.

For example, insiders might have greater knowledge of what to look for in terms of sensitive information, and of particularly vulnerable programs in which to plant Trojan horses. In system-high systems, insiders are already likely to be gratuitously granted information to which they do not necessarily need access. In compartmented multilevel-secure systems, users would have to be cleared, although that works both ways: a user not entitled to access a particular compartment is effectively an outsider with respect to that compartment, and indeed may not even know of the existence of the compartment if the system is properly implemented and operational procedures are properly enforced. But users cleared into that compartment clearly have an enormous advantage over users who are not.

In the following table, we make a distinction among outsiders, ordinary insiders, and privileged insiders such as system administrators (who tend to be totally trusted), recognizing that we are lumping together users with common logical characteristics.

Typical Knowledge Gained and Used According to Misuser Type
Outsiders              Ordinary Insiders   Privileged Insiders
--------------------   -----------------   -------------------
Direct info and        Experience gained   Deep knowledge
inferences from        from normal use     from experience;
web info (such as      and experiments;    normal access to
penetration scripts),  familiarity with    Superuser status;
help files, social     sensitive files,    ability to create
engineering; chats &   project knowledge;  invisible accounts;
BBoards helpful        collusion easy      no collusion needed

1.4. Vulnerabilities and Risks

There are also differences in how vulnerabilities can be exploited, and the risks that may ensue.

1.4.1. Exploitations of Vulnerabilities

There is a likelihood that an experienced insider can operate very close to normal expected behavior (especially if engaged in a long-term effort at statistical-profile retraining), which would be more difficult to detect. This increases the need for a variety of analysis techniques and correlation (see below).
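
The retraining risk can be illustrated with a toy statistically aged profile. This is an invented sketch -- the aging factor, the activity measure, and the thresholds are all illustrative, and this is not the NIDES/EMERALD algorithm -- but it shows why a patient insider who shifts behavior gradually never exceeds the deviation threshold that a sudden shift of the same total size would trip:

```python
class AgedProfile:
    """Toy anomaly profile: exponentially aged mean/variance of a
    scalar activity measure, scoring each new observation in
    standard-deviation units before folding it into the profile."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha   # aging factor: weight given to new data
        self.mean = None
        self.var = 1.0

    def score(self, x):
        if self.mean is None:          # first observation seeds the profile
            self.mean = x
            return 0.0
        dev = abs(x - self.mean) / max(self.var, 1e-9) ** 0.5
        a = self.alpha                 # profile retrains on every observation
        self.mean = (1 - a) * self.mean + a * x
        self.var = (1 - a) * self.var + a * (x - self.mean) ** 2
        return dev

# Activity creeping up by 1% of baseline per day retrains the profile
# faster than it raises the deviation score; the same total change made
# abruptly scores an order of magnitude higher.
slow = AgedProfile()
drift_scores = [slow.score(10 + day * 0.1) for day in range(200)]
fast = AgedProfile()
fast.score(10)
jump_score = fast.score(30)
```

Here max(drift_scores) stays below any plausible alarm threshold while jump_score is far above it, which is why correlation across longer time scales and multiple analysis techniques is needed.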

Today, we have pervasive deficiencies in authentication, operating system security, network security, and intelligently deployed access controls. Given the absurdly poor state of the art in defensive security, the differences between outsider exploitations and insider exploitations may be less relevant than they would be in the presence of good security.

Insider exploitations would become conceptually different in the presence of better system security, but would still present a problem.

1.4.2. Potential Risks Resulting from Vulnerabilities

The potential risks may vary significantly from outsiders to ordinary insiders to highly privileged system administrators. However, it is in itself risky to give too much credence to these differences, for several reasons.

Thus, it is in general a huge mistake to conclude that outsiders cannot be as destructive as insiders, or vice versa. There are substantive differences in the potential risks.

           Potential Severity of Risks Incurred
Outsiders              Ordinary Insiders    Privileged Insiders
--------------------   ------------------   ---------------------
Very serious in        Potentially very     Extremely serious,
badly designed and     serious unless       even with strong
poorly implemented     strong separation    separation of roles,
systems, perhaps less  of roles or MLS;     and MLS levels and
serious with good      system-high systems  compartments; root
user authentication    are inherently risky  privileges risky

1.5. Specification of Policy for Data Gathering and Monitoring

COTS products for misuse detection tend to assume a collection of known vulnerabilities whose outsider exploitations are associated with known policy violations. Existing COTS intrusion-detection products are aimed primarily at penetrators, not at insiders. Policies for insider misuse tend to be strongly application-domain specific, and should dictate what is to be monitored, and at what layers of abstraction. Thus, it is essential to have a well-defined policy that either explicitly defines insider misuse, or that explicitly defines proper behavior and thereby implicitly defines insider misuse by exclusion.
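
A minimal sketch of the second kind of policy -- proper behavior defined explicitly, misuse defined implicitly by exclusion. The roles and actions are hypothetical examples; a real policy would be far more fine-grained and application-specific:

```python
# Explicit definition of proper behavior, per role.
ALLOWED = {
    "clerk": {"read_account", "update_account"},
    "admin": {"read_account", "update_account", "create_account"},
}

def is_misuse(role, action):
    """Misuse is defined by exclusion: any action outside the
    explicitly allowed set for the role counts as misuse.
    Unknown roles have an empty allowed set."""
    return action not in ALLOWED.get(role, set())
```

Under such a policy, the monitoring question reduces to observing actions at the layer of abstraction at which ALLOWED is stated, rather than hunting for known attack signatures.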

A much better understanding of the application domain is needed for monitoring users for potential insider misuse. Also, more detailed data may need to be collected. Furthermore, when someone is suspected of devious behavior, it may be desirable to go into a fine-grain monitoring mode (such as capturing key-strokes).

Audit trails in existing systems are not necessarily adequate (although Solaris' BSM is more useful than many others). Today, commercial systems for intrusion detection rely on system audit trails, application logs, network packet sniffing, and occasionally physical sensors for their inputs. Similar sources of input data are necessary for detecting insider misuse. In either case, the analysis systems need to obtain some knowledge of the perpetrator if they are to trace the detected misuses back to their initiators. In closed environments, there can be much better user authentication than in open environments, although masquerading is still possible in many operating systems and application environments. Where that is the case, the actual choices of data to be gathered for insider-misuse detection could differ somewhat from those of intrusion detection. However, the existence of logical insiders who are physically outside and logical outsiders who are physically inside may make such distinctions undesirable. The necessary use of encryption in highly sensitive systems may also complicate the gathering of information on potential insider misuse.

1.6. Prevention of Insider Misuse

I am a firm believer in prevention where possible, and then detection where prevention is not possible. For example, the Multics experience beginning in 1965 should have taught everyone a lot about the importance and value of prevention -- among other things, defining what is meant by security, isolating privileged execution domains from less privileged executions (with 8 rings of protection), isolating one user from another while still permitting controlled sharing (via access-control lists), access-checked dynamic linking and revocation, and using some sensible software-engineering concepts.

If there is no meaningful security policy to begin with, then the task of detecting and identifying deviations from that policy is very difficult. If there is no essential prevention in systems and networks, then even a meaningful security policy could not be enforced. With respect to insiders, any enterprise operating within a system-high approach is inherently implying that there is no such thing as ``insider misuse'', because everything is permitted to all insiders. Thus, to have any hope of detecting insider misuse, we first need to know what constitutes misuse. Ideally, it would then be much better to prevent it rather than to have to detect it after the fact.

1.7. Detection, Analysis, and Identification of Misuse

On the other hand, in the absence of good prevention, it is of course desirable to detect known defined types of misuse (e.g., through rule-based detection) or otherwise unknown types of anomalous misuse (e.g., seemingly significant deviations from expected normal behavior). The latter type of detection could be particularly important in identifying early-warning signs.
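
The two styles of detection can be contrasted in a toy sketch. The users, roles, signature set, and history data are all invented, and the "anomaly" test here (an action the user has never performed) is a deliberately crude stand-in for a real statistical deviation measure:

```python
# Rule-based detection: known, defined types of misuse.
KNOWN_BAD = {("clerk", "disable_logging"), ("clerk", "read_audit_log")}

def rule_alerts(events):
    """Flag audit events matching known misuse signatures.
    events: iterable of (user, role, action) records."""
    return [e for e in events if (e[1], e[2]) in KNOWN_BAD]

def anomaly_alerts(events, history):
    """Flag actions a user has never performed before -- a crude
    stand-in for 'significant deviation from expected behavior'.
    history: dict mapping user -> set of previously seen actions."""
    return [e for e in events if e[2] not in history.get(e[0], set())]
```

A rule can only fire on misuse someone thought to write down; the anomaly path is what offers early-warning signs of hitherto unknown misuse, at the cost of false alarms.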

Because there are potential differences in the data that may need to be collected, there may be some differences in the approach to detection of misuse among the different types of misuse, depending on the relative roles of insiders and insider misuse. If insiders can exist only within local confines (as in the case of a multilevel security compartment in a system with no remote users), it may be unnecessary to collect packets and other network data -- which itself constitutes a potential security risk. On the other hand, if privileged insiders are also able to access their systems remotely (for example, telnetting from outside) and are in some sense then indistinguishable from outsiders at least geographically or from their external Internet presence, then networking data may also be relevant. Clearly, the presence of strong authentication has an impact on carrying out insider misuse detection.

Similarly, there may be differences in how long input data for an anomaly and misuse platform needs to be retained. If the intent is to gather sufficient information to prosecute insider misusers, then the situation is quite different from insider misuse detection whose aim is to merely detect the presence of misusers so that other extrinsic methods (such as wiretaps, cameras, and physical surveillance) can be invoked. (These differences may also apply to outsiders -- although the relative priorities are likely to be different.) In general, long-term retention of raw audit logs and of digested (analyzed) data is recommended.

Detection techniques for insider misuse and for outsider misuse are not intrinsically different, in the sense that existing techniques can be applied to both; the same basic mechanisms for detection and analysis can generally be used in both cases. However, because existing COTS analysis systems do not tend to address insider misuse, some new techniques may be useful, particularly for interpretation and response.

For identification of hitherto unknown modes of insider misuse, some effort is required to apply statistical techniques. (Note that COTS analysis tools are typically not capable of detecting unknown forms of misuse -- unless the misuse serendipitously happens to trigger something resembling a known attack.) In the case of EMERALD, the basic statistical techniques are directly applicable to detecting unknown insider misuse without modification of the statistical analysis engine.

For identification of the intent of the misuse, similar techniques can be useful. However, this is an area where greater research is needed.

As noted above, the COTS marketplace for intrusion detection is aimed primarily at detecting known attacks by outsiders. The idea of rapidly deploying an analysis system is meaningful for a given firewall, for a given operating system, or for a given application for which a set of rules has already been written. Insider attacks tend to be much more domain specific, and thus the deployment of a system for insider analysis requires some detailed analysis of the threats and risks, some skilled implementation of rules, judicious setting of statistical parameters, and some further work on analysis of the results. This is not a straightforward off-the-shelf installation process. It is also very unwise to talk about intrusion detection in the context of insider misuse. The applicable term should be anomaly and (insider) misuse detection, as we have been very careful to do here. Please make this distinction in general, but especially whenever you are talking about insider misuse!

In a multilevel compartmented system/network environment, in which there are presumably no outsiders (each insider is trusted to operate in certain compartments) and in which the insider threat predominates, monitoring and analysis take on multilevel security implications, with many opportunities for covert channels. Monitoring can be done compartmentally, but aggregation and higher-level or cross-compartment correlation on an enterprise-wide basis present serious potential multilevel security problems. To our knowledge, no COTS products are currently ready for this.

More emphasis is needed on the not-well-known forms of misuse, on interpretation of detected anomalies, and hierarchical and distributed correlation. Much more emphasis is needed on tools to aid in the deployment and configuration of analysis tools for domain-specific applications. Serious effort should also be devoted to multilevel-secure analysis (and response).

1.8. Desired Responses to Detected Anomalies and Misuses

In many cases of outsider attacks (particularly denials of service), it is more important to stave off the attacks than to let them continue. In other cases, it may be appropriate to let the attacks continue but to somehow confine their effects. A similar range of responses exists for insiders. In some cases of insider misuse (particularly where the perpetrator has been identified and prosecution is anticipated), it may be particularly important to detect the misuse, to allow it to continue (perhaps under special system constraints and extended data gathering such as key-stroke capture), and monitor it carefully -- without giving away the fact that detailed surveillance is being done.

Thus, there are clearly differences in the desired responses that may be considered once misuses have been detected. However, the full range of possible responses may also be applicable to both insiders and outsiders -- although possibly in different degrees in the two cases. In any case in which continued misuse is allowed, serious risks exist that undetected contamination may occur and remain subsequently. This must be factored into any dynamic strategies for real-time response to detected misuse.


2. Decomposition of the Problem of Effective Insider Detection and Response

This section looks at insider misuse in the context of the bigger picture of security. It considers the development process (Section 2.1), operational aspects (2.2), security implications (2.3), and the effects of those issues on anomaly and misuse detection (2.4) and response (2.6). It also specifically addresses the importance of user profiling and the desirability of extending it to include psychological factors (2.5).

2.1. Development Stages

2.2. Operational Aspects

2.3. Security Implications

2.4. Anomaly and Misuse Detection in Context

2.5. Extended Profiling Including Psychological and Other Factors

It is clear from the above discussion that detecting insider misuse must rely heavily on user profiling of expected normal behavior (although some use can be made of application-specific rules). Efforts to date have concentrated on relatively straightforward statistical measures, thresholds, weightings, and statistical aging of the profiles, independent of particular users. Considering that much is already known about insiders, it would seem highly desirable to include additional information in the profiles.

For example, our original conception of the NIDES resolver many years ago included the use of information about the physical whereabouts of each user, based on various different kinds of information that might help to direct the detection of potential misuse and interpret the results.

The combination of physical whereabouts and expected whereabouts could also be used to detect stolen badges or stolen authentication information of people who are highly trusted.
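
One way that correlation might work, sketched with hypothetical badge-reader and login data. The data model is invented for illustration; a real resolver would weigh many more information sources and time windows:

```python
def suspicious_logins(logins, badge_location):
    """Correlate logical presence with physical whereabouts.

    logins: list of (user, terminal_location) for current sessions.
    badge_location: dict mapping user -> last badge-reader location.
    A login from somewhere other than where the badge last registered
    the user is flagged for closer monitoring -- possibly a stolen
    badge or stolen authentication information."""
    flagged = []
    for user, where in logins:
        known = badge_location.get(user)
        if known is not None and known != where:
            flagged.append((user, where, known))
    return flagged
```

Note that a flag here is grounds for intensified monitoring, not proof of misuse: legitimate remote access by a logical insider who is physically outside produces exactly the same signature.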

Existing statistical profiling (e.g., in NIDES and EMERALD) already has the capability of monitoring individualized computer activities, such as which editors the user prefers, which programming languages, which mail environment, which variants of commands, and so on. This has been done effectively in the past.

Personal on-line behavior can also be profiled statistically by extending the analysis information that is recorded, such as with whom an individual tends to exchange e-mail, which Web sites are visited regularly, and even what level of sophistication the user appears to exhibit. This is only a minor extension of what can be done today in EMERALD, for example.
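
A crude sketch of such a frequency profile, comparing a session's command mix against a user's historical mix with a simple L1 distance. The commands are illustrative, and deployed systems such as NIDES and EMERALD use far richer statistics (thresholds, weightings, aging) than this:

```python
from collections import Counter

def mix(commands):
    """Relative frequency of each command in a list."""
    total = len(commands)
    return {cmd: n / total for cmd, n in Counter(commands).items()}

def profile_distance(history, session):
    """L1 distance between two command-frequency profiles.
    Ranges from 0.0 (identical mixes) to 2.0 (disjoint mixes)."""
    h, s = mix(history), mix(session)
    return sum(abs(h.get(k, 0.0) - s.get(k, 0.0)) for k in set(h) | set(s))
```

The same mechanism extends directly to the measures named above -- preferred editors, mail environments, e-mail correspondents, regularly visited Web sites -- since each is just another categorical frequency profile.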

There are also biological factors that might be monitored, such as how often a user gets up to walk around, go to the washroom, or go out of the building for a smoke (activities which themselves could be monitored by physical access controls!).

In environments in which monitoring key strokes is not considered intrusive, some effort has been made to monitor key-stroke dynamics. This approach tends to be much less reliable in general, particularly when confronted with network and satellite delays. Also, if you are typing with one hand because you are drinking a cup of hot coffee with the other hand, your typing dynamics go all to hell.

In addition to providing a real-time database relating to physical whereabouts, and extending statistical profiling to accommodate subtle computer usage variants, it would also be appropriate to represent certain external information regarding personal behavior, such as intellectual and psychological attributes.

As an example of an intellectual attribute, consider writing styles. There are already a few tools for analyzing natural-language writing styles. (Very few people write like PGN. However, the frequency of puns is probably very difficult to detect automagically.) Profiles of individual-specific ``mispelings'', the frequency of obscenities and the choice of explicit expletives, the relative use of obscure words, and measures of obfuscational proclivities and Joycean meanderings might also be quite useful.
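A crude stylometric profile along these lines might look like the following sketch (the feature choices, the toy expletive list, and the distance metric are all invented for illustration and are not drawn from any deployed tool):

```python
# Illustrative stylometric profile: a small feature vector over a text
# (average word length, vocabulary richness, expletive rate against a toy
# word list), compared by Euclidean distance. All features are invented.

import math

TOY_EXPLETIVES = {"damn", "darn"}  # placeholder word list

def style_vector(text):
    words = text.lower().split()
    return (
        sum(len(w) for w in words) / len(words),               # average word length
        len(set(words)) / len(words),                          # type/token ratio
        sum(w in TOY_EXPLETIVES for w in words) / len(words),  # expletive rate
    )

def style_distance(v1, v2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

known = style_vector("the system failed again and the operator was not amused")
same = style_vector("the network failed and the operator was not pleased either")
other = style_vector("obfuscational proclivities notwithstanding perspicacious interlocutors prevaricate")
```

Serious stylometry would use many more features (function-word frequencies, character n-grams, and so on), but even this toy vector separates plain prose from Joycean meanderings.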

Psychological factors do not seem to have been explored much in the past, especially in the context of insider misuse. Psychologists routinely observe certain standard behavioral characteristics and analyze deviations therefrom. Some of those characteristics that are particularly relevant to potential insider misuse might be modeled in extended user profiles. As one specific example, we might be able to develop measures of relative paranoia, based on how often a particular user invoked certain commands (such as finger) to observe who else might be observing what that user was doing in real time, or the use of aliases in posting to newsgroups. A measure of aggressive behavior could be interesting, but would probably require some human reporting of perceived relative hostility levels in e-mail messages received from a given individual. Measures of anger and stress levels in general computer usage could also be conceived. Considerably more effort is needed to characterize which psychological attributes might be effectively utilized. However, in my opinion, this is not likely to have much success, because there are no well-established characteristics, and human variabilities are likely to confound them anyway.

If this approach is considered possibly fruitful, we should approach a psychologist who is familiar with computer users and, after he or she has read whatever position papers have been written for the workshop, ask for speculation on psychological factors that might be both computer-detectable and behaviorally discriminative with respect to insider misuse. However, I have my doubts as to whether that would be fruitful.

Incidentally, MIT and IBM are involved in a joint project to detect a user's emotional state and modify a computer system's state accordingly. [See New Scientist, 8 May 1999.]

2.6. Responses to Detected Insider Misuse

Responses must be tailored to the detected and interpreted misuses, including recommendations for further real-time analysis, human investigation, immediate reactions such as reconfiguration, and intelligent responses based on additional derived knowledge. The basic response framework emerging in systems such as EMERALD will be directly applicable to responding to insider misuse, although different types of responses may be desirable -- such as triggering a more detailed monitoring mode or invoking human observation and possible intervention.

2.7. Important Observations

There is a basic gap in computer systems and networks between what kinds of system uses are intended and what uses are actually specified. In addition to that gap, there is a further gap between what is specified and what is actually possible (for example, because of flaws in the implementation of system security). Adequate controls of insider misuse suggest that better system security is necessary as one part of the solution.

There is a fundamental need for better differential access controls (access control lists, compartmentalized protection, fine-grain roles, etc.). There is also an enormous need for better user authentication to prevent intruders from gaining insider access and to provide positive identification of insiders that might diminish their ability to masquerade as other insiders and to otherwise hide their identities. Although conclusions relating to prevention and foresight are considered out of scope in the desired discussion on detection and response to insider misuse, such controls are absolutely essential in reducing the risks of insider misuse. Anyone who ignores the importance of authentication and constructive access controls is putting the cart before the horse. After all, what does unauthorized use mean when everything is authorized? (Recall the Internet Worm of 1988. Robert Morris was accused of exceeding authority; yet, no authority was required to use the sendmail debug option, the finger daemon, the .rhosts mechanism, and the copying of encrypted but unprotected password files.) In the absence of a security policy on what access is supposed to be permitted, it is very difficult to ascertain what constitutes misuse. A far-reaching example of the impact of the difficulties thus presented is given by the PC virus detection problem, which would be a nonproblem if the PC software had any meaningful inherent security.

Despite the obvious truths that arise from the gross inadequacy of many existing systems, and precisely because of the risks that are thereby created, we focus here on detection and response to insider misuse, assuming that someone else has defined what is meant by insider misuse.



3. What Needs to Be Done That Is Not Already Being Done?

We consider general research and development directions (Section 3.1), as well as specific short-term (3.2) and long-term (3.3) directions. Because of its fundamental importance in the future, correlation is treated by itself (3.4).

3.1. General Research and Development Directions

Overall, there is still much research and development work to be done in anomaly and misuse detection (as well as in intrusion detection).

3.2. Short-term Research and Development Directions

Each of the above general topics has short-term implications.

Immediate payoff can be gained from identifying and carefully defining certain high-risk types of insider misuse, and extending existing signature-based systems to detect misuse of those types.

Some short-term progress can also be made in detecting anomalous insider activity, although human analysis will be necessary in many cases to determine the significance of those anomalies. That effort should be transitioned into long-term research in which automated analysis is effective, as noted below.

Early experiments should be conducted to demonstrate the feasibility of wider-scope correlation across different detection capabilities and different instances of misuse and anomaly detection systems.

Some short-term efforts should be devoted to monitoring the status of network management tools and to experimenting with the detection of network anomalies -- perhaps before they become serious problems. Initial efforts could also provide some simple automated analyses of recommended actions, such as system reconfiguration.

Some effort should be devoted to the short-term robustification of existing analysis platforms, at least to apply insider misuse detection to those platforms themselves, and to a few efforts that might make those platforms more resistant to tampering. Clearly, the longer-term efforts are important (see below), but some short-term activities could be very beneficial.

The insider misuse workshop could be an important contributor to exploring the idiosyncrasies of insider misuse and to elaborating on these and other research directions. (Was it?)

3.3. Long-term Research and Development Directions

Each of the above general topics has long-term implications, and many of the above short-term topics can be transitioned into longer-term efforts. Indeed, it is important that some of the short-term R&D efforts be conceived as efforts that can evolve gracefully into longer-term efforts. Although the U.S. Government funding agencies are not normally good at such longer-term planning, such evolvability can to some extent be made a byproduct of the way in which the R&D is carried out -- for example, if good software engineering and evolvable architectures are requirements of the short-term efforts. This should be made an important evaluation criterion for proposals for new work.

Applying anomaly detection to insider misuse has long-term implications, because it requires significant improvements in inference and hierarchical correlation.

Hierarchical and distributed correlation also has long-term needs for research, including new techniques for analysis.

In the long term, close integration with developers of network management monitoring platforms is highly desirable. Robust, stable, secure, reliable control of network reconfiguration as a result of detected anomalies and the resulting analysis is an important long-term goal.

Developing robust platforms for anomaly and misuse detection for insider threats is a problem that is exacerbated by the presence of insider misuse in the first place. Insiders are inherently more likely than outsiders to be able to compromise the analysis platforms. Thus, robustification becomes all the more essential in the long run.

3.4. Hierarchical and Distributed Correlation and Other Reasoning

In the oral presentation of our USENIX workshop paper (2), Phil Porras outlined a collection of research directions related to correlation that we are currently beginning to pursue. They are relevant here:

It is important to have abstract representations of the results so that aggregation can be done effectively across heterogeneous target environments and heterogeneous analysis platforms (e.g., different Unix variants, different Microsoft systems, etc.).

The EMERALD resolver represents another type of analysis engine that has the capability of mediating the results of rule-based components and profile-based components. For example, the resolver suppresses massive numbers of related statistical anomalies that are all related to a common cause. It also does some elementary correlation between the results of the expert-system component and the statistical component.
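The kind of suppression described above can be conveyed by a small sketch (a simplification of what a real resolver does: the grouping key of cause and target, and the fixed time window, are invented for illustration):

```python
# Sketch of common-cause alarm suppression: collapse bursts of related
# anomalies that share a (cause, target) pair within a time window into a
# single summary alarm. The key and window are illustrative simplifications.

def suppress_related(alarms, window=60):
    """Merge alarms with the same (cause, target) occurring within `window`
    seconds of the group's first alarm; report one summary per group."""
    summaries = []
    for alarm in sorted(alarms, key=lambda a: a["time"]):
        for s in summaries:
            if (s["cause"], s["target"]) == (alarm["cause"], alarm["target"]) \
                    and alarm["time"] - s["first"] <= window:
                s["count"] += 1
                break
        else:
            summaries.append({"cause": alarm["cause"], "target": alarm["target"],
                              "first": alarm["time"], "count": 1})
    return summaries

# A sweep generating many statistical anomalies, plus one unrelated alarm:
burst = [{"cause": "port-sweep", "target": "hostA", "time": t} for t in (0, 5, 9, 30)]
burst.append({"cause": "login-anomaly", "target": "hostB", "time": 12})
summary = suppress_related(burst)
```

The four related anomalies collapse into one summary record with a count, while the unrelated alarm survives separately; that reduction in alarm volume is precisely what makes the downstream correlation tractable.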

All of these capabilities, expected to be studied in the context of the EMERALD development, need to be researched much more broadly by the community at large.



There are a few relative differences in detecting insider misuse compared with outsider-initiated misuse, but these differences do not seem to be intrinsic. Instead, the differences involve what information might need to be gathered, what rules might be given to an expert system, what parameters might be used to tune the profile-based analysis, what priorities might be associated with different modes of misuse, and what urgency might be accorded to various responses. Some new inference tools might be useful, but they could also be developed generally enough to be applicable to outsider misuse as well.

One difference that is perhaps substantive involves the use of individual user profiles. The SRI/TIS Safeguard project demonstrated significant advantages of profiling systems and subsystems rather than profiling individual users. However, in a critical environment in which there is only a modest number of insiders and no outsiders, profiling individual users makes much greater sense.

The reality that COTS intrusion-detection tools are not oriented toward insider attacks, unknown forms of misuse, intelligent results interpretation, generality of target systems, and long-term evolvability presents a very significant reason to question the existing DoD COTS assumption. The reality that some of the COTS operating systems and networking software are so badly flawed and so easily subvertible also raises questions about the sanity of the COTS assumption.

The fact that multilevel security is not adequately implemented in either target systems or in analysis and misuse platforms need not be a handicap in the detection of insider misuse. We know how to build multilevel-secure systems out of nonmultilevel-secure user systems and trustworthy servers (e.g., see Norm Proctor and Peter Neumann, Architectural Implications of Covert Channels, in the 1992 National Computer Security Conference); furthermore, in the EMERALD work we have a strategy for achieving multilevel-secure monitoring, detection, analysis, and response. But again, COTS products are more or less inapplicable in the infrastructure -- although they can be used for untrusted single-level user systems (at least in less critical environments).

The approach of robustifying CIDF-compliant interoperable open-source analysis tools comes to the fore, as a major effort that DARPA and NSA should encourage rather than blindly ignore. I believe that you cannot get there from here otherwise, and that it is foolish to pour all of your money down the proprietary COTS path for anomaly and misuse detection tools that are not adequately applicable to insider misuse. Carefully directed research is essential, such as is outlined in the previous section. Better system security is essential(3) -- particularly if it can be achieved in open-source environments.(4)



Operational practice and immediate palliatives are important, particularly as they apply to the anomaly and misuse detection platforms themselves. They must be included in the characterization of the problem -- because they are part of the solution, without which detection and response are rendered ineffective, and because their inadequacies directly feed into the needs for insider-misuse detection.  
See the paper by Peter Neumann and Phil Porras, Experience with EMERALD to Date, pages 73-80 (Best Paper Award!), Proceedings of the 1st USENIX Workshop on Intrusion Detection and Network Monitoring, April 1999; this paper discusses some of the research directions as well as the desirable characteristics of forward-looking systems for anomaly and misuse detection. Recent results with EMERALD in using the profile-based statistical techniques suggest that it is possible to dramatically reduce the total number of false alarms despite massive amounts of data. See also the paper by Ulf Lindqvist and Phil Porras, Detecting Computer and Network Misuse with the Production-Based Expert System Toolset (P-BEST), 1999 IEEE Symposium on Security and Privacy; both are available on-line by clicking on ``emerald'', then on ``reports''.
There are enormous benefits that would result from intrinsically better operating system security, network security, authentication, and encryption. One of those benefits is that the detection and response problem would be much more precise, rather than encompassing all of security (or the lack of it), as is the case at present. This is absolutely vital, and must not be ignored.
DoD is strongly endorsing the use of proprietary commercial off-the-shelf software, most particularly including some unsecure unreliable nonsurvivable Microsoft systems. But as I state in this document, there are no COTS ``intrusion-detection'' systems that are good at detecting hitherto unrecognized insider misuse. Furthermore, I believe that the Microsoft monoculture is in the long run extremely dangerous for DoD and the nation. I also believe that efforts to robustify open-source software can result in enormous payoffs for DoD, especially with some support and guidance from NSA and DoD; if nothing else, those efforts could serve to inspire the COTS developers into producing better systems. If you all roll over and play dead the way parts of DoD have done thus far (with the Yorktown dead in the water, Cloverdales, Melissas, Website takeovers, and frequent hostile denial of service attacks), DoD may run itself into the ground. The existing COTS strategy is doomed as long as the most commonly used products are so weak from the perspective of security, reliability, and survivability -- although I recognize that at the same time there are other advantages. Alternatives to the monoculture must be sought, and can lead to massive improvements. Cyberdiversity is essential to the future. This is particularly true when applied to CIDF-compliant anomaly and misuse detection tools and the systems and networks that they analyze.  
For example, at the First USENIX Intrusion Detection Workshop on 12 April 1999, Ed Amoroso mentioned AT&T's experience with NetRanger (now provided by Cisco). When he and his colleagues finally tried to install it, months after receiving the CD, they discovered that certain files were missing from the installation CD. When they complained to Cisco, the Cisco folks indicated they had never before heard about this problem; apparently no one had successfully installed it!
See the workshop report for further background.


  Peter G. Neumann, Principal Scientist
  Computer Science Lab, SRI International EL-243
  333 Ravenswood Ave
  Menlo Park CA 94025-3493
    Tel 1-650/859-2375
Incidentally, my testimony for a hearing of the House Science Committee subcommittee on technology for Thursday 15 April 1999 is on-line and is part of the official hearing record (although it was not presented orally). The hearing was devoted to Melissa, but my testimony considers that topic as a very small piece of the bigger picture. Insider misuse is addressed, among other things.