NOTE: Reuse for commercial purposes is subject to CACM and author copyright policy.

Earlier Inside Risks columns can be found at http://www.csl.sri.com/neumann/insiderisks.html. 2007 columns are directly accessible at http://www.csl.sri.com/neumann/insiderisks07.html.

Inside Risks Columns, 2006

  • Liability Risks with Reusing Third-Party Software, Wilhelm Hasselbring, Matthias Rohr, Jürgen Taeger, and Daniel Winteler, December 2006
  • COTS and Other Electronic Voting Backdoors, Rebecca Mercuri, Vincent J. Lipsio, and Beth Feehan, November 2006
  • Virtual Machines, Virtual Security, Steven Bellovin, October 2006
  • The Foresight Saga, Peter G. Neumann, September 2006
  • Risks of Online Storage, Deirdre K. Mulligan, Ari Schwartz, and Indrani Mondal, August 2006
  • Risks Relating to System Compositions, Peter G. Neumann, July 2006
  • EHRs: Electronic Health Record or Exceptional Hidden Risks, Robert Charette, June 2006
  • Risks of RFID, Peter G. Neumann and Lauren Weinstein, May 2006
  • Fake ID: Batteries Not Included, Lauren Weinstein, April 2006
  • Real ID, Real Trouble?, Marc Rotenberg, March 2006
  • Trustworthy Systems Revisited, Peter G. Neumann, February 2006
  • Software and Higher Education, John C. Knight and Nancy G. Leveson, January 2006
  • If you wish to see earlier Inside Risks columns, those through December 2003 are at http://www.csl.sri.com/neumann/insiderisks.html. Columns for 2004 are at http://www.csl.sri.com/neumann/insiderisks04.html, and for 2005 are at http://www.csl.sri.com/neumann/insiderisks05.html.

    ========================================================

    Inside Risks 198, CACM 49, 12, December 2006

    Liability Risks with Reusing Third-Party Software

    Wilhelm Hasselbring, Matthias Rohr, Jürgen Taeger, and Daniel Winteler

    Reuse of well-tried components promises cost-effective construction of high-quality software systems. With this approach, third-party software components are usually integrated into complex software systems. The risks of using such components from different suppliers are both technical and judicial. Some technical problems were discussed by Peter G. Neumann in the July 2006 Inside Risks column. Here we discuss emergent judicial concerns, as they apply in Europe.

    From the judicial point of view, system vendors risk being held liable for malfunctions they did not cause, without being able to take recourse against the supplier that delivered the faulty component. The resulting problems are evident in the context of German legislation. These problems apply throughout the European Union, due to an EU Council Directive concerning liability for defective products (No. 85/374/EEC of July 25, 1985).

    The reason for the additional liability risks when reusing third-party software components is this: The integrator/vendor of systems composed of several software components from different suppliers is in general liable to his customers as well as to the general public for any malfunction of the whole system. If the product does not function properly, the customer will confront the vendor -- for example, to get his money back or to seek an abatement of the price. Even worse, if the system is safety critical, he might sue the vendor for damages. The same is true for any injured person who is not a contractual partner of the vendor. The vendor might have to pay huge sums because of damage to goods or people. The customer and the injured party have the advantage that an exact localization of the fault is not necessary for their claim. They merely have to provide evidence that a malfunction of the product caused the damage. The vendor then has little chance of warding off the claims. In particular, according to the European directive 85/374/EEC, no contractual derogation is permitted as regards the liability of the producer in relation to the injured person; all producers involved in the production process could be made liable, insofar as their finished product, component part, or any raw material supplied by them was defective.

    After having paid for damages, the system vendor might consider taking recourse against one of his suppliers, or more precisely, the one supplier that delivered the defective component. In contrast to the injured party, the vendor now has the problem that he must localize the fault within his component-based system. He can point his finger at a supplier only if he can prove that this supplier's software component failed and contains the responsible fault as the root cause of the failure. Fault diagnosis and fault localization in complex component-based systems present a real challenge, because they can require the complete reconstruction of the error-propagation process among all components and the identification of the conditions that activated the root cause of the failure.

    An additional difficulty for software forensics is that transient state is usually lost at the time of system failure. The error-propagation process within a system composed of software components does not usually create ``visible'' indications of individual component failures. In contrast, error-propagation processes in hardware usually create physical indications of the fault location. For instance, if a laptop burns, it should be possible to prove that a particular hardware component such as the battery burned first (although this does not necessarily mean that this component was responsible).
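
    One way to keep some of that evidence is for the integrator to interpose logging on every cross-component call, so that the sequence of calls and the first failing component survive the crash. The sketch below is only an illustration (the component name, the wrapper, and the in-memory log are hypothetical; a production monitoring layer would also need bounded overhead and durable, tamper-evident storage):

        import functools, json, time

        CALL_LOG = []   # illustrative in-memory log; a real system would persist this tamper-evidently

        def logged(component):
            """Wrap a component operation so every call and its outcome is recorded."""
            def decorator(func):
                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    record = {"t": time.time(), "component": component, "op": func.__name__}
                    try:
                        result = func(*args, **kwargs)
                        record["outcome"] = "ok"
                        return result
                    except Exception as exc:
                        record["outcome"] = "error: " + repr(exc)
                        raise                      # let the error propagate, but keep the evidence
                    finally:
                        CALL_LOG.append(record)
                return wrapper
            return decorator

        @logged("supplier_A.parser")               # hypothetical third-party component
        def parse_order(text):
            return json.loads(text)

        try:
            parse_order("not valid json")          # provoke a component failure
        except ValueError:
            pass
        print(json.dumps(CALL_LOG, indent=2))      # records which component failed, and when

    Such interaction logs do not by themselves establish the root cause, but they at least preserve some of the ordering evidence that would otherwise be lost with the transient state.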

    Component specifications are used to formalize provided services, and may define what is required from the execution environment or by other components in order to provide correct service. These component specifications could be part of requirements specifications and even of formal contracts between system vendor and component suppliers. However, such specifications are often ambiguous and incomplete, and they frequently neglect precise metrics for quality properties. Conversely, failures may be caused by improper protocols governing interactions among multiple components; thus, the system composer may be responsible for failures himself. Here, technical and judicial issues meet. In our interdisciplinary Graduate School TrustSoft (www.trustsoft.org), we integrate research in Computer Science and Law to address these problems.

    Wilhelm Hasselbring (hasselbring@informatik.uni-oldenburg.de) is a professor of software engineering and chair of the graduate school TrustSoft at the University of Oldenburg. Matthias Rohr (matthias.rohr@informatik.uni-oldenburg.de) is a computer science Ph.D. student in the graduate school TrustSoft at the University of Oldenburg. Jürgen Taeger (j.taeger@uni-oldenburg.de) is a professor of law and supervisor in the graduate school TrustSoft at the University of Oldenburg. Daniel Winteler (daniel.winteler@uni-oldenburg.de) is a law Ph.D. student in the graduate school TrustSoft at the University of Oldenburg.

    ========================================================

    Inside Risks 197, CACM 49, 11, November 2006

    COTS and Other Electronic Voting Backdoors

    Rebecca Mercuri, Vincent J. Lipsio, and Beth Feehan

    During the U.S. 2006 primary election season, there was a flurry of media attention about electronic voting, when it was revealed that Diebold Election Systems had erroneously reported to a testing authority (CIBER) that certain Windows CE operating system files were commercial-off-the-shelf (COTS) but in fact also contained customized code. This is important because, remarkably, all versions of the federal voting system guidelines exempt COTS hardware and software from inspection, whereas modified components require additional scrutiny.

    This loophole is anathema to security and integrity. In other critical computer-based devices (e.g., medical electronics or aviation), COTS components may be unit-tested once for use in multiple products, with COTS software typically integration-tested and its source code required for review. In contrast, for voting equipment, this blanket inspection exemption persists, despite having been strenuously protested by numerous scientists, especially in the construction of guidelines authorized by the Help America Vote Act (HAVA) [1]. Nevertheless, special interests have prevailed in perpetuating this serious backdoor in the advisory documents used for the nation's voting system testing and certification programs.

    Indeed, Diebold dismissed the discovered customizations as presenting only ``a theoretical security vulnerability that could potentially allow unauthorized software to be loaded onto the system'' [2]; a Diebold spokesman commented ``for there to be a problem here, you're basically assuming ... you have some evil and nefarious election officials who would sneak in and introduce a piece of software. ... I don't believe these evil elections people exist.'' But such naivete is laughable, as there is a long and well-documented history of such ``political machines'' and operatives in the U.S.

    Uninspected COTS has allowed other serious voting equipment problems to go undetected, even when tampering is not an issue, as reported in 2001 to the U.S. House Science Committee by Douglas Jones, when he related a 1998 example of ``an interesting and obscure failing [with the Fidlar and Chambers EV 2000] that was directly due to a combination of this exemption and a recent upgrade to the version of Windows being used by the vendor ... the machine always subtly but reliably revealed the previous voter's vote to the next voter.'' [3]

    The strong resistance to closing this COTS backdoor was illustrated by the activities of the IEEE's P1583 Voting System Standards working group, while they were drafting a document to be submitted as input to the Election Assistance Commission's (EAC) Technical Guidelines Development Committee. A Special Task Group (STG) was formed to resolve COTS-related issues in the draft. Although all issues were resolved with strong consensus among the STG's members [4], P1583's vendor-partisan editing committee repeatedly and unabashedly refused (even after having been confronted before the entire working group) to incorporate any of the substantial COTS review requirements into the draft. Therefore, the version of the document released to the EAC still contained the exemption for COTS components, even though the working group had decided otherwise.

    Numerous other aspects of America's voting equipment certification process are similarly lax. Another P1583 working group member, Stanley Klein, repeatedly pointed out to the EAC -- to no avail -- that the low legacy 163-hour Mean Time Between Failures requirement specified in all versions of the voting system guidelines translates to an election-day malfunction probability (potentially resulting in unrecoverable loss of votes) of 9.2% per machine. Attempts to require a Common Criteria style evaluation were frustrated. Bizarrely, the guidelines allow for the risky use of wireless transceivers in voting machines, but do not require that the ballot data be provided in a format such that it is independently auditable. And although there is a federal certification process, there is no provision for decertification, even when a major security flaw has been exposed. The fact that any changes, including security-related ones, require recertification has even been used as an excuse to avoid making needed updates. Indeed, the nature of U.S. elections is such that federal certification, as poor as it is, is not mandatory; one-fifth of the states have chosen to disregard it, some in favor of even more haphazard and obfuscated examination processes.
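
    Klein's 9.2% figure is easy to reproduce as a back-of-the-envelope check; the 15-hour election day and the exponential failure model below are assumptions made here purely for illustration, not numbers taken from the guidelines:

        import math

        MTBF_HOURS = 163           # minimum MTBF specified in the guidelines
        ELECTION_DAY_HOURS = 15    # assumed hours a machine is in service on election day

        print(ELECTION_DAY_HOURS / MTBF_HOURS)                     # ~0.092 -> 9.2% (simple ratio)
        print(1 - math.exp(-ELECTION_DAY_HOURS / MTBF_HOURS))      # ~0.088 under an exponential model

    Either way, the order of magnitude is the point: roughly one machine in eleven can be expected to malfunction on election day.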

    This distressing situation will likely continue until large numbers of citizens, especially those with technical expertise, hold government officials accountable. You can help by communicating with your elected officials, beseeching them to do something about this now.

    Beth Feehan (bfeehan@comcast.net) is a researcher focusing on HAVA implementation issues. Vincent Lipsio (vince@lipsio.com) is a software engineer who specializes in real-time and life-critical systems. Rebecca Mercuri (mercuri@acm.org) is a forensic computing expert who has been researching electronic voting since 1989.

    1. Charles Corry, Stanley Klein, Vincent Lipsio, and Rebecca Mercuri, Comments to the Election Assistance Commission's Technical Guidelines Development Committee, December 2004. http://www.vote.nist.gov/ECPosStat.htm
    2. Monica Davey, New Fears of Security Risks in Electronic Voting Systems, New York Times, May 12, 2006.
    3. Douglas Jones, Testimony to the U.S. House Science Committee, May 22, 2001. http://www.cs.uiowa.edu/~jones/voting/congress.html
    4. IEEE P1583 working group. http://www.Lipsio.com/COTS, http://grouper.ieee.org/groups/scc38/1583/

    ========================================================

    Inside Risks 196, CACM 49, 10, October 2006

    Virtual Machines, Virtual Security

    Steven Bellovin

    Virtual machines are once again a hot trend in system configuration, as demonstrated by the emergence of VMware, Xen, and a renewed interest in hardware assists for virtualization. Some uses are clearly beneficial: virtual machines are great for hosting websites and servers, because VMs avoid the use of multiple computers to support different applications running on diverse operating systems, while at the same time providing more facile load balancing.

    VMs are also touted as a solution to the computer security problem. At first blush, it seems obvious that they should help with security. After all, if you're running your browser on one VM and your mailer on another, a security failure by one shouldn't affect the other. There is some merit to that argument, and in some situations it's a good configuration to use. But let's look a little more closely. To simplify things, let's pretend that the two are actually on physically separate machines, and ignore all issues of bugs in the virtual machine monitor, contention for (and denial of service relating to) shared resources, greater complexities resulting from diverse system administration, and so on. All of these are, in fact, real issues, but they're not the fundamental problem.

    Most mailers provide a handy feature: they recognize URLs in inbound email, and let the user click on them. The mailer must therefore talk to the browser, and tell it to open a window (although invoking random URLs is dangerous!). Similarly, some Web pages may invoke the mailer to send mail. A consequence is that the two machines can't be completely separate; some communication must exist between the two. Therein lies the trouble: What is the interface between the two virtual machines? More generally, what is the interface between those components and the rest of the user's environment? After all, people save web pages and email messages, print them, edit them, and more.

    A danger lurks herein. If a buggy or subverted mailer can read and write files without limit, it can commit all of the abuses that today's buggy, subverted mailers perpetrate. It can also commit mailer-only abuses, such as propagating worm-infected email messages.

    The desired solution is also clear: the VM's interface to the base system and to other virtual machines must be limited and dependably controlled. Some restrictions must be imposed, some of them possibly complex.
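
    As a purely illustrative sketch of what ``limited and dependably controlled'' might look like, imagine a small broker that is the only channel between the mail VM and the browser VM; the operation name, the allowlisted hosts, and the broker itself are hypothetical, not a description of any existing VM product:

        from urllib.parse import urlparse

        # Hypothetical policy: the only request the mail VM may make of the browser VM
        # is opening an http(s) URL, and only for a few explicitly allowed hosts.
        ALLOWED_OPERATIONS = {"open_url"}
        ALLOWED_SCHEMES = {"http", "https"}
        ALLOWED_HOSTS = {"www.example.com", "intranet.example.com"}   # illustrative

        def broker_allows(request):
            """Decide whether a cross-VM request should be forwarded or dropped."""
            if request.get("operation") not in ALLOWED_OPERATIONS:
                return False
            url = urlparse(request.get("argument", ""))
            return url.scheme in ALLOWED_SCHEMES and url.hostname in ALLOWED_HOSTS

        print(broker_allows({"operation": "open_url",
                             "argument": "https://www.example.com/page"}))    # True: forwarded
        print(broker_allows({"operation": "read_file",
                             "argument": "/home/user/secrets.txt"}))          # False: dropped

    The enforcement here is trivial; as argued below, the hard part is deciding what the policy should say (which operations, which hosts, what about mailto: links back to the mailer?).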

    Let's consider another scenario: suppose the mailer, the web browser, etc., run on a single machine, with separate user IDs from the normal login environment. We could use access control lists on functionality rather than on users to grant the same sorts of permissions and impose the same sort of restrictions as in virtual machines. What is the difference? Does the VM overhead buy us anything?

    Even with physically separate systems, additional restrictions such as firewalls are generally required. Using virtual machines as a separation primitive can provide some further assurance, which is generally a good thing. That said, access control has been generally reliable in operating systems: very few security holes have involved failures of the permission-checking mechanisms. More problems have resulted from inappropriate allocation of a single privilege level or subversion of privileged or setuid programs.

    The incremental benefits of using virtual machines must be carefully considered. If we wish to use isolation of dubious applications as a security primitive, the weak point is the policy specification, not its enforcement. Specifying fine-grained permissions has always been difficult.

    Virtual machines can carry a set of disadvantages, too. Even ignoring performance issues, there can be significant administrative overhead. Simple virtual machines require as much configuration work per VM as separate physical machines would. Careful attention to process configurations will be needed to ensure that we do not create worse system administration problems than we have today.

    That said, virtual machines do help with one important class of problems: comparatively isolated services. If a service does not require much interaction, little need exists for complex policies. In that case, a VM can be a simpler form of isolation than essentially independent, disconnected machines.

    Steven M. Bellovin is a professor of computer science at Columbia University. [Note: Particularly relevant in this regard are separation kernels (J.M. Rushby, The Design and Verification of Secure Systems, Proceedings of the Eighth ACM Symposium on Operating System Principles, ACM Operating Systems Review, 15(5), 12--21, December 1981), which indeed could be used as a basis for implementing controlled inter-VM sharing, and recent work on Multiple Independent Levels of Security (MILS). Also, see Paul Karger, Limiting the Damage Potential of Discretionary Trojan Horses, Proc. 1987 IEEE Symposium on Security and Privacy, pp. 32--37. PGN]

    ========================================================

    Inside Risks 195, CACM 49, 9, September 2006

    The Foresight Saga

    Peter G. Neumann

    In hindsight, the lack of foresight is an old problem that keeps recurring. The most prevalent situation seems to be ``we had a backup system, but it failed when it was needed.'' For example, the Los Angeles Air Route Traffic Control Center in Palmdale CA was shut down for three hours on the evening of July 18, 2006, following a power outage and an automatic cutover to the backup power system -- which subsequently failed. Two separate human errors in the Palmdale Center silenced Los Angeles area airport communications in September 2004, when both the main system and the backup failed. A power failure disrupted Reagan National Airport on April 10, 2000, for almost 8 hours, when the backup generator also failed. At the Westbury Long Island air traffic control center in June 1998, a software upgrade failed, as did reverting to the old software. The new LA-area El Toro air traffic control computer failed 104 times in a single day in 1989, and its predecessor was unavailable -- having already been decommissioned. The new British Swanwick air traffic control system suffered a failure after an attempted software upgrade in June 2004; it took two hours to restore the backup system, halting air traffic throughout England. In 1991, an AT&T standby generator was accidentally misconfigured, draining the backup batteries and closing the three major NY airports.

    The Swedish central train-ticket sales/reservation system and its backup both failed. The Washington D.C. Metro Blue Line computer system and its backup both failed on Jun 6, 1997. For three consecutive days of attempted software upgrades, San Francisco's BART computers crashed, which was complicated by a backup failure as well.

    On one occasion, Nasdaq experienced a power failure, after which the backup power system also failed. On another occasion, a software upgrade caused a computer crash, and the backup system failed as well. In November 2005, a software bug caused the worst-ever Japanese stock exchange system crash; the monthly software upgrade failed, and the backup also failed (using the same software). In 1999, the Singapore Stock Exchange crashed repeatedly due to erroneous interactions with the backup system.

    A power surge shut down the Nine Mile Point nuclear station in Oswego NY, and the supposedly uninterruptible backup power failed as well, triggering a site-area emergency. An Australian TV channel went off the air due to multiple system failures and a power outage; failure of the backup system took down the national phone system. New York City's 911 system crashed during a test of the backup generator; the backup system failed for an hour, the main system for 6 hours. In 1998, a malfunction of the Galaxy IV satellite onboard control system caused massive outages of U.S. pager service, with the backup switch also failing. In other examples, in which no recovery was possible, the NY Public Library lost all of its computerized references, and the Dutch eliminated an old system for managing criminals before the new system had been successfully cut over.

    Further problems arise when constructive uses of operational redundancy fail their intended purpose. In December 1986, the ARPANET had 7 dedicated trunk lines between NY and Boston, except that they all went through the same conduit -- which was accidentally cut by a backhoe. Quite a few similar problems have resulted from localized failures of seemingly parallel power, communications, or aircraft hydraulic lines. Furthermore, common-mode failures can be particularly problematic in distributed systems, even with redundancy designed to ensure high reliability (e.g., majority voting or Byzantine agreement algorithms that typically tolerate k out of 3k+1 arbitrary failures). For example, Brunelle and Eckhardt describe a case in which two independent faulty programs had similar bugs and consistently outvoted the correct one (J.E. Brunelle and D.E. Eckhardt, Jr., Fault-Tolerant Software: An Experiment with the SIFT Operating System, Fifth AIAA Computers in Aerospace Conference, 1985, 355-360). Moreover, suppose that multiple subsystems all share the same security vulnerabilities; then, if one system can be compromised, so can all of them -- at the same time -- despite redundancy. Similarly, using an n-error correcting code when more than n errors are likely is a poor idea.
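
    A small calculation, using failure probabilities assumed here purely for illustration, shows why common-mode faults are so corrosive: 2-of-3 majority voting helps enormously when replica failures are independent, but even a small probability of a shared fault erases most of the benefit.

        # Failure probability of 2-of-3 majority voting (triple modular redundancy),
        # assuming each replica fails independently with probability p on a given run.
        def tmr_failure(p):
            return 3 * p**2 * (1 - p) + p**3

        p = 0.01    # assumed per-replica failure probability
        q = 0.01    # assumed probability that a shared (common-mode) fault is triggered

        print(tmr_failure(p))                  # ~0.0003: over 30x better than a single replica
        print(q + (1 - q) * tmr_failure(p))    # ~0.0103: no better than a single replica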

    In all of these cases, evident shortcomings existed in the design, implementation, and testing of backup and recovery facilities. It is difficult to test for situations that occur very rarely. On the other hand, if backup and recovery facilities have to be exercised frequently, the overall system is probably poorly designed and operated. Backup and recovery represent processes that must be carefully integrated with their associated systems, with suitable quality control, periodic reverification, and maintenance of compatibility. Considerable foresight, systemic planning, and periodic testing are essential.

    ========================================================

    Inside Risks 194, CACM 49, 8, August 2006

    Risks of Online Storage

    Deirdre K. Mulligan, Ari Schwartz, and Indrani Mondal

    Exponential leaps in online storage capacity, along with a sharp drop in storage costs, have made it possible for Internet users to store large amounts of data online. Under current law, a consumer's personal communications and records in electronic storage with an ISP or other service provider receive less privacy protection than those same communications in transit, stored on the consumer's own computer, or hard copies stored in the home. Unless the law catches up, loss of privacy may be a hidden and unintended price of these new services.

    Protecting user privacy in this new environment requires revisions to the federal Electronic Communications Privacy Act (ECPA), improvement and clarity in industry policies for stored data, and user education. Outside the U.S., other nations are grappling with similar issues, but given the vast scope and complexity of the problem and differences in applicable law, this article focuses solely on practices and law in the U.S.

    Three main areas of concern relate to user privacy: (1) diminishing relevance of traditional constitutional search and seizure rules, (2) lack of transparency and clarity regarding ISP practices in storing or deleting subscriber emails, and (3) legal uncertainty surrounding what ISPs can do with users' personal information and communications.

    The traditional sources of legal privacy protections for electronic data are the Fourth Amendment and ECPA. The Supreme Court held that the Fourth Amendment protects a person's home and the content of his telephone calls from unreasonable search and seizure. While the Court has never explicitly ruled on email, it has been assumed that the same protection would apply to the contents of an email in transit.

    In a series of cases in the 1970s, the Supreme Court held that the Fourth Amendment does not apply to personal information voluntarily disclosed to a business. These ``business record'' decisions predated the digital revolution. There are serious questions whether the doctrine remains constitutionally sound, given the revealing nature of the vast quantity of data, email, photographs, and online diaries that individuals store electronically with businesses. It is time to reconsider the limits of the business records doctrine as applied to electronically stored data.

    ECPA, which relied on a broad interpretation of the "business records" cases, is also outdated. Under ECPA, stored email is afforded less privacy protection than email in transit, and the level of protection afforded to stored email depends on the length of time it has been stored. This means that the level of privacy protection given to email can change many times within an email's life -- changes that the vast majority of consumers do not recognize or understand. Compounding this issue is a lack of transparency from ISPs regarding the deletion of stored data. How long does an email, or other data, remain on an ISP's servers after a user deletes it? (Often, ``deleted'' email will remain on backup storage unbeknownst to users.) Will ISPs automatically delete older emails from their servers without notifying users? Each ISP should clearly communicate its policies to customers.

    Congress should eliminate the distinctions ECPA makes based on an email's age, status as opened or unopened, or the type of provider who retains it, and should amend ECPA to require a search warrant for the government to access stored email content.

    Another problem with ECPA was highlighted in a 2004 appeals court decision noting that an ISP could read stored subscriber emails for its own business purposes without user consent. ECPA should be amended to clarify that ISPs may read subscribers' emails only to provide the service, to protect the ISP's rights or property, or in other limited circumstances.

    ECPA also provides insufficient guidance in civil litigation: it offers no means for accessing the contents of email communications in that context, and it sets no limits on the disclosure of other data about users to private parties, including civil litigants. Legislation here is essential. A subpoena, at least, should be required for disclosure of subscriber identifying information, and subscribers should receive notice prior to the release of any personal information.

    The online storage revolution has outpaced privacy protections. Legal reform, improved industry practices, and consumer education are necessary to meet consumers' privacy expectations as their personal communications and records are remotely, digitally stored.

    Deirdre K. Mulligan is a Clinical Professor of Law at UC Berkeley; Ari Schwartz is the Deputy Director of the Center for Democracy & Technology (CDT); Indrani Mondal was a summer intern at CDT. This column is adapted from their paper, Storing our lives online: Expanded email storage raises complex policy issues, I/S: A Journal of Law and Policy for the Information Society, January 2005.

    ========================================================

    Inside Risks 193, CACM 49, 7, July 2006

    Risks Relating to System Compositions

    Peter G. Neumann

    The challenge of developing systems with complex sets of requirements seems to be inherently difficult, despite persistent advice to keep it simple. However, consider the goal of building trustworthy systems using predictably sound compositions of well-designed components along with analysis of the properties that are preserved or transformed, or that emerge, from the compositions. Conceptually, that should considerably simplify and improve development, especially if we follow the paths of David Parnas, Edsger Dijkstra, and others. Unfortunately, there is a huge gap between theory and common practice: system compositions at present are typically ad hoc, based on the intersection of potentially incompatible component properties, and dependent on untrustworthy components that were not designed for interoperability --- often leading to unexpected results and risks.

    Composition is meaningful with respect to many entities such as requirements, specifications, protocols, hardware/software implementations, and evaluations. The ability to predict the effects of such compositions analytically might follow simply from a proactive combination including good system architecture, system design, sound software engineering, sensible programming languages, compilers, and supporting tools, with attention to assurance throughout development. However, various difficulties can hinder predictable composition:

    * Inadequate requirements and architectures. ``If a program has not been specified, it cannot be incorrect; it can only be surprising.'' (W.D. Young, W.E. Boebert, and R.Y. Kain, Proving a Computer System Secure, Scientific Honeyweller, 6, 2, 18--27, July 1985) Its composability with other programs is also likely to be surprising. Unfortunately, system specifications are always inherently incomplete, even though we desire that they completely and correctly define the relationships among modules and their interfaces, inputs, state transitions, internal state information, outputs, and exception conditions.

    * Poor software engineering. The absence of sensible abstraction, modular encapsulation, information hiding, and other sound principles (e.g., J.H. Saltzer and M.D. Schroeder, The Protection of Information in Computer Systems, Proceedings of the IEEE, 63, 9, 1278--1308, September 1975, http://www.multicians.org) can seriously impede composability, as can riskful programming languages, undisciplined programming practices, and unwary use of analysis tools.

    * Multiparty incompatibilities. If proactively designed, heterogeneous systems could mix and match alternative components, enabling interoperability among disparate developers. However, incompatibilities among interface assumptions, the existence of proprietary internal and external interfaces, and extreme performance degradations resulting from the inability to optimize across components can result in compositional snafus.

    * Scalability issues. Composability can create scalability problems. Performance may degrade badly or unpredictably as subsystems are conjoined. Further degradations can result --- for example, from design flaws, software bugs, and indirect effects of the composition.

    Other aspects of composability also are relevant.

    * Policy composability. Policies for the security, integrity, privacy, and safety of components (especially when incomplete) often cannot compose without contradictions and unrecognized emergent properties.

    * Protocol composability. Network and cryptographic protocols are a source of risks. Compositions often introduce security flaws and performance problems. For example, end-to-end solutions often ignore denial-of-service attacks.

    * Assurance composability. Ideally, we should be able to reason about entire systems based on the properties of their components, perhaps even with provably sound combinations of provably sound components. That is, analytic techniques should be composable. However, if the properties are not composable, the analysis results probably aren't either. (For example, see de Roever et al., Concurrency Verification, Introduction to Compositional and Noncompositional Methods, Cambridge Tracts in Theoretical Computer Science no. 54, Cambridge University Press, New York, NY, 2001.)

    * Certification composability. Deriving system certifications from the components is also fraught with hidden problems. For example, John Rushby has characterized the main issues relating to certification of aircraft based on certifications of their components (Modular Certification, SRI Computer Science Laboratory, Sept 2001; http://www.csl.sri.com/rushby/abstracts/modcert). The crucial elements involve separation of assumptions and guarantees (based on ``assume-guarantee reasoning'') into normal and abnormal cases.
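
    To make the phrase ``assume-guarantee reasoning'' slightly more concrete, here is one deliberately schematic composition rule, written in LaTeX notation for compactness. It covers only the easy, layered case, in which component C1 relies solely on the external environment assumption A and not on C2; the notation is illustrative rather than drawn from any particular formalism:

        \[
          \frac{\langle A \rangle\, C_1\, \langle G_1 \rangle
                \qquad
                \langle A \wedge G_1 \rangle\, C_2\, \langle G_2 \rangle}
               {\langle A \rangle\, C_1 \parallel C_2\, \langle G_1 \wedge G_2 \rangle}
        \]

    Read each triple as ``the component guarantees G whenever its environment satisfies assumption A.'' The genuinely hard case is the circular one, in which each component also relies on the other's guarantee; sound rules for that case need additional side conditions (such as induction over time), which is one reason compositional assurance remains difficult (see the de Roever et al. reference above).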

    Various approaches to enable predictable composability are noted in PGN, Principled Assuredly Trustworthy Composable Architectures, December 2004, with many references: http://www.csl.sri.com/neumann/chats4.html (and .pdf). Dramatic improvements are needed in system development and particularly in design for evolvability. More research is needed to establish unified theories and practices that would enable predictable compositions. Furthermore, these concepts must be brought into the development and education mainstreams. In any event, composability represents a very important hard problem, especially in legacy software.

    [Note: This column has received some criticism that (in theory) composition should be almost trivial. We remind our readers that the difference between theory and practice is evidently enormous.]

    ========================================================

    Inside Risks 192, CACM 49, 6, June 2006

    EHRs: Electronic Health Record or Exceptional Hidden Risks

    Robert Charette

    The problem of the rising costs and uneven quality of healthcare is a worldwide concern. Industrialized countries spend on average 10% of their GDP on healthcare, with the U.S. spending nearly 15% in 2005. For comparison, U.S. defense spending is 4% of GDP. Left unchecked, U.S. healthcare costs are projected to rise to 25% of GDP within a generation as the U.S. population ages. Similar cost increases are projected for industrialized countries as well.

    Over the past decade, several countries such as Australia, the UK, and the U.S. have started information technology initiatives aimed at stemming rising healthcare costs. Central to each of these initiatives is the creation of electronic health record (EHR) systems that enable a patient's EHR to be accessed by an attending healthcare professional from anywhere in the country.

    The benefits claimed for EHRs are that by being able to quickly and accurately access a person's entire health history, deaths due to medical errors (estimated to be 100,000 a year in the U.S. alone) will be drastically cut, billions of dollars in medical costs will be saved annually, and patient care will be significantly improved. Experience at the U.S. Department of Veterans Affairs and the Department of Defense (among the largest users of EHRs in the world) supports many of the benefits claimed. [Note: DoD's AHLTA system currently contains 7.2 million electronic beneficiary records.]

    However, the attempts at creating national EHR systems have been encountering difficulties. In Australia, the implementation cost has risen from an estimated AU$500m in 2000 to AU$2b today. In the U.K., the implementation costs have risen from an estimated 2.6b pounds in 2002 to at least 15b pounds today. In the U.S., the ``working estimate'' for a national EHR system runs between $100b and $150b in implementation costs with $50b per year in operating costs.

    The U.K. Connecting for Health initiative calls for everyone in the U.K. to have EHRs by 2008. However, there have been ongoing problems with its implementation that have spurred 23 leading U.K. computer-science academics to write an open letter to the Parliament's Health Select Committee in April, recommending an independent assessment of the basic technical viability. In their letter, they ask whether there is a technical architecture, a project plan, a detailed design, assessments of data volumes and traffic loads, adequate resiliency in the design, as well as conformance with data and privacy laws, and so on. [Note: Physicians find it difficult to use, patient records have disappeared, suppliers developing the software have run into financial difficulties, privacy of medical records is not assured, etc.]

    The U.S. approach to creating a national EHR system differs from the U.K. approach. Whereas the U.K. EHR system is publicly funded, the U.S. has decided to adopt a market-based approach, where the government acts as technology coordinator and adoption catalyst. Instead of funding the building of a single, integrated networked system with a central EHR database as in the U.K., the U.S. government is facilitating the definition of standards to allow the interoperability of commercially available EHR systems as well as interoperability certification standards. The U.S. government has high hopes for EHRs, and views the development of an EHR system merely as a technological catalyst for changing how healthcare is delivered and paid for. [Note: As the head of the Department of Health and Human Services has stated, the administration is counting on them to among other things ``save Medicare''.]

    Some concerns are already arising with the U.S. initiative and whether its objective of providing most U.S. citizens EHRs by 2014 is realistic. For example, the government's initiative has been chronically underfunded from the start. Medical researchers, pharmaceutical companies, insurance companies, EHR vendors, and other interested parties cannot agree on what functionality, form of information capture, and record access a national EHR system should support. Physicians working in small medical practices worry about the costs involved.

    Whereas many of the issues encountered merely reflect specific instances of generic software system development problems, their number, complexity, and potential personal and political impacts magnify their importance. Thus, concerns arise from the absence of both a full business case for a national EHR system and a comprehensive risk assessment and management plan outlining the potential social, economic, and technological issues involved in creating and operating the system. As the U.K. is discovering, focusing on the technology of electronic medical records without considering the myriad socioeconomic consequences is a big mistake.

    The implementation of a national EHR system presents an opportunity to constructively transform healthcare in the U.S. Whether it does will depend in large part on how well the relevant benefits and risks are understood and managed.

    Robert Charette (Charette@itabhi.com) is the founder of ITABHI Corporation, Spotsylvania VA 22553-4668.

    ========================================================

    Inside Risks 191, CACM 49, 5, May 2006

    Risks of RFID

    Peter G. Neumann and Lauren Weinstein

    Like most other technologies, RFID (Radio-Frequency IDentification) systems have pluses and minuses. In their most common applications, passive RFID tags enable rapid contactless determination of the tags' serial numbers, in theory helping to reduce erroneous identifications. However, this technology becomes dangerous whenever the binding between the tag and its context of use is in doubt. This situation is similar to that of Social Security Numbers, which are useful as identifiers but not as authenticators, and which have been subject to a wide range of abuses.

    RFID benefits may be negated by numerous opportunities for accidental or intentional misuse of the technology and its supporting systems, along with system malfunctions, poorly designed human interfaces, complexities in coping with massive amounts of support data, and a wide range of issues relating to system and data integrity, personal well-being, and privacy -- some of which are related to surveillance and tracking (clandestine or otherwise).

    Tags may be counterfeited, cloned (duplicated), swapped, damaged, intentionally disabled (in some cases even remotely), or otherwise misused. Scanners for tags and their supporting computer software and database management systems need to be embedded into secure systems; RFID technology can be easily compromised if its supporting systems are insecure. This is especially problematic in sensitive environments if RFID tags use no encryption or (as in the case of the impending U.S. passports) weak cryptographic protocols.

    Numerous major privacy considerations exist, some of which can have serious consequences if they are ignored or relegated to second-order status. Tags are potentially subject to remote surveillance whenever they are unshielded. Testing indicates that even passive RFID tags may be interrogated over far greater distances than originally anticipated. The implications of these problems are immense for persons bearing RFID-enabled credit cards or passports, not to mention individuals with embedded subcutaneous RFID implants -- who would have no ability to control when and where these implants may be interrogated.

    Furthermore, various issues related to pervasive security problems can lead to increased privacy violations committed by insiders and outsiders, such as misuses of databases associated with RFID tag information or derived from the context in which the tags are used. System-related examples include intrinsic security vulnerabilities of the ancillary computer systems, inadequate user and operator authentication, and overly broad system and database authorizations (e.g., the lack of fine-grained database access controls). Such situations can create rampant opportunities for misuse of the accompanying database information. For example, many opportunities will exist for targeting specific victims, widespread selective data-mining, and sweeping up entire databases (``vacuum-cleaning''). Possible intents for such misuses might include robbery, identity theft, fraud, harassment, and blackmail.

    With the range of potential problems associated with RFID systems, the question of voluntary vs. involuntary use becomes paramount; when things go wrong, somebody is likely to be hurt -- financially or in other more serious ways. It's not unlikely that involuntary RFID-implant ``chipping'' of perceived miscreants will happen soon. It's then probably just a matter of time before much broader forced deployment permeates society, justified by organizations and authorities on security, financial, or other seemingly laudable grounds.

    At the basic computer-science level, the security of operating systems, database management systems, networking, and other components supporting the use of RFID technology is inadequate and sorely in need of improvement. Consistent, correct, and up-to-date distributed databases are essential for system availability and survivability. Several R&D directions might be helpful, although their implications are not limited to RFID technologies. Particular needs include the ability to develop trustworthy systems, with suitable security, accountability and auditing, binding integrity, pseudonymity where appropriate, privacy-preserving cryptography, and so on.

    RFID-related technologies can have some attractive benefits in certain carefully delineated situations. However, in all cases, possible technical and privacy risks must be considered objectively in operational environments. Even more importantly, it's crucial that we engage now in a far-reaching, society-wide dialog regarding the circumstances and contexts within which RFID systems should or should not be used, and the rights of individuals and organizations to control whether or not they will be subject to various uses of these systems. This is an especially difficult task, because many of the would-be applications are emotionally charged, and RFID capabilities and ostensible benefits are in some cases being outrageously hyped far beyond what is realistic. Yet it is such critical deliberations that will likely influence whether RFID will be deployed primarily in useful tools, or rather as oppressive identity shackles.

    Lauren Weinstein (lauren@pfir.org) is co-founder of People For Internet Responsibility (http://www.pfir.org). He moderates the Privacy Forum (http://www.vortex.com/privacy).

    NOTE: A very nice relevant article just appeared in WiReD, May 2006, While You Were Reading This, Someone Ripped You Off, by Annalee Newitz.

    ========================================================

    Inside Risks 190, CACM 49, 4, April 2006

    Fake ID: Batteries Not Included

    Lauren Weinstein

    It was only a matter of time. We've come to expect almost anything imaginable to be sold on late-night TV infomercials -- from feel-good ``health'' bracelets to ``get rich quick'' real-estate schemes. So I shouldn't have been too surprised to stumble across a 3a.m. full-hour ad for a firm offering biometric ``appliances'' (for legal applications only -- the superimposed fine print notes -- not responsible for customer misuse!).

    I couldn't help but be reminded of an old "Twilight Zone" episode written by Ray Bradbury, where motherless siblings pick out their desired component parts of an android "mother" -- this color eyes, that color hair, the perfect musical voice. For that's what this grotesque infomercial was selling: artificial fingers, false eyeballs, voice simulators, full-face latex disguises worthy of a ``Mission Impossible'' caper. And much more.

    While the energetic announcer never came right out and said so, it was clear from the outset that the only possible real purpose for these devices was to defeat biometric identification systems. Admittedly, many such systems perform so poorly that it doesn't take rocket science to fake them out, but the folks behind the commercial still made a very convincing pitch. They also appeared to be well versed on the current state of both biometric and associated spoofing technologies. The ad even made fun of the now infamous homemade ``gummy bear'' false fingerprint technique, noting how this firm's professional, custom-made fingers (with exchangeable fingerprint tips) were automatically maintained at perfect body temperature to fool most scanners (pair of AAA batteries for heating the finger not included).

    The false eyeballs for sale are apparently especially versatile, as it was shown how they can be held in the hand for either retinal or iris scanning units, or even worn (as a sort of half-shell) over the user's own eye(s) for scanners that are pickier about such configurations (the latter a scenario straight out of James Bond movies). The voice simulator device being promoted ("Beep Throat 2000") -- obviously for defeating voice ID systems -- was perhaps the most mundane device in the lot, but was still a pretty slick all-digital affair, designed to be both utilitarian and utterly inconspicuous.

    At one point, it was even suggested that a possible application for their products was for bad-spirited practical jokes. The ad's simulated demonstrations included a man who discovered that his fingerprints and iris patterns had been usurped and discredited by some interloper using those products. Realizing that he had no way to change his own biometrics, he chopped off his index finger with a Ginsu knife and stuck a shish kebab skewer into his eye in frustration. Really funny stuff, huh?

    But it got even worse. In addition to their physical appliances product line, the same fine Bahamas-based firm offered what could only be described as a correspondence course in database hacking. They suggested that another type of fun could be yours by corrupting and manipulating biometric databases on either a small or large scale, all from the comfort of your home computer while sitting in bed. No false fingers or fake eyeballs required for this approach. Now that's entertainment (broadband Internet connection and Windows XP required).

    By the end of this potpourri of identity horrors, my head was spinning. The commercial explained how to gather fingerprints to send in for finger fabrication, and noted how you could order a little infrared gadget that would collect iris and retinal data -- even at a considerable distance without the knowledge of your target -- for completing that aspect of your order. They even suggested that prospective customers ``ask the operator'' at the 800 number about ``odor index'' and ``DNA fabrication services'' that were also available.

    It was with some relief that this nightmarish apparition of an ad finally ended, with a splash of garish synthesizer music and the usual promises of a money back guarantee if not completely satisfied (``minus shipping & handling'', of course). If nothing else, I was again reminded of the risks of late-night burritos keeping me awake, and the mental hazards of tuning in satellite TV stations above channel "999".

    As I finally tried to doze off, visions of dismembered fingers and spinning eyeballs still invaded my thoughts, and I couldn't help but muse on the implications of that 60-minute descent into the identification inferno that I had just endured.

    For all of the promotional hype surrounding biometric ID systems, it's probable that they're actually setting the stage for a whole vast new range of abuses and problems, which will make us long for the ``good old days'' of passwords that could be easily changed when compromised. Perhaps the nightmare was actually only just beginning, after all. Good night.

    Lauren Weinstein (lauren@pfir.org) is co-founder of People For Internet Responsibility http://www.pfir.org. He moderates the Privacy Forum http://www.vortex.com/privacy.

    [Incidentally, the CACM printed version of this column has a cautionary note referring to the fact that this appears in the April issue. Also, this html version differs slightly from the printed version.]

    ========================================================

    Inside Risks 189, CACM 49, 3, March 2006

    Real ID, Real Trouble?

    Marc Rotenberg

    According to the report of the 9/11 Commission, all but one of the 9/11 hijackers acquired some form of U.S. identification, some by fraud. [1] Acquisition of these forms of identification would have assisted them in boarding commercial flights, renting cars, and other necessary activities. As a result, the Commission and some lawmakers concluded that it was necessary for the federal government to set technical standards for the issuance of birth certificates and sources of identification, such as drivers licenses. The result was the REAL ID Act of 2005. [2]

    The new law states that beginning in 2008, ``a Federal agency may not accept, for any official purpose, a driver's license or identification card issued by a State to any person unless the State is meeting the requirements of this section.'' [3] This means that the Department of Homeland Security will issue the technical standards for the issuance of state drivers licenses. The practical impact, as CNET explained, is that ``Starting three years from now, if you live or work in the United States, you'll need a federally approved ID card to travel on an airplane, open a bank account, collect Social Security payments, or take advantage of nearly any government service.'' [4] And even some of the more conservative commentators in the U.S. have expressed concerns about ``mission creep.'' [5]

    Several objections have been raised about the plan, including privacy and cost, but the most significant concern may be security. As Bruce Schneier has explained, ``The biggest risk of a national ID system is the database. Any national ID card assumes the existence of a national database, and that database can fail. Large databases of information always have errors and outdated information.'' [6] Even if the identity documents are maintained in the states, problems are likely.

    One example concerns the vulnerability of the state agencies that collect the personal information that is used to produce the license. In 2005, the burglary of a Las Vegas Department of Motor Vehicles office put thousands of driver's license holders at risk for identity theft. The information of at least 8,738 license and ID card holders was taken during the break-in, and reports of identity theft have already surfaced. [7] Another report uncovered 10 ``license-for-bribe'' schemes in state DMVs in 2004. [8]

    Not surprisingly, the administrators of the state license systems are among those most concerned about the proposal. As the Director of Driver Services in Iowa said, ``It's one thing to present a document; it's another thing to accept the document as valid. Verifying digital record information is going to be difficult.'' The National Conference of State Legislatures was more emphatic: ``The Real ID Act would cause chaos and backlogs in thousands of state offices across the country, making the nation less secure.''

    The National Academy of Sciences anticipated many of these challenges when it said in 2002 that the U.S. should carefully consider the goals of a nationwide identification system. As the Academy explained, ``The goals of a nationwide identification system should be clarified before any proposal moves forward. Proposals should be subject to strict public scrutiny and a thorough engineering review, because the social and economic costs of fixing an ID system after it is in place would be enormous.'' [9]

    The problems of building reliable systems for identification are not unique to the United States. Many countries around the world are confronting similar questions. [10] In Great Britain, for example, a national debate continues about the creation of a new identity card. The Government contends that the card is essential for combating crime, illegal immigration, and identity theft, and can be achieved for an operating cost of 584 million pounds per year. But a report from the London School of Economics challenged a number of the government's positions, and a subsequent report found further problems with the ID plan. [11]

    The UK expert group concluded, ``ID requirements may actually make matters worse.'' The LSE report cited a recent high-profile breach: ``Even as cards are promised to be more secure, attacks become much more sophisticated. Most recently, Russian security agents arrested policemen and civilians suspected of forging Kremlin security passes that guaranteed entrance to President Vladimir Putin's offices.''

    Systems of identification remain central to many forms of security. But designing secure systems that do not introduce new risks is proving more difficult than many policy makers had imagined. Perhaps it's time for the proponents of expanded identification systems to adopt the cautionary line from Hippocrates: ``Primum Non Nocere.'' [12]

    Marc Rotenberg is Executive Director of the Electronic Privacy Information Center (EPIC) and former Director of the ACM Washington Office. More information at http://www.epic.org and http://www.acm.org/usacm; e-mail rotenberg@epic.org.

    1. National Commission on Terrorist Attacks Upon the United States, The 9/11 Commission Report, p. 390 (2004).
    2. Background information on the law is at EPIC, National ID Cards and Real ID Act, http://www.epic.org/privacy/id_cards/
    3. The bill and the legislative history are available at the Thomas web site, REAL ID, H.R. 418. http://thomas.loc.gov/cgi-bin/bdquery/z?d109:h.r.00418:
    4. CNET News.com (May 6, 2005).
    5. ``It's not hard to imagine these de facto national ID cards turning into a kind of domestic passport that U.S. citizens would be asked to produce for everyday commercial and financial tasks.'' Wall Street Journal (February 19, 2005).
    6. Bruce Schneier, Beyond Fear: Thinking Sensibly about Security in an Uncertain World.
    7. Las Vegas Review-Journal (June 3, 2005).
    8. Los Angeles Times (May 31, 2005).
    9. Stephen T. Kent and Lynette I. Millett, editors, Committee on Authentication Technologies and Their Privacy Implications, National Research Council, IDs -- Not That Easy: Questions About Nationwide Identity Systems (National Academies, 2002). http://www.nap.edu/catalog/10346.html?onpi_topnews_041102
    10. A comprehensive survey of ID developments around the world may be found in Marc Rotenberg and Cedric Laurant, Privacy and Human Rights: An International Survey of Privacy Laws and Developments (EPIC, 2004).
    11. London School of Economics, Department of Information Systems, The Identity Project: an assessment of the UK Identity Cards bill and its implications (June 2005), http://is.lse.ac.uk/idcard/identityreport.pdf; London School of Economics, Research Status Report, pp. 7, 10 (January 2006), http://is.lse.ac.uk/idcard/statusreport.pdf
    12. ``Above all else, do no harm.'' More about the Hippocratic Oath: NOVA Online -- The Hippocratic Oath -- Classical Version, http://www.pbs.org/wgbh/nova/doctors/oath_classical

    [Note: This is an extended version of what appeared in the CACM.]

    ========================================================

    Inside Risks 188, CACM 49, 2, February 2006

    Trustworthy Systems Revisited

    Peter G. Neumann

    System trustworthiness is in essence a logical basis for confidence that a system will predictably satisfy its critical requirements, including (for example) information security, reliability, human safety, fault tolerance, and survivability in the face of wide ranges of adversities (including malfunctions, deliberate attacks, and natural causes).

    Our lives increasingly depend on critical national infrastructures (power, energy, telecommunications, transportation, finance, government continuity, and so on) -- all of which in turn depend in varying degrees on the dependable behavior of computer-communication resources, including the Internet and many of its attached computer systems.

    Unless certain information system resources are trustworthy, our critical systems are at serious risk from failures and subversions. Unfortunately, for many of the key application domains, the existing information infrastructures are lacking in trustworthiness. For example, power grids, air-traffic control, high-integrity electronic voting systems, the emerging DoD Global Information Grid, the national infrastructures, and many collaborative and competitive Internet-based applications all need systems that are more trustworthy than we have today.

    In this space, we have frequently considered risks associated with such systems and what is needed to make them more trustworthy. This month's column takes a higher-level and more intuitive view by considering analogies with our natural environment -- expectations for which are rather similar to expectations for trustworthy information systems. For example, pure air and uncontaminated water are vital, as are the social systems that ensure them.

    Although poorly chosen analogies can be misleading, the analogy with our natural environment seems quite apt. Each of the following bulleted items applies both to trustworthy information systems and to natural environments.

    * Their critical importance is generally underappreciated until something goes fundamentally wrong -- after which undoing the damage can be very difficult if not impossible.

    * Problems can result from natural circumstances, equipment failures, human errors, malicious activity, or a combination of these and other factors.

    * Dangerous contaminants may emerge and propagate, often unobserved. Some of these may remain undetected for relatively long periods of time, whereas others can have immediately obvious consequences.

    * Your well-being may be dramatically impeded, but there is not much you as an individual can do about aspects that are pervasive -- perhaps international or even global in scope.

    * Detection, remediation, and prevention require cooperative social efforts, such as public health and sanitation efforts, as well as technological means.

    * Up-front preventive measures can result in significant savings and increases in human well-being, ameliorating major problems later on.

    * Once something has gone recognizably wrong, palliative countermeasures are typically fruitless -- too little, too late.

    * As we noted in Optimistic Optimization (CACM, June 2004), long-term thinking is relatively rare. There is frequently little governmental or institutional emphasis on prevention of bad consequences.

    * Many of the arguments against far-sighted planning and proactive remediation are skewed, being based on faulty, narrowly scoped, or short-sighted reasoning.

    * Commercial considerations tend to trump human well-being, with business models sometimes considering protection of public welfare to be detrimental to corporate and enterprise bottom lines.

    In some contexts, pure water is becoming more expensive than oil. Fresh air is already a crucial commodity, especially for people with severe breathing and health problems. Short- and long-term effects of inadequately trustworthy information systems can be similarly severe. Proactive measures are as urgently needed for system trustworthiness as they are for breathable air, clean water, and environmental protection generally. It is very difficult to remediate computer-based systems that were not designed and implemented with trustworthiness in mind. It is also very difficult to remediate serious environmental damage.

    Anticipating and responding to compelling long-term needs does not require extraordinary foresight, whether for air, water, reversing global warming, or trustworthy systems upon which to build our infrastructures. Our long-term well-being -- perhaps even our survival -- depends on our willingness to consider the future and to take appropriate actions.

    Peter Neumann moderates the ACM Risks Forum. He is Principal Scientist in the Principled Systems Group of the Computer Science Lab at SRI International. This column was inspired by an article by Tim Batchelder, ``An Anthropology of Air'', Townsend Letter for Doctors and Patients, pp. 105--106, November 2005. ``Because [air] is negative space, it is difficult to see the value in preserving it.''

    ========================================================

    Inside Risks 187, CACM 49, 1, January 2006

    Software and Higher Education

    John C. Knight and Nancy G. Leveson

    With software playing an undeniably critical role in our lives, one would expect that the best engineering techniques, such as rigorous specification and systematic inspections, would be applied routinely in its development. But in our experience, the opposite is often the case. Many large and important software development projects are conducted with poor choices of engineering techniques and technologies -- resulting in increased risks, costs, and delivery delays.

    This situation has arisen because those making technical decisions often do not have the engineering training needed to make good decisions. The blame for this limitation lies heavily on deficiencies in the computer science and computer engineering degree programs to which many people turn for their professional education. Here are some deficiencies that we have noted:

    * There is too little emphasis in these degree programs on the principles of software development; the emphasis at present is on currently popular details rather than on principles. Important topics such as specification and testing techniques tend to be taught rarely and superficially, if at all. Graduates know the syntax of the language du jour and details of the associated libraries, but not how to work with other engineers to specify, design, or test a system. Coverage of all the major elements of software development (including requirements, specification, design, and verification) must be included in degree programs, in such a way that graduates understand what choices are available and how to use them.

    * Education about the role of computers in systems is frequently limited to web applications and simple "isolated computer" assignments. It is essential that students get training in working with other engineering disciplines. This should take the form of examining significant example systems (both good and bad), as well as multi-disciplinary projects. Students might also be encouraged to take courses in other disciplines so as to see examples of how computer systems are used.

    * Many topics are taught from too narrow a perspective. Student exposure to software design, for example, is often limited to a shallow treatment of object-oriented design; graduates are often unaware of the myriad details of object-oriented design, or that there are other approaches such as event-driven systems, table-structured designs, and functional decomposition. A carefully selected set of techniques in each area needs to be presented to the student, and then the student needs to use at least one or two of the techniques to gain insight. It is also important for students to be exposed to appropriate comparative analysis of the techniques, and to have opportunities to discuss and evaluate the material in various engineering contexts.

    * There are important areas of the discipline, such as real-time and embedded systems, that are seldom taught and almost never in depth. There is a perception that such topics are somehow specialized and rarely required. In practice, many computer systems are embedded, and many do operate in real time. Degree programs must be broad enough to ensure that graduates understand the spectrum of systems being built and the techniques that they require. This could be achieved relatively easily if topics such as real-time systems were introduced as part of a more established topic such as design by showing how real-time issues impact design decisions.

    * Finally, graduates have a professional responsibility to know their limitations as developers of crucial artifacts upon which our future depends, but this responsibility is not taught extensively or treated as a priority. As well as being taught the right principles in degree programs, graduates must also be taught what they do not know, what their professional limits are, and when to seek help from others.

    Faculty involved in these degree programs should ascertain where their graduates go, and, if many go into software development, they should ask whether these graduates are receiving the right education from the courses the faculty teach. If they are not, then either the degree programs need to be improved or it needs to be made clear that the degree programs are not designed to prepare people for a career in software development. A better alternative might be for more institutions to do what a few have done already: develop degrees in software engineering.

    ========================================================