CACM Inside Risks

Here is a collection of recent Inside Risks columns from the Communications of the ACM, plus some selected earlier columns that are particularly important. All copyrights are held by the authors. Reuse for commercial purposes is subject to CACM and author copyright policy.

Following the clickable table of contents, the columns are given in REVERSE CHRONOLOGICAL ORDER -- most recent first. In order not to break existing hot links to old columns, to minimize waiting time, and to allow this file to remain the primary one in the future, the columns from 2004 through 2007 are in separate files, reached indirectly through this one. Columns since October 2010 are linked in their final PDF form from the CACM -- usually minus the copyrighted artwork. (The html form of earlier columns may differ very slightly from the published appearance.)

The text here is not necessarily identical to the printed versions, in which some ACM editing has taken place (for example, due to space limitations). Of particular recent interest, discussion of computer-related voting can be found in the columns of January 2001, November 2000, and June 2000, and in the earlier columns of November 1993, November 1992, and November 1990, which appear in the continuation of the menu. Other columns prior to December 1997 can be added on request.

[[[NOTE: After 18 years of 216 consecutive monthly columns that appeared on the inside last page of the Communications of the ACM, Inside Risks columns have subsequently appeared three times a year in the CACM Viewpoints section. I am enormously indebted to the long-standing members of my ACM Committee on Computers and Public Policy (Steve Bellovin, Peter Denning, Virgil Gligor, Nancy Leveson, Dave Parnas, Jerry Saltzer, Lauren Weinstein, Jim Horning [until 18 Jan 2013], and more recently Kevin Fu, Ben Zorn, and Zeynep Tufekci), whose diligent oversight and incisive interactions have helped make these columns relevant, timely, interesting, and appropriate for the CACM readership.]]]

========================================================

  • Certification of Safety-Critical Systems: Seeking new approaches toward ensuring the safety of software-intensive systems, Nancy Leveson, CACM, October 2023.
  • Computer-Related Risks and Remediation Challenges: Surveying the nontechnical issues interwoven with computer-related technologies, Peter G. Neumann, CACM, June 2023.
  • Toward Total-System Trustworthiness: Considering how to achieve the long-term goal to systemically reduce risks. Peter G. Neumann, June 2022.
  • The Risks of Election Believability (or Lack Thereof), Rebecca T. Mercuri and Peter G. Neumann, June 2021. A corresponding video interview is online, and both are also available with a single URL.
  • A Holistic View of Future Risks, Peter G. Neumann, October 2020.
  • How to Curtail Oversensing in the Home: Limiting sensitive information leakage via smart-home sensor data. Connor Bolton, Kevin Fu, Josiah Hester, and Jun Han, June 2020.
  • Are You Sure Your Software Will Not Kill Anyone? Using software to control potentially unsafe systems requires the use of new software and system engineering approaches. Nancy Leveson, February 2020.
  • How Might We Increase System Trustworthiness? Summarizing some of the changes that seem increasingly necessary to address known system and network deficiencies and anticipate currently unknown vulnerabilities, Peter G. Neumann, October 2019.
  • Through Computer Architecture, Darkly: Total-system hardware and microarchitectural issues are becoming increasingly critical, A. Theodore Markettos, Robert N.M. Watson, Simon W. Moore, Peter Sewell, and Peter G. Neumann, June 2019. DOI:10.1145/3325284
  • The Big Picture: A systems-oriented view of trustworthiness, Steven Bellovin and Peter G. Neumann, November 2018. DOI:10.1145/3277564
  • Risks of Cryptocurrencies: Considering the inherent risks of crypto ecosystems, Nicholas Weaver, June 2018
  • Risks of Trusting the Physics of Sensors: Protecting the Internet of Things with embedded security. Kevin Fu and Wenyuan Xu, February 2018
  • The Real Risks of Artificial Intelligence: Incidents from the early days of AI research are instructive in the current AI environment, David Lorge Parnas, October 2017
  • Trustworthiness and Truthfulness Are Essential: Their absence can introduce huge risks, Peter G. Neumann, June 2017
  • The Future of the Internet of Things: The IoT can become ubiquitous worldwide -- if the pursuit of systemic trustworthiness can overcome the potential risks, Ulf Lindqvist and Peter G. Neumann, February 2017
  • Risks of Automation: A Cautionary Total-System Perspective of Our Cyberfuture, Peter G. Neumann, October 2016
  • The Risks of Self-Auditing Systems: Unforeseen problems can result from the absence of impartial independent evaluations. Rebecca T. Mercuri and Peter G. Neumann, June 2016
  • Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications, Harold Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Matthew Green, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Michael A. Specter, Daniel J. Weitzner, October 2015
  • Routing Money, Not Packets: Revisiting Network Neutrality, Vishal Misra, June 2015
  • Far-Sighted Thinking about Deleterious Computer-Related Events: Considerably more anticipation is needed for what might seriously go wrong. Peter G. Neumann, February 2015
  • Risks and Myths of Cloud Computing and Cloud Storage: Considering existing and new types of risks inherent in cloud services, Peter G. Neumann, October 2014
  • EMV: Why Payment Systems Fail: What lessons might we learn from the chip cards used for payments in Europe, now that the U.S. is adopting them too? Ross Anderson and Steven Murdoch, June 2014
  • An Integrated Approach to Safety and Security Based on System Theory: Applying a more powerful new safety methodology to security risks, Nancy Leveson and William Young, February 2014
  • Controlling for Cybersecurity Risks of Medical Device Software: Medical device hacking is a red herring. But the flaws are real, Kevin Fu and James Blum, October 2013
  • Learning from the Past to Face the Risks of Today: Achieving high-quality safety-critical software requires much more than just rigorous development processes, Nancy Leveson, June 2013
  • More Sight on Foresight: Reflecting on elections, natural disasters, and the future, Peter G. Neumann, February 2013
  • The Foresight Saga, Redux: Short-term thinking is the enemy of the long-term future, Peter G. Neumann, October 2012
  • The Cybersecurity Risk: Increased attention to cybersecurity has not resulted in improved cybersecurity, Simson Garfinkel, June 2012
  • Yet Another Technology Cusp: Confusion, Vendor Wars, and Opportunities, Don Norman, February 2012
  • Modernizing the Danish Democratic Process, Carsten Schürmann, October 2011
  • The Risks of Stopping Too Soon, David L. Parnas, June 2011 NOTE: This link does not include the ``primitive buzz-diagram sighted in the wild -- the meaning of which eluded capture.''
  • The Growing Harm of Not Teaching Malware, George Ledin, Jr., February 2011, pp. 32--34
  • Risks of Undisciplined Development, David L. Parnas, October 2010, pp. 25--27
  • Privacy by Design: Moving from Art to Practice, Stuart S. Shapiro, June 2010
  • The Need for a National Cybersecurity Research and Development Agenda, Douglas Maughan, February 2010
  • Reflections on Conficker: An insider's view of the analysis and implications of the Conficker conundrum, Phillip Porras, October 2009
  • Reducing Risks of Implantable Medical Devices: A Prescription to Improve Security and Privacy of Pervasive Health Care, Kevin Fu, June 2009
  • U.S. Election After-Math, Peter G. Neumann, February 2009
  • Risks of Neglecting Infrastructure, Jim Horning and Peter G. Neumann, June 2008
  • The Physical World and the Real World, Steven M. Bellovin, May 2008
  • A Current Affair, Lauren Weinstein, April 2008
  • Wireless Sensor Networks and the Risks of Vigilance, Xiaoming Lu and George Ledin Jr, March 2008
  • Software Transparency and Purity, Pascal Meunier, February 2008
  • The Psychology of Risks, Dr. Leonard S. Zegans, January 2008
  • Internal Surveillance, External Risks, Steven M. Bellovin, Matt Blaze, Whitfield Diffie, Susan Landau, Jennifer Rexford, Peter G. Neumann, December 2007
  • Risks of E-Voting, Matt Bishop and David Wagner, November 2007
  • Toward a Safer and More Secure Cyberspace, Herbert S. Lin, Alfred Z. Spector, Peter G. Neumann, Seymour E. Goodman, October 2007
  • E-migrating Risks? Peter G. Neumann, September 2007
  • Which is Riskier: OS Diversity or OS Monopoly? David Lorge Parnas, August 2007
  • Disasters Evermore?, Charles Perrow, July 2007
  • Risks are Your Responsibility, Peter A. Freeman, June 2007
  • The Psychology of Security, Bruce Schneier, May 2007
  • Risks of Virtual Professionalism, Jim Horning, April 2007
  • Risks of Risk-Based Security, Donn B. Parker, March 2007
  • Widespread Network Failures, Peter G. Neumann, February 2007
  • Ma Bell's Revenge: The Battle for Network Neutrality, Lauren Weinstein, January 2007
  • Liability Risks with Reusing Third-Party Software, William Hasselbring, Matthias Rohr, Jürgen Taeger, and Daniel Winteler, December 2006
  • COTS and Other Electronic Voting Backdoors, Rebecca Mercuri, Vincent J. Lipsio, and Beth Feehan, November 2006
  • Virtual Machines, Virtual Security, Steven Bellovin, October 2006
  • The Foresight Saga, Peter G. Neumann, September 2006
  • Risks of Online Storage, Deirdre K. Mulligan, Ari Schwartz, and Indrani Mondal, August 2006
  • Risks Relating to System Compositions, Peter G. Neumann, July 2006
  • EHRs: Electronic Health Record or Exceptional Hidden Risks, Robert Charette, June 2006
  • RISKS of RFID, Peter G. Neumann and Lauren Weinstein, May 2006
  • Fake ID: Batteries Not Included, Lauren Weinstein, April 2006
  • Real ID, Real Trouble?, Marc Rotenberg, March 2006
  • Trustworthy Systems Revisited, Peter G. Neumann, February 2006
  • Software and Higher Education, John C. Knight and Nancy G. Leveson, January 2006
  • Wikipedia Risks, Peter Denning, Jim Horning, David Parnas, and Lauren Weinstein, December 2005
  • The Real National-Security Needs for VoIP, Steven Bellovin, Matt Blaze, and Susan Landau, November 2005
  • The Best-Laid Plans: A Cautionary Tale for Developers, Lauren Weinstein, October 2005
  • Risks of Technology-Oblivious Policy, Barbara Simons and Jim Horning, September 2005
  • Disability-Related Risks, Peter G. Neumann and Michael D. Byrne, August 2005
  • DRM and Public Policy, Ed Felten, July 2005
  • What Lessons Are We Teaching? Susan Landau, June 2005
  • Risks of Third-Party Data, Bruce Schneier, May 2005
  • Two-Factor Authentication: Too Little, Too Late, Bruce Schneier, April 2005
  • Anticipating Disasters, Peter G. Neumann, March 2005
  • Responsibilities of Technologists, Peter G. Neumann, February 2005
  • Not Teaching Viruses and Worms Is Harmful, George Ledin Jr, January 2005
  • Spamming, Phishing, Authentication, and Privacy, Steve Bellovin, December 2004
  • Evaluation of Voting Systems, Poorvi L. Vora, Benjamin Adida, Ren Bucholz, David Chaum, David L. Dill, David Jefferson, Douglas W. Jones, William Lattin, Aviel D. Rubin, Michael I. Shamos, and Moti Yung, November 2004
  • The Non-Security of Secrecy, Bruce Schneier, October 2004
  • The Big Picture, Peter G. Neumann, September 2004
  • Close Exposures of the Digital Kind, Lauren Weinstein, August 2004
  • Insider Risks in Elections, Paul Kocher and Bruce Schneier, July 2004
  • Optimistic Optimization, Peter G. Neumann, June 2004
  • Artificial Stupidity, Peter J. and Dorothy E. Denning, May 2004
  • Coincidental Risks, Jim Horning, April 2004
  • Risks of Monoculture, Mark Stamp, March 2004
  • Outsourced and Out of Control, Lauren Weinstein, February 2004
  • Believing in Myths, Marcus J. Ranum, January 2004
  • The Devil You Know, Lauren Weinstein, December 2003
  • Security by Insecurity, Rebecca T. Mercuri and Peter G. Neumann, November 2003
  • Information System Security Redux, Peter G. Neumann, October 2003
  • Risks in Trusting Untrustworthiness, Peter G. Neumann, September 2003
  • Spam Wars, Lauren Weinstein, August 2003
  • How Secure Is Secure Web Browsing?, Albert Levi, July 2003
  • Reflections on Trusting Trust Revisited, Diomidis Spinellis, June 2003
  • E-Epistemology and Misinformation, Peter G. Neumann, May 2003
  • On Sapphire and Type-Safe Languages, Andrew Wright, April 2003
  • Risks of Total Surveillance, Barbara Simons and Eugene H. Spafford, March 2003
  • Gambling on System Accountability, Peter G. Neumann, February 2003
  • The Mindset of Dependability, Michael Lesk, January 2003
  • Why Security Standards Sometimes Fail, Avishai Wool, December 2002
  • Florida 2002: Sluggish Systems, Vanishing Votes, Rebecca Mercuri, November 2002
  • Secure Systems Conundrum, Fred B. Schneider, October 2002
  • Risks of Digital Rights Management, Mark Stamp, September 2002
  • Risks in Features vs. Assurance, Tolga Acar and John R. Michener, August 2002
  • Risks: Beyond the Computer Industry, Donald A. Norman, July 2002
  • Free Speech Online and Offline, Ross Anderson, June 2002
  • Risks of Inaction, Lauren Weinstein, May 2002
  • Digital Evidence, David WJ Stringer-Calvert, Apr 2002
  • Risks of Linear Thinking, P. Denning/Horning, Mar 2002
  • The Homograph Attack, Gabrilovich/Gontmakher, Feb 2002
  • Uncommon Criteria, Rebecca Mercuri, Jan 2002
  • Risks of National Identity Cards, Neumann/Weinstein, Dec 2001
  • Risks of Panic, Weinstein/Neumann, Nov 2001
  • The Perils of Port 80, Somogyi/Schneier, Oct 2001
  • Web Cookies: Not Just a Privacy Risk, Sit/Fu, Sep 2001
  • Risks in E-mail Security, Levi/Koc, Aug 2001
  • Learning from Experience, Horning, Jul 2001
  • PKI: A Question of Trust and Value, Forno/Feinbloom, Jun 2001
  • Be Seeing You!, Weinstein, May 2001
  • Cyber Underwriters Lab?, Schneier, Apr 2001
  • Computers: Boon or Bane?, Neumann/Parnas, Mar 2001
  • What To Know About Risks, Neumann, Feb 2001
  • System Integrity Revisited, Mercuri/Neumann, Jan 2001
  • Semantic Network Attacks, Schneier, Dec 2000
  • Voting Automation (Early and Often?), Mercuri, Nov 2000
  • Tapping On My Network Door, Blaze/Bellovin, Oct 2000
  • Missile Defense, Neumann, Sep 2000
  • Shrink-Wrapping Our Rights, Simons, Aug 2000
  • Risks in Retrospect, Neumann, Jul 2000
  • Risks of Internet Voting, Weinstein, Jun 2000
  • Internet Risks, Weinstein/Neumann, May 2000
  • Denial-of-Service Attacks, Neumann, Apr 2000
  • A Tale of Two Thousands, Neumann, Mar 2000
  • Risks of PKI: Electronic Commerce, Ellison/Schneier, Feb 2000
  • Risks of PKI: Secure E-Mail, Ellison/Schneier, Jan 2000
  • Risks of Insiders, Neumann, Dec 1999
  • Risks of Content Filtering, Neumann/Weinstein, Nov 1999
  • Continuation of menu: click here for earlier columns.
    ========================================================

    Inside Risks 162, CACM 46, 12, December 2003

    The Devil You Know

    Lauren Weinstein

    Question: What's worse than buggy software? Answer: Patches and upgrades that make things even worse. This is a dilemma critical to many applications. How should we cope with the untold millions of computers that are constantly subjected to penetrations, viruses, worms, and other nasties that exploit a steady stream of security weaknesses and flaws? Is finessing, coercing, or even forcing users to install updates a solution -- or just an invitation to further aggravation and potential disasters?

    The underlying problem is obvious. Much commercial software is a mess on the inside. Get past the flashy graphics and the fancy user interfaces, and you frequently descend into a nightmarish realm of twisted spaghetti-like code that might better belong in a Salvador Dali painting. One recurring type of software security bug -- buffer overflows -- dates back to the dawn of computing, but only recently are we seeing some serious attempts to limit this vulnerability systemically.

    Meanwhile, the 800-pound gorilla of PC software, Microsoft, sends forth a stream of patches intended to correct what they themselves designate as ``critical'' security flaws in their systems, applications, and even their own previous patches. Microsoft certainly isn't alone when it comes to software flaws, but as the massively dominant desktop system vendor, their software and support decisions tend to have much more influence on most consumers, businesses, and other organizations than those of other firms.

    Microsoft has expressed continuing concerns about user behavior. They seem to say, in essence, ``If we could just find some way to get users to install each and every patch forever, the bugs in our software wouldn't really matter so much.'' This seems somewhat akin to a vampire, after having bitten your throat and transformed you into one of the living dead, pointing out that vampirism really wasn't so bad as long as you got plenty of blood every night and stayed out of the sun.

    Many computer users pay little if any attention to the issues of security bugs. They take the unfortunate but understandable view that if something seems to be working adequately, don't try to fix it. In the security realm, this can indeed be a very dangerous attitude.

    On the other hand, many expert computer users (particularly those using Microsoft products) don't ignore patches -- they're simply terrified of them. Too often, installing seemingly innocuous ``fixes'' into working systems results in instability, crashes, or even total unusability. Interactions between patches and other software, particularly already-installed third-party packages, can result in widespread disruption to both application and system software. And often there's been no going back without total system restores. For example, Microsoft patches have often been incapable of being effectively removed in case of problems. Microsoft has now announced the move to (more organized) monthly aggregated patches -- but has already had to issue additional interim patches to patch their monthly patches!

    For a time, it was reported that Microsoft was considering the possibility of forcing virtually all users of their systems to accept Microsoft's Internet-delivered updates. More recently, they've been talking about changing the defaults for their ``home user'' systems to automatically accept Microsoft-provided ``critical'' Internet-delivered patches, unless specifically instructed otherwise by users. Not only is it unclear how to accurately delineate this ``home users'' category, but such a segregation may also reflect an ominous attitude -- that it's somehow less serious to screw up home users' computers than those of businesses and other more well-heeled customers. This would be an unacceptable outcome.

    Widely deployed automatic updating systems for PCs could carry with them another very real and serious risk -- the possibility of hackers cracking the Internet-connected update mechanisms, either at the user systems themselves or at central servers, then using them as convenient portals for their own nefarious payloads. Weaknesses in autonomous updating environments (and we know from experience that there almost certainly will be weaknesses) could provide yet another endless series of field days for worms, viruses, and other software nightmares.

    Users (and/or system administrators, as appropriate) have the need and right to fully control their own computers. No particular class of users should be subjected to defaults considered too risky for another group, nor should we need to risk having our operational systems sidelined by possibly unstable vendor patches that may do more damage than the original bugs. A plethora of patches will never be a substitute for true quality software.

    Lauren Weinstein (lauren@pfir.org) is co-founder of People For Internet Responsibility http://www.pfir.org. He moderates the Privacy Forum http://www.vortex.com/privacy.

    ========================================================

    Inside Risks 161, CACM 46, 11, November 2003

    Security by Insecurity

    Rebecca T. Mercuri and Peter G. Neumann

    The belief that code secrecy can make a system more secure is commonly known as security by obscurity. Certainly, vendors have the right to use Trade Secret protection for their products in order to extend ownership beyond the terms afforded under Copyright and Patent law. But some software systems must satisfy critical requirements under intensive challenges, and thus must be trustworthy. The following scenarios illustrate the limitations of the myth of security by obscurity.

    The Ostrich. Metaphorically, many people think (falsely) that ostriches put their heads in the sand in the belief that they are invisible. Some designers think that by restricting access to their system code, exploitable vulnerabilities will not be exposed. The fallacy in this line of reasoning was evident in Matt Blaze's 1994 discovery of a flaw in the Escrowed Encryption Standard (Clipper) that could be used to circumvent law-enforcement monitoring [http://www.risks.org, risks-16.11 and 16.12, June 1994]. Surprisingly, Blaze's method allowed for even easier access to secure communication through the proliferation of Clipper products than was heretofore possible, without access to any keys, backdoors, or weaknesses in the encryption algorithm. (Hiding cryptographic keys is of course necessarily a form of security by obscurity.)

    The Emperor Has No Clothes. A fabled trusted entourage agrees with a foolish assertion because each observer fears retribution. It takes a child with no reason to kowtow to authorities to reveal the truth. Software is not like Coca-Cola(R), where for decades a handful of key employees have been trusted with a secret recipe and production process. Many computer systems are constructed in environments where a host of developers, debuggers, integrators, evaluators, and end users (with shared or possibly conflicting interests) require access to the proprietary product. Each of these individuals or agencies (collectively and individually) may hold the ``keys to the kingdom'', not only because they have knowledge of some or all of the secret code, but because they may be aware of limitations and constrained from revealing or sharing this information. Also, organizational culture may discourage whistle-blowing, even when dire consequences are possible. This happened in both Space Shuttle disasters, where the O-ring and debris damage problems were known within the NASA community before the fateful missions.

    I've Got a Secret. As Benjamin Franklin observed, ``three may keep a secret if two of them are dead.'' The ease with which digital files can be transmitted (willingly or inadvertently) compounds the software secrecy problem. Earlier this year, a number of program files relating to a proprietary voting system were discovered on a subcontractor's unsecured FTP site. Diebold (the vendor) argued that the software subsequently reviewed at Johns Hopkins [Kohno, Stubblefield, Rubin, and Wallach, Analysis of an Electronic Voting System, July 23, 2003, http://www.avirubin.com/vote.pdf] ``represents a very small percentage of the entire code needed to conduct an election'' [Diebold Election Systems exposes flaws found in recent voting system report, July 29, 2003, http://www.diebold.com/election.htm]. Of course, this does not excuse the lax security that allowed the code to be downloaded from the Internet in the first place.

    The Shell Game. Here, a trickster uses sleight of hand to keep an object from view. In the above voting system example, the Johns Hopkins analysis team found numerous security flaws in the code, one of which involved the use of a vulnerable DES encryption protocol, along with a hardcoded key in the source file [http://www.avirubin.com/vote.pdf]. Diebold defended its system in a rebuttal report, claiming that the examined software ``was an older version'' that had ``passed rigorous functional tests and reviews'' [Checks and balances in elections equipment and procedures prevent alleged fraud scenarios, July 30, 2003, http://www.diebold.com/checksandbalances.pdf]. However, many election equipment tests are performed in secret, thus making it impossible to ascertain the level of rigor applied. One such examiner, Douglas Jones, had served on Iowa's Board of Examiners and, based on his analysis of a federal test report, had asserted to Global (Diebold's predecessor) in 1997 and the House Science Committee in 2001 that inappropriate key management was being used with this firm's products [Douglas W. Jones, The Case Against the Diebold AccuVote TS, July 28, 2003, http://www.cs.uiowa.edu/~jones/voting/dieboldacm.html]. It will be difficult to know whether such problems have been adequately resolved, because Diebold plans to continue its closed-source practices.

    As noted here in October 2003, open source by itself does not provide a solution to these problems. Risk assessment, examination, and testing appropriate to deployment settings are fundamental to security assurance -- which should not be hampered by vendors' refusals to disclose critical components where a need to know can be demonstrated. Furthermore, customers should not cling to the false hopes of security by obscurity.

    Rebecca Mercuri (mercuri@acm.org), a Research Fellow at Harvard University's Kennedy School of Government, authors CACM's quarterly Security Watch column. See her Web site at http://www.notablesoftware.com. Peter Neumann moderates the ACM Risks Forum. See his Web site at http://www.CSL.sri.com/neumann .

    ========================================================

    Inside Risks 160, CACM 46, 10, October 2003

    Information System Security Redux

    Peter G. Neumann

    In September 2003 we discussed risks in trusting entities that might not actually be trustworthy. And yet, people use flawed systems that may cause more security and reliability problems than they solve.

    There are various reasons why untrustworthy mass-market software might be used so extensively, even if the source code is proprietary and the vendor can arbitrarily download questionable software changes without user intervention. Sometimes this is a path of least resistance, with few perceived alternatives. Or it has the appearance of saving money in the short term. In some cases it is mandated organizationally -- ostensibly to simplify procurement, administration, and maintenance, or because of a desire to remain within the monolithic mainstream. Often security, reliability, and the risks of networking are considered less important, or else there is a misplaced trust that the free market will provide a cure. But the simplest answer may be ``because it's there.'' However, irrespective of any reasons why people might want to use flawed software, in certain cases it might be wiser not to use it -- especially where the risks are considerable.

    In my fourth testimony (August 2001) in five years for committees of the U.S. House of Representatives, I made the following statement -- amplifying similar statements made in previous years:

    ``Although there have been advances in the research community on information security, trustworthiness, and dependability, the overall situation in practice appears to continually be getting worse, relative to the increasing threats and risks -- for a variety of reasons. The information infrastructure is still fundamentally riddled with security vulnerabilities, affecting end-user systems, routers, servers, and communications; new software is typically flawed, and many old flaws still persist [as RISKS readers observe repeatedly]; worse yet, patches for residual flaws often introduce new vulnerabilities. There is much greater dependence on the Internet, for Governmental use as well as private and corporate use. Many more systems are being attached to the Internet all over the world, with ever increasing numbers of users -- some of whom have decidedly ulterior motives. Because so many systems are so easily interconnectable, the opportunities for exploiting vulnerabilities and the ubiquity of the sources of threats are also increased. Furthermore, even supposedly stand-alone systems are often vulnerable. Consequently, the risks are increasing faster than the amelioration of those risks.''

    The situation seems still worse in 2003, especially in mass-market software. The continuing flurry of viruses, worms, and system crashes raises the level of inconvenience to users and institutions. The incessant flow of identified vulnerability reports and the further existence of flaws that are not publicly known suggest serious problems. The continual needs for installing thousands of patches in mass-market software (and the iterative problems they sometimes cause) suggest that we are not converging. Putting the blame on inadequate system administration seems fatuous. The August 2003 exploitations of Microsoft problems (the Blaster worm and the SoBig virus) are further examples of endemic problems in vulnerable systems that can be exploited. Unfortunately, too many people seem to be oblivious to the underlying security problems.

    Suggestions that we need to raise the bar may be countered with the argument that past attacks have not really been serious, and we have never had any pervasive disasters of information system security, so why should we worry? However, it is precisely because past events have been relatively benign (compared with what they could have done) that there should be greater concern. Furthermore, a general overemphasis on short-term costs allows long-term concerns to be ignored.

    The Free Software / Open Source movements have been touted as possible alternatives to the inflexibilities of closed-source proprietary code. Indeed, GNU-Linux/BSD Unix variants are gaining considerable credibility, and are seemingly less susceptible to malware attacks. However, by itself, availability of source code is not a panacea, and sound software engineering is still essential. Even if an entire system has been subjected to extremely rigorous open evaluation and stringent operational controls, that may not be enough to ensure adequate behavior.

    Many users have grown accustomed to flaky software, perhaps because they do not have to meet critical requirements (as in nuclear power control, power distribution, and flight and air-traffic control) and suffer no liability for disasters. Perhaps it is time to follow the adage of ``Just Say No'' to bad software that is seriously unsecurable, and to demand that software development be dramatically improved.

    Neumann moderates the ACM Risks Forum (www.risks.org). See www.CSL.sri.com/neumann for extensive background for this topic -- including Congressional testimonies.

    ========================================================

    Inside Risks 159, CACM 46, 9, September 2003

    Risks in Trusting Untrustworthiness

    Peter G. Neumann

    The Internet provides ample opportunity for proving the age-old truism, ``There's a sucker born every minute.'' Carnival-style swindles and other confidence games once limited to in-person encounters are now proliferating electronically, world-wide, at low cost and effort. A blatantly obvious example is the so-called Nigerian-style scam that requests use of one's bank account to move money; hoping for a proffered generous commission, the suckers are then separated from their assets. It is astounding that people still fall for such obvious frauds.

    There are countless other kinds of scams, stings, and misrepresentations. Spam e-mail offering bogus goods and services opens up new avenues for fraud and identity theft. On-line activities are emerging with glaring opportunities for swindles, manipulations, and assorted malfeasance, such as on-line auctions (with various irregularities including nondelivery and secondary criminality), Internet gambling (especially off-shore), and fraudulent Web sites (for example, with deceptive URLs creating the appearance of legitimacy). We have previously noted here that electronic voting systems present a significant risk --- especially for use over the Internet. With independent accountability seriously lacking today, e-voting can be likened to using an off-shore gambling site not subject to any regulation. Any of these and other situations could result in inordinate risks, such as financial ruin, blackmail, compromised democracy, or even loss of life. But it is perhaps less astounding that people fall for such schemes, particularly when the technology superficially appears genuine.

    We tend to trust certain third-party relationships, with banks, telephone companies, airlines, and other service providers whose employees have in some way earned our trust, collectively or individually. But what about untrustworthy third parties? Some computer-based applications rely critically on the putative integrity and noncompromisibility of automated trusted third parties, with little if any easily demonstrated human accountability. Examples include digital-certificate authorities, cryptographic servers, surveillance facilities, sensitive databases for law enforcement, credit-information bureaus, and the like. With increasingly appealing short-term cost incentives for pervasive use of outsourcing, the need for trustworthiness of third-party institutions becomes even more important. However, security, privacy, and accountability often seem to be ignored in efforts to save money.

    Is placing trust in off-shore enterprises inherently riskier than using domestic services? Not necessarily. Corruption and inattention to detail are world-wide problems. The deciding factor here is perhaps the extent to which comprehensive oversight can be maintained.

    Is domestic legislation enough? Of course not. Any legislation should not be overly simplistic; for example, it should avoid seeking solely technological fixes or purely legislative solutions to deeper problems. Besides, serious complexities arise from the fact that such problems are international in scope and demand international cooperation.

    Is there a role for liability (for flagrant behavior) and differential insurance rates --- for example, based on how well a purveyor is living up to what is expected of it? Yes. Such measures have significant potential, although they will be strongly resisted in many quarters.

    So, how can we provide some meaningful assurance that critical entities (direct parties, third parties, or otherwise) are sufficiently trustworthy? Ideally, institutions providing, controlling, managing, and monitoring potentially riskful operations should be decoupled from other operations, eschewing conflicts of interest, and subjected to rigorous independent oversight. Enronitis and collusion must be avoided, even if it means that the costs are greater. Furthermore, the people involved need altruism, sufficient foresight to anticipate the risks, and a commitment to effectively combat those risks. At the very least, their backgrounds should be free of criminal convictions and other activities that would cast serious suspicions on their trustworthiness. In addition, we need legislators able to see beyond the simplistic and palliative, to approaches that address the real problems. Above all, we desperately need a populace that is more aware of the risks and the needs outlined above.

    This column should not be news to most of you. Overall, there are many risks that must be addressed. The old Latin expression ``Caveat emptor'' (Let the buyer beware) seems quite timely today. Ultimately, it all comes down to ``Sed quis custodiet ipsos custodes?'' (But who is watching the watchers?)

    Peter Neumann moderates the ACM Risks Forum (www.risks.org).

    ========================================================

    Inside Risks 158, CACM 46, 8, August 2003

    Spam Wars

    Lauren Weinstein

    In the June 1997 edition of this column, Peter G. Neumann and I asked if you were being ``flooded'' with spam. At the time, spam was already annoying, but as it turns out the real torrent hadn't even begun! Fast-forward to 2003, and spam now threatens the Internet's stability and reliability -- not only of e-mail systems, but potentially of the network infrastructure itself. Spam is quite probably a greater actual threat to the stability of the Internet today than the theoretical risk of Net-based terrorism.

    Estimates are that typical Internet users' inbound e-mail may now be about 50% spam. Highly visible e-mail addresses are pounded even harder. A couple of years ago spam likely accounted for something less than 10% of overall e-mail received. The trend-line is alarming to say the least.

    We will drown in spam unless solutions can be found. Organizations ranging from the Federal Trade Commission (which belatedly wants more anti-spam powers) to the American Marketing Association (who worry that their members' e-mail marketing messages are ``misconstrued'' as spam), as well as other public and private organizations, tend to propose generally simplistic spam-cure patent medicines.

    Yet, most of the hodgepodge of ``quick fix'' spam control ideas seem unlikely to significantly stem spam's flow, and in many cases may do more harm than good.

    Legislation to explicitly outlaw spamming is certainly necessary, but tends to be of limited usefulness except against big and obvious spammers, an issue further complicated by spam's global and easily obfuscated reach. Poorly written anti-spam laws might actually have the perverse effect of legitimizing gigantic amounts of ``unsolicited bulk e-mail'' -- that is, spam!

    The crooks using spam to hawk fake bodily enhancement products, illegal cable TV boxes, and Nigerian free-money frauds (to name but a few) are unlikely to be concerned about legal strictures against spam.

    Common spam filtering programs are usually of the ``dirt under the rug'' variety. They categorize and/or delete spam messages after they've been received by systems, but by then much of the bandwidth and processing costs of the spam have already been wasted. The false-positive rate with such programs is also a major problem -- important nonspam e-mail is frequently identified as spam and relegated unseen to the bit-bucket.

    Other ad hoc technical measures against spam can have negative consequences of their own. ISPs increasingly engage in severe restrictions (such as preventing subscribers from running their own secure mail servers) that hobble users, don't significantly dent the overall spam flow, and can also inflict collateral damage to innocent sites.

    Technical anti-spam fads such as ``challenge-response'' threaten to tangle users' e-mail and legitimate Internet mailing lists into knots, while actually increasing the flow of spam-related traffic due to bounced and misdirected spam challenges.

    Today's Internet e-mail systems (basically defined more than two decades ago) are inadequate to deal with the e-mail environment we now face, in terms of spam and other critical problems such as security, reliability, and authentication. It's time for fundamental change, rather than Band-Aid, piecemeal-reactive approaches.

    There are various possible evolutionary routes towards practical, sustainable, and significantly fundamental structural changes to Internet e-mail that could be implemented in phased approaches to avoid unreasonable disruption of existing e-mail systems during a transition period.

    Peter and I have proposed one such path for discussion, which we've dubbed ``Tripoli'' (for Triple-E, Enhanced E-Mail Environment). Tripoli focuses on vesting e-mail control decisions with users (especially the recipients of e-mail), rather than ISPs or governments.

    Tripoli postulates a token-based authentication system to provide for flexible spam controls, along with intrinsic encryption and other security functions, while still providing the option of permission-based ``anonymous'' e-mail communications. Importantly, we believe that the Tripoli framework deals appropriately with the free-speech and other concerns that we expressed in our earlier column regarding anti-spam policy issues. We hope Tripoli will provide a useful and continuing contribution to the ongoing debates over e-mail futures. (Please see http://www.pfir.org/tripoli-announce for details.)

    Unless we get started now on the onerous but necessary task of fundamentally redesigning Internet e-mail in a sustainable manner, we will find that our electronic mail nightmare has just begun.

    Lauren Weinstein (lauren@pfir.org) is co-founder of People For Internet Responsibility (www.pfir.org). He is moderator of the Privacy Forum (http://www.vortex.com).

    ========================================================

    Inside Risks 157, CACM 46, 7, July 2003

    How Secure Is Secure Web Browsing?

    Albert Levi

    Security is of particular importance when sensitive information is sent through the WWW. Web users must rely on the security of the browser's Secure Sockets Layer (SSL) protocol. Although the closed-padlock icon in a browser window depicts a secure connection, it does not imply a totally risk-free secure connection. Whenever the padlock is snapped or a security-related message pops up, you should be alert and scrutinize the security of that connection.

    During the handshake of a secure connection, the server sends a public-key certificate to identify itself. You assume you have a secure connection to the entity identified in the certificate, but that entity may not be who you think it is. So, what is the critical issue in verifying a server certificate? It is with the root CA's (certification authority) self-signed certificate that the verification starts. We trust root CAs (assuming that they don't issue certificates to copycat servers) because our browser developer does. An initial list of root CA certificates comes with browsers. Depending on their trust in browser developers, users may assume that all root CAs that come with browsers are robust. However, authenticity is an important concern for root CA certificates installed after the browser itself. An attacker can introduce bogus certificates whose installation is automated via a VB script. The client sees only a final approval screen that may easily be dismissed by pressing the ``yes'' button.

    Let's consider the following possible scenario. Suppose you've connected to your bank, www.xyzbank.com. Using network spoofing techniques, an attacker reroutes this traffic to its counterfeit site, and imitates a well-known root CA as the issuer for a fake certificate created for xyzbank. The attacker creates a second imitation certificate: a self-signed root CA certificate for the same well-known root CA. When these imitation certificates are used for a secure connection, you, as a client, will see a warning saying that the root CA is not to be trusted. Taking a closer look at the certificate details is of no help, even harmful, because your favorite root CA seems to be the issuer. You might easily prefer to continue and maybe install the imitation certificate assuming that there is a bug in your system. Because the victim will see the well-known root CA's name on the final approval screen, he/she would probably buy into the con.

    The only authentication guarantee provided by a closed padlock is that the URL in the certificate is the same as the one in the address bar of your browser. A closed padlock does not indicate the server's commercial identity; browsers tell you nothing about the certificate they just used to snap the padlock. You have to examine the certificate details to ascertain commercial identity. For example, when you connect to www.delta.com, you can't be certain you're connected to Delta Airlines just by the closed padlock; you have to scan the certificate details by clicking on the padlock. If www.delta.com were, say, Delta Foods, you would again see a closed padlock even if the Web page looked like the airline's.
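    For readers who want to look behind the padlock programmatically rather than by clicking, the following sketch (a hypothetical example written against the OpenSSL C library, with www.example.com standing in as a placeholder for the site you actually care about) connects to a server, prints the subject and issuer fields of the certificate that was really presented, and reports whether the chain verified against a root CA already in the local trust store. It only illustrates the checks discussed above; it is not a complete or hardened client.

      #include <stdio.h>
      #include <openssl/ssl.h>
      #include <openssl/x509.h>
      #include <openssl/bio.h>

      int main(void)
      {
          SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
          SSL_CTX_set_default_verify_paths(ctx);     /* use the local root-CA store */

          BIO *bio = BIO_new_ssl_connect(ctx);
          SSL *ssl = NULL;
          BIO_get_ssl(bio, &ssl);
          SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);
          SSL_set1_host(ssl, "www.example.com");     /* name the certificate must match */
          BIO_set_conn_hostname(bio, "www.example.com:443");   /* placeholder host */

          if (BIO_do_connect(bio) <= 0) {
              fprintf(stderr, "connection or handshake failed\n");
              return 1;
          }

          /* Whose certificate did we actually receive, and who issued it? */
          X509 *cert = SSL_get_peer_certificate(ssl);
          if (cert != NULL) {
              char subject[256], issuer[256];
              X509_NAME_oneline(X509_get_subject_name(cert), subject, sizeof subject);
              X509_NAME_oneline(X509_get_issuer_name(cert), issuer, sizeof issuer);
              printf("subject: %s\nissuer:  %s\n", subject, issuer);
              X509_free(cert);
          }

          /* Did the chain verify against a trusted root CA, and did the name match? */
          long rc = SSL_get_verify_result(ssl);
          printf("verification: %s\n",
                 rc == X509_V_OK ? "OK" : X509_verify_cert_error_string(rc));

          BIO_free_all(bio);
          SSL_CTX_free(ctx);
          return 0;
      }

    (Compile with -lssl -lcrypto; OpenSSL 1.1.0 or later is assumed for TLS_client_method and SSL_set1_host.)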

    Certificate examination highlights the dilemma of server identification: the certificate contains the formal name and URL, but the average user needs to see something easily recognizable from previous experience such as the brand name, logo, or current slogan.

    Furthermore, to take advantage of URL control, you always have to be aware of the URL you're browsing by checking the address bar. Some secure applications pop up browser windows with the address bars and toolbars removed, so as to restrict customers to just the buttons provided. In other cases an address bar exists, but so much information is crammed into it that the bar is scrolled left and the URL is not visible without scrolling right.

    A closed/open padlock indicates whether the just-completed transfer was secured or not; it gives no security information about the next connection, which might be a password transfer triggered by clicking the ``sign-in'' button. Therefore, whether you enter your password on a secured or an unsecured Web page, that password may still go out unencrypted. In either case, you should examine the source code of the current Web page to see whether the next connection is secured.

    Secure Web browsing needs a careful and questioning user. Checking the certificate details and controlling the root certificate store definitely helps. Root certificate installations should be avoided. Also, keep your eyes on the address bar. In general, don't bury your head in the sand just by trusting a padlock icon.

    Albert Levi (levi@sabanciuniv.edu) is an assistant professor of computer science and engineering at Sabanci University, Istanbul, Turkey.

    ========================================================

    Inside Risks 156, CACM 46, 6, June 2003

    Reflections on Trusting Trust Revisited

    Diomidis Spinellis

    Security is often described as a weak-link phenomenon. Ken Thompson in his 1983 Turing Award Lecture [1] described how a compiler could be modified to plant a Trojan horse into the system's login authentication program so that it would accept a known password. In addition, the C compiler could be altered to propagate this change when it was recompiled from its (unmodified) source code. The system Thompson described was seriously compromised and could never be trusted: even a recompilation from clean source code would yield a Trojaned compiler and login program.
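    To make the shape of Thompson's construction concrete, here is a deliberately toy sketch in C (invented for this discussion; the names, patterns, and backdoor password are all hypothetical, and this is not Thompson's actual code). The ``compiler'' pass copies source through unchanged except in two cases: when it sees the login program's password comparison it emits an extra test for a hard-wired password, and when it sees itself being recompiled it re-inserts its own recognition logic, which is why recompiling from clean sources does not remove the Trojan.

      #include <stdio.h>
      #include <string.h>

      /* Toy illustration only; all names and patterns are invented. */
      static void compile_line(const char *line, FILE *out)
      {
          /* Case 1: the login program's password check is being compiled;
             emit a backdoor test just before the legitimate comparison.   */
          if (strstr(line, "return strcmp(pw,") != NULL) {
              fputs("    if (strcmp(pw, \"letmein\") == 0) return 1; /* planted */\n", out);
          }
          /* Case 2: the compiler itself is being recompiled; re-insert this
             recognition logic (Thompson used a self-reproducing version).  */
          if (strstr(line, "static void compile_line") != NULL) {
              fputs("/* Trojan recognition logic re-inserted here */\n", out);
          }
          fputs(line, out);   /* emit the original line unchanged */
      }

      int main(void)
      {
          /* Feed a fake fragment of a login program through the pass. */
          compile_line("int check_password(const char *pw) {\n", stdout);
          compile_line("    return strcmp(pw, stored) == 0;\n", stdout);
          compile_line("}\n", stdout);
          return 0;
      }

    Running this fragment shows the backdoor test silently appearing in the ``compiled'' output even though the login source itself is clean, which is the essence of the attack.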

    Twenty years later we find efforts such as the Trusted Computing Group (the retooled Trusted Computing Platform Alliance, a 190-company industry work group), Intel's LaGrande, and Microsoft's NGSCB (Next Generation Secure Computing Base, previously known as Palladium) aiming to create secure systems from the ground up [2]. ``Trusted Computing'' platforms include specialized hardware or a processor that can monitor a system's boot process ensuring that the computer is based on appropriately certified hardware and software. After verifying the machine's hardware state and firmware, the platform can check that the operating system is certified, then load it and transfer control to it. The operating system (for example Microsoft's NGSCB) can similarly verify that only secure untampered applications are loaded and executed---no more doctored C compilers or unauthorized descramblers. Thus, a TC platform can be used to rigorously enforce third-party-mandated security policies such as those needed for digital rights management (DRM) and mandatory access control [3].
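    The chain of checks just described boils down to a simple computation: each stage measures (hashes) the next stage and folds that measurement into a running register before handing over control, so that a later verifier who knows the expected final value can detect a substitution anywhere in the chain. The following minimal sketch (a hypothetical illustration using OpenSSL's SHA-256, not any vendor's actual design) shows only that arithmetic; real platforms keep the register in tamper-resistant hardware and compare it against signed reference values.

      #include <stdio.h>
      #include <string.h>
      #include <openssl/sha.h>

      /* Measurement register, all zeros at "power-on". */
      static unsigned char reg[SHA256_DIGEST_LENGTH];

      /* Extend the register with one boot stage: reg = SHA256(reg || SHA256(stage)). */
      static void extend(const unsigned char *stage, size_t len)
      {
          unsigned char stage_hash[SHA256_DIGEST_LENGTH];
          unsigned char concat[2 * SHA256_DIGEST_LENGTH];

          SHA256(stage, len, stage_hash);              /* measure the next stage */
          memcpy(concat, reg, sizeof reg);             /* old register value     */
          memcpy(concat + sizeof reg, stage_hash, sizeof stage_hash);
          SHA256(concat, sizeof concat, reg);          /* new register value     */
      }

      int main(void)
      {
          const char *firmware = "firmware image bytes";            /* placeholders */
          const char *kernel   = "operating-system image bytes";
          const char *app      = "application image bytes";

          extend((const unsigned char *)firmware, strlen(firmware));
          extend((const unsigned char *)kernel,   strlen(kernel));
          extend((const unsigned char *)app,      strlen(app));

          /* Changing any stage changes the final value, so tampering anywhere
             in the chain is detectable by a verifier that knows this value.   */
          for (size_t i = 0; i < sizeof reg; i++)
              printf("%02x", reg[i]);
          printf("\n");
          return 0;
      }

    (Compile with -lcrypto. The one-way hash is what makes it infeasible to substitute a different stage and still produce the expected register value.)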

    Given our nearly unbroken track record of failed security technologies, we should view claims regarding a system's trustworthiness with skepticism. Recently a group managed to run Linux on a Microsoft XBox---without any hardware modifications [4]. The XBox, in common with many other game consoles, mobile phones, and even printer cartridges, can be thought of as an instance of a special-purpose TC platform. Although based on commodity hardware, the XBox is designed in a way that allows only certified applications (games) to run, thus protecting the licensing revenue stream of its vendor. Earlier attempts to run unauthorized software (such as the Linux kernel) on it required hardware modifications, a prospect that will not be realistic once TC features are part of the CPU (as might be the case with Intel's LaGrande design). The recent attack modifies the saved data of a particular game in a way that renders the trusted game into an untrusted agent that can then be used to boot Linux.

    The two attacks, set apart by twenty years, share an interesting parallel structure. Thompson showed us that one can not trust an application's security policy by examining its source code if the platform's compiler (and presumably also its execution environment) were not trusted. The recent XBox attack demonstrated that one can not trust a platform's security policy if the applications running on it can not be trusted. The moral of the XBox attack is that implementing on a TC platform a robust DRM, or mandatory access control, or even a more sinister security policy involving outright censorship will not be easy. It is not enough to certify the hardware and have a secure operating system; even a single carelessly written but certified application can be enough to undermine a system's security policy. As an example, a media player could be tricked into saving encrypted content in an unprotected format by exploiting a buffer overflow in its (unrelated) GUI customization (so-called skin) code. Capability machines built in the 1970s used strong typing and a finer granularity object classification and access control schema that would prevent such an attack. However, as Needham and Wilkes concluded from their work on the CAP system, the operating system was too complex and therefore hard to trust and maintain [5]. Those of us who distrust the centralized control over our data and programs that TC platforms and operating systems may enforce can rest assured that the war for total control of computing devices can not be won.

    1. Ken L. Thompson. Reflections on trusting trust. Communications of the ACM, 27(8):761-763, August 1984.

    2. Steven J. Vaughan-Nichols. How trustworthy is trusted computing? Computer 36(3):18-20, March 2003.

    3. Ross Anderson. Cryptography and Competition Policy---Issues with `Trusted Computing'. Workshop on Economics and Information Security, May 29-30, University of Maryland. Online http://www.cl.cam.ac.uk/ftp/users/rja14/tcpa.pdf. Current April 2003.

    4. Rob Malda. Linux Running on Xbox Without Modchip! Online http://slashdot.org/article.pl?sid=03/03/30/1337234. Current April 2003.

    5. Maurice V. Wilkes and Roger M. Needham. The Cambridge CAP Computer and its Operating System. Elsevier, London 1978.

    Diomidis Spinellis is an Assistant Professor in the Department of Management Science and Technology at the Athens University of Economics and Business and author of the book Code Reading (Addison-Wesley, 2003).

    ========================================================

    Inside Risks 155, CACM 46, 5, May 2003

    E-Epistemology and Misinformation

    Peter G. Neumann

    The problems of on-line misinformation seem to be worsening, because of the growth of the Internet and our ever increasing dependence on on-line systems. Information technology is a double-edged sword --- perhaps even more so than many other technologies. In the hands of enlightened individuals, institutions, and governments, its use can be enormously beneficial. In other hands, it can be detrimental. Unfortunately, the dichotomy is often in the eye of the beholder, perhaps depending on one's objectives (e.g., personal financial gains, corporate profits, global economic well-being, privacy, environmental concerns, etc.).

    Given a collection of on-line information, many people behave as if it is inherently authentic and accurate. This myth applies not only to Web sites, but also to many types of special-purpose databases, such as those found in law enforcement, motor vehicle departments, medicine, insurance, social security, credit information, and homeland security. We have seen many cases in which misinformation (e.g., false flight data, erroneous medical records, undeleted acquittals, or tampered files) has resulted in very serious consequences.

    Although an individual can occasionally observe that personal information about one's self is incorrect, more typically such erroneous information is hidden from the individual in question, possibly in multiple but different inaccurate versions. Overall, it is usually impossible for one to ensure that all such instances are correct. Furthermore, it is much more difficult to determine whether or not on-line information about anything else is authoritative. Worse yet, the volume of questionable information is growing at an extraordinary rate, and attempts to update substantive misinformation often have little effect -- especially with the persistence of incorrect cached versions.

    We rely increasingly on the Internet for many purposes, including education and enlightenment, irrespective of whether the sources are accurate. Oft-repeated overly simplistic sound-bite mantras seem to be popular. Furthermore, some people seem eager to waste time and energy that could be better spent elsewhere -- or to drop out. There is a tendency for entrenched positions to remain fixed. Are we losing our ability to listen openly to other views and engage in constructive thought? Another problem involves the inaccessibility of vital information. We seem to have evolved into a mentality of ``If it is not on the Internet, it does not exist.'' Even though there are many more data bytes available today than ever before, search engines typically find fewer than 5% of the Web pages, almost none of the database-driven dynamic Web pages, and very little of what is in most public libraries. Copyright restrictions and proprietary claims further limit what is available. For example, the ACM digital library is accessible only to those ACM members who pay to subscribe. Furthermore, overzealous filtering blocks many authoritative sources of information. Are our educations and information gathering suffering from a lowest-common-denominator process?

    The propagation of misinformation has long been a problem in conventional print and broadcast media, but represents another problem that is exacerbated by the speed and bandwidth of the Internet. In general, widely held beliefs in supposedly valid information tend to take on lives of their own as urban myths; they tend to be trusted far beyond what is reasonable, even in the presence of well-based demonstrations of their invalidity.

    In the face of such rampant misinformation, the truth can be difficult to accept, partly because it can be so difficult to ascertain, partly because it can seem so starkly inconsistent with popular misinformation, and partly because people want to believe in simple answers. Thus, we are revisiting classical problems that might now be considered as E-Epistemology, involving the nature and fundamentals of on-line knowledge -- especially with reference to its limits and validity. However, there are some possible remedies, such as epistemic educational processes that teach us how to evaluate information objectively. For Web sites, this might entail examining who are the sponsors, what affiliations are implied, where the information comes from, whether multiple seemingly reinforcing items all stem from the same incorrect source, whether purported Web site security and privacy claims are actually justified, and so on.

    Peter Neumann moderates the ACM Risks Forum (risks.org). His Web site contains an archival index to many relevant cases http://www.csl.sri.com/neumann/illustrative.html. Many thanks to David Parnas and Rob Kling for their constructive feedback on this column.

    ========================================================

    Inside Risks 154, CACM 46, 4, April 2003

    On Sapphire and Type-Safe Languages

    Andrew Wright

    Beginning Saturday 25 January 2003 around 12:30am EST, a distributed denial-of-service attack spread rapidly throughout the global Internet. Within 10 minutes, most of the vulnerable hosts on the Internet were infected. By morning, Bank of America customers could not withdraw money from 13,000 ATMs. Continental Airlines' website was off-line, forcing manual check-in. Normally heavy Internet trading on the South Korean stock market vanished. Many other websites and Internet services were also rendered inaccessible by the Sapphire (or Slammer) worm responsible for the attack.

    Sapphire is a 376-byte worm that infects Microsoft SQL Server 2000 hosts via the SQL Resolution Service running on UDP port 1434. It attempts to spread itself rapidly to other hosts. The worm does no damage to the infected machine: it does not modify disk files, alter or inspect database contents, or interrupt database execution. It merely probes for other SQL Servers to infect, by generating random IP addresses and sending UDP packets to port 1434 on those addresses. Since most of these IP addresses are not local, the resulting flood of packets saturates the host's connection to the Internet.

    That such a small worm could so effectively disrupt so many servers so rapidly was somewhat surprising. The Code Red, ILoveYou, and Nimda worms were all much bigger than the single Sapphire packet and took much longer to propagate. That such a simple attack was possible was no surprise at all. Sapphire exploits a buffer overflow vulnerability in SQL Server 2000 for which CERT issued a security advisory and Microsoft issued a patch six months earlier. When the Resolution Service receives a packet of type 04, it uses data from the packet to build a registry key in a fixed-size buffer on the stack. The unpatched code performs no length checks on the registry key it constructs. If the received packet is long enough, the constructed registry key overflows the stack-allocated buffer and overwrites the current function's return address, which follows the buffer in memory. The Sapphire worm consists of a single overly long packet that causes this return address to be overwritten with the address in the buffer where the worm's code resides. The worm code begins executing when the function returns.
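
    To make the mechanism concrete, here is a minimal, hypothetical C fragment with the same shape as the flaw described above; it is illustrative only and is not the actual SQL Server code. The function name, buffer size, and packet layout are assumptions.

      /* Hypothetical sketch of the vulnerable pattern: a fixed-size stack
         buffer filled from attacker-controlled packet data with no length
         check.  Illustrative only -- not the actual SQL Server code. */
      #include <stdio.h>
      #include <string.h>

      static void handle_resolution_packet(const char *packet, size_t packet_len)
      {
          char regkey[128];                /* fixed-size buffer on the stack */
          /* No check that packet_len <= sizeof regkey: an overlong packet
             writes past the buffer and overwrites the saved return address
             that follows it in the stack frame, so control transfers into
             the attacker's packet data when the function returns. */
          memcpy(regkey, packet, packet_len);
          printf("built key of %zu bytes\n", packet_len);
      }

      int main(void)
      {
          char benign[16] = "\x04hostname";    /* short packet: no harm done */
          handle_resolution_packet(benign, sizeof benign);
          return 0;
      }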

    Buffer overflow bugs of this nature are a common source of security vulnerabilities in programs written in languages like C and C++. They also arise frequently in ordinary program development in such languages, where they cause memory corruption that leads to erratic program behavior, application crashes, and machine crashes. They can be difficult to debug because the resulting program behavior usually cannot be explained abstractly in terms of functions, variables, expressions, and statements, but must be understood at the machine level in terms of addresses and bytes. In turning a buffer overflow bug into a security breach, an adversary exploits this abstraction gap to corrupt memory in a carefully calculated way.

    Some modern programming languages (e.g., Ada, C#, Common Lisp, Eiffel, Java, Modula-3, Scheme, and Standard ML) prevent this failure of abstraction. Such a language is said to be ``type-safe'' because its implementation ensures, via some combination of compile- and run-time checks, that the value a variable takes on, or that is passed to a function, always matches the language's notion of the variable's or parameter's type. When a type violation arising from a programming error is about to occur, such as an access to the 257th element of a 256-element buffer, the language implementation either terminates the program or raises a language-level exception.
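
    As a rough illustration of the kind of run-time check such an implementation inserts automatically, the following C sketch performs the bounds check by hand before the out-of-range access described above; the buffer size and index are chosen to match the 256/257 example, and the code is only an approximation of what a type-safe runtime does for every access.

      /* Sketch: a hand-written bounds check in C, approximating what a
         type-safe language's runtime does automatically before every
         array access. */
      #include <stdio.h>

      int main(void)
      {
          int buffer[256];
          size_t index = 256;                       /* the "257th element" */

          if (index >= sizeof buffer / sizeof buffer[0]) {
              /* A type-safe implementation would raise an exception or
                 terminate the program here; in C the programmer must
                 remember to refuse the access explicitly. */
              fprintf(stderr, "index %zu out of range\n", index);
              return 1;
          }
          buffer[index] = 0;                        /* never reached here */
          return 0;
      }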

    Type safety makes program development and debugging easier by making program behavior more understandable. More importantly in today's networked world, type safety prevents an adversary from turning a type violation into a security breach. While an adversary might be able to provide inputs that trigger a run-time check, memory corruption cannot occur. There is no way the adversary can cause a buffer to overflow and be reinterpreted as a sequence of machine instructions. (For type safety to prevent security breaches, the capability that some compilers provide to turn off run-time checks must not be used.)

    Type safety is not a panacea for security. Other kinds of bugs besides type violations can lead to security problems. Even the termination of an application due to a type violation can result in denial of service. But type-safe languages make it much more difficult for an adversary to turn a type violation into a more serious security breach. Type-safe languages provide an important line of defense in developing applications safe for today's networked world.

    Andrew Wright (akwright@acm.org) is research staff at a large networking company.

    ========================================================

    Inside Risks 153, CACM 46, 3, Mar 2003

    Risks of Total Surveillance

    Barbara Simons and Eugene H. Spafford

    The U.S. Public Policy committee of ACM (USACM) is concerned that the proposed Total Information Awareness (TIA) Program, sponsored by the Defense Advanced Research Projects Agency, will fail to achieve its stated goal of ``countering terrorism through prevention''. Further, we believe that the vast amount of information and misinformation collected by any system resulting from this program is likely to be misused to the detriment of many innocent American citizens.

    Because of serious security, privacy, and personal risks associated with the development of any vast database surveillance system, we recommend a rigorous, independent review of TIA. Such a review should include an examination of the technical feasibility and practical reality of the entire program.

    Security Risks. The state of the art in computer system design is such that any systems resulting from TIA are unlikely to be able to preserve integrity and keep data out of unauthorized hands, whether they are operated by governmental or commercial organizations. Frequent reports of successful hacker break-ins and insider misuse of supposedly secure systems and the pervasive existence of software flaws constitute evidence that we are unable to make these systems adequately secure, and suggest that the likelihood of a trustworthy database system emerging from this effort is vanishingly small.

    The databases proposed by TIA would also increase the risk of identity theft by providing a wealth of personal information to anyone accessing the databases, including terrorists masquerading as others. Recent compromises involving about 500,000 military-relevant medical files and 30,000 credit histories are harbingers of what may be in store.

    Privacy Risks. The need for oversight and control is especially great when aggregation and analysis of personal information is done without the knowledge or consent of the people being monitored. It is misleading to suggest that ``privacy enhancing technologies'' within TIA can somehow protect people's privacy, because by definition surveillance compromises privacy. Furthermore, the secrecy inherent in TIA implies that citizens could not verify that information about them is accurate and shielded from misuse. Worse yet would be the resulting lack of protection against harassment or blackmail by individuals who have inappropriately obtained access to an individual's information, or by government agencies that misuse their authority.

    Personal Risks. TIA would combine automated data-mining with statistical analysis, thereby resulting in some number of false positives -- risking incorrectly naming someone as a potential terrorist. As the entire population would be subjected to TIA surveillance, even a very small percentage of false positives would result in a large number of law-abiding Americans being mistakenly labeled.
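
    A back-of-the-envelope calculation shows why even a tiny false-positive rate is troubling when an entire population is screened. The numbers in the following C sketch are hypothetical and are not taken from the column or from any TIA document; they serve only to illustrate the base-rate effect.

      /* Hypothetical base-rate arithmetic: all numbers are made up for
         illustration, not taken from any TIA document. */
      #include <stdio.h>

      int main(void)
      {
          double population  = 280e6;   /* assumed population under surveillance  */
          double terrorists  = 1000;    /* assumed number of actual terrorists    */
          double sensitivity = 0.99;    /* assumed fraction of terrorists flagged */
          double false_rate  = 0.001;   /* assumed 0.1% of innocents flagged      */

          double true_hits  = terrorists * sensitivity;
          double false_hits = (population - terrorists) * false_rate;

          printf("true positives:  %.0f\n", true_hits);    /* about 990     */
          printf("false positives: %.0f\n", false_hits);   /* about 280,000 */
          printf("share of flagged people who are innocent: %.3f\n",
                 false_hits / (false_hits + true_hits));   /* about 0.996   */
          return 0;
      }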

    The existence of TIA would affect the behavior of both real terrorists and law-abiding individuals. Real terrorists are likely to go to great lengths to make certain that their behavior is statistically normal; ordinary people are likely to avoid perfectly lawful behavior out of fear of being labeled un-American.

    To summarize, we appreciate that the stated goal of TIA is to fund research on new technologies and algorithms that could be used in a surveillance system in the service of eliminating terrorist acts. However, we are extremely concerned that the program has been initiated (and some projects already funded) apparently without independent oversight and without sufficient thought being given to real constraints -- technical, legal, economic, and ethical -- on project scope, development, field testing, deployment, and use. Consequently, the deployment of TIA, as currently conceived, would create new risks while providing only the appearance of increasing homeland security.

    There are important steps that the government can take now to increase our security without creating a massive surveillance program that has the likelihood of doing more harm than good. Federal, state, and local governments already have information systems in place that could play major roles in highly focused terrorist spotting. However, many of these information systems are only partly functional and/or ineffectively used. An example is the computer system run by the federal Bureau of Alcohol, Tobacco and Firearms which, according to The New York Times, was unable to link bullets fired in three sniper shootings in Maryland and Georgia in September 2002. Serious improvements in the use of current operational systems could significantly enhance homeland security without creating the major risks noted here.

    Barbara Simons and Eugene H. Spafford are Co-Chairs of USACM. This article is drawn from a USACM letter to Congress: www.acm.org/usacm/Letters/tia_final.html. The ACM Public Policy Office can be reached at 1-202-478-6312.

    ========================================================

    Inside Risks 152, CACM 46, 2, Feb 2003

    Gambling on System Accountability

    Peter G. Neumann

    Because of rampant security vulnerabilities, ever-present risks of misuse by insiders, and possibilities for penetrations by outsiders, there are many needs for comprehensive computer system accountability -- that is, the ability to know definitively what is transpiring, particularly during and after accidents and intentional misuse. Unfortunately, security typically focuses overly on confidentiality, with integrity, availability, strong authentication, authorization, correctness, and accountability dragging way behind. In this column, we consider the potential importance of the design, implementation, and operation of policies and mechanisms for accountability that themselves resist being compromised -- especially by knowledgeable trusted insiders. We illustrate this by considering the situation surrounding the recent Pick-Six betting scam involving the Breeders' Cup horse race.

    A $3-million Pick-Six payoff over six races ending with the high-stakes Breeders' Cup race seemed rather suspicious, because the Pick-Six winner also picked many consolation bets of five winners. Subsequent investigation showed that an unusual combination bet had been placed by telephone from an off-track betting (OTB) site in Catskill, New York. The results of the first four races had been chosen exactly (including two long-shots of 13-to-1 and 26-to-1), and the bets on the remaining two races covered every possible combination.

    Autotote's software is used by most U.S. off-track horse-race betting sites. Because the Autotote OTB system transmits bets to the central system only after the completion of the fourth race in the Pick-Six cycle, it was concluded that the ``winner'' had placed a combination bet of w,x,y,z,*,* (for an arbitrary choice of horses w, x, y, and z, with a wild-card [*] of multiple bets over all possible horses in the last two races); then, after the results of the first four races were known (let's define them as A, B, C, D), but before the data transfer occurred, someone with access to the OTB system changed w,x,y,z to A,B,C,D. This resulted not only in the Pick-Six winner, but also in multiple consolation winners.

    Accountability? Unfortunately, there is no bet-specific audit trail on telephoned OTB bets, although a spokesman for Autotote had insisted that it was ``absolutely impossible'' to hack into the system! So, you might ask, had anything like this happened previously? Indeed, there had been a previous Pick-Six payoff from the same OTB site (in Catskill, NY), and a similar earlier case in a Pick-Four. Furthermore, all of the participants in these instances were fraternity buddies from their days at Drexel University, and one of them was already under suspicion. It was also determined that they had forged tickets and collected on yet-unclaimed winning bets. The ``someone'' noted above was a former Autotote employee, who has pleaded guilty to one count of conspiracy to commit wire and computer fraud and one count of money laundering. The other two participants have also admitted their guilt.

    In betting systems and financial systems generally, an inherent need exists for rigorous accountability. Many other applications previously discussed in this space also illustrate the criticality of integrity and accountability. For example, fully electronic voting systems are an example of ``self-auditing'' products that, due to their anonymity requirements, require vigilant oversight and independent accountability rather than the almost total lack of assurance that they provide today. (``Trust us,'' the vendors say.) Also, mounting privacy concerns (including the proposed Total Information Awareness effort) are another huge problem area. Unfortunately, although access controls and database accountability might help sometimes to identify the perpetrators of violations, many privacy invasions involve untraceable human actions outside of computer systems.

    Several lessons are evident. In many critical applications, risks of misuse by people with insider knowledge are widely ignored; so are the risks of outsiders who can easily become insiders, because of the lack of adequate internal security. System designs that seriously ignore accountability are particularly at risk, because of the difficulties of detecting and tracing misuse. Where they exist, audit trails must be strongly tamper resistant, or else they are themselves subject to manipulation. Physical traceability, paper trails, and truly independent, unbiased, objective, and honest security audits by experts can also be helpful. Proprietary closed-source software systems are inherently suspect without meaningful accountability. In short, noncompromisible accountability can often be invaluable -- although it presents serious opportunities for invasions of privacy that must also be addressed.
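
    One common mechanism for the tamper resistance mentioned above is a hash-chained audit log, in which each entry's hash covers the previous entry's hash, so that rewriting an earlier record (for example, altering a bet after the race results are known) breaks the chain. The following C sketch is a minimal illustration under assumed record contents; a real system would use a cryptographic hash and protected, replicated storage rather than the toy FNV-1a hash used here only to keep the example self-contained.

      /* Minimal sketch of a tamper-evident (hash-chained) audit trail.
         FNV-1a is used only so the example needs no external library;
         it is NOT cryptographically strong. */
      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      static uint64_t fnv1a(uint64_t h, const char *data)
      {
          for (size_t i = 0; data[i] != '\0'; i++) {
              h ^= (unsigned char)data[i];
              h *= 1099511628211ULL;                 /* FNV prime */
          }
          return h;
      }

      int main(void)
      {
          /* Hypothetical log entries, loosely modeled on the scenario above. */
          const char *entries[] = { "bet: w,x,y,z,*,*",
                                    "race 4 results: A,B,C,D",
                                    "transfer to central system" };
          uint64_t chain = 14695981039346656037ULL;  /* FNV offset basis */

          for (size_t i = 0; i < 3; i++) {
              chain = fnv1a(chain, entries[i]);
              printf("entry %zu  running hash %016llx\n",
                     i, (unsigned long long)chain);
          }
          /* Changing entries[0] after the fact (say, to "bet: A,B,C,D,*,*")
             changes the final hash, which an auditor holding the original
             value -- ideally stored off-site -- can detect. */
          return 0;
      }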

    See Neumann's Web site for background (neumann@csl.sri.com). Also see Rebecca T. Mercuri's article ``On Auditing Audit Trails'' in the January 2003 issue of CACM, pp. 17--20, and her Web site at http://www.notablesoftware.com.

    ========================================================

    Inside Risks 151, CACM 46, 1, Jan 2003

    The Mindset of Dependability

    Michael Lesk

    Computer software is legendary for the time and cost overruns incurred in producing it, and for its fragility after it is written. The U.S. government has failed in attempts to procure dependable software for the IRS and the FAA, and the UK government was recently accused of wasting more than a billion pounds on failed or overdue information technology contracts. Perhaps only 25% of major software projects work out well. Home computer users are also accustomed to crashes. Why are computer systems so unreliable and difficult?

    By contrast, the Japanese Shinkansen trains are a remarkable testimony to reliability and safety. Since they started in 1964, carrying millions of people per year, no passenger has been killed as a result of a collision, derailment, or other railway accident. Not only are the Shinkansen safe, they are also reliable. The average Shinkansen train arrives within 24 seconds of schedule. What can we learn from this?

    At one level, there are details of railway construction. The Shinkansen track is laid with heavier rail and closer-spaced cross-ties than a new line in Australia that will carry trains of twice the weight.

    At another level, safety benefits from Japanese culture. Any visitor can tell you that Japan is an extremely clean country; the Shinkansen tracks and stations are litter-free. The worst fire ever on the London Underground (King's Cross, 1987) started in debris under an escalator; cleanliness is not just cosmetic.

    But historically, Japan was not renowned for railway safety. As recently as the early 1960s, just before the Shinkansen opened, two accidents near Tokyo each killed more than 100 people. And yet safety has now become routine. The culture of safety and dependability has been learned there; it could be learned elsewhere.

    But CACM is neither a railway engineering journal nor a journal of cultural history. What should we learn about computers?

    The Japanese did not do a cost-benefit analysis on safety. Nobody sat in the Shinkansen design office and thought about how to trade off cutting construction costs against the number of people that would be killed. In the computer context, we often distribute the costs of unreliable software over a great many users, who do not easily aggregate their frustrations into economic impact. NIST recently estimated that software bugs cost the US economy $60 billion per year. Lower testing costs, more features, and shorter time to market are easier to quantify than the benefits of various elements of dependability such as safety, security, and reliability -- and may be viewed as more important by the development managers. If we care about having dependable systems, then we have to be sure that safety, security, and reliability are primary requirements whenever they are needed. These are not things that can be patched in like an extra button in an interface. Today, vendors act as if people want more features and low prices first, and dependability later.

    How can we achieve a culture of dependability? When buying a ticket to a symphony orchestra, people do not anticipate some particular percentage of wrong notes. Nobody thinks that some level of spelling errors in CACM is suitable in exchange for faster editing or student discounts. Yet we routinely accept basic undependability in computer systems.

    We have understood for a generation that having a small, terse, and limited system kernel greatly improves reliability. Yet we still see manufacturers resorting to special-purpose bypasses to make their particular program run faster or get around some blockage, with kernels swelling to tens of millions of lines of code. We still see complexity winning over simplicity.

    How do we persuade manufacturers that security must be a priority? First, we have to believe it as users. People who routinely accept downloads from almost any site and use mailers that enable executable code attachments to send 5-word ASCII strings wouldn't seem to care much about security or privacy. We need a culture change by purchasers as well as by developers. Perhaps the increased threat of cyberterrorism will reverse the trend of even security-conscious agencies to buy commercial off-the-shelf software without recognizing its risks; I hope it does so without any actual horror stories. Perhaps the recognition that simpler and more dependable systems can result in lower system administration costs, faster and fewer reboots, and lower training costs will help change the customer culture. If we can persuade manufacturers that more dependable software will pay off, and that adding more features won't enhance dependability, we might reverse a decades-long trend to greater vulnerability and lesser reliability.

    Michael Lesk is known for some Unix utilities (Lex, tbl and uucp) and for research in information retrieval. He is the author of "Practical Digital Libraries" (Morgan Kaufmann, 1997) and currently works for the Internet Archive.

    ========================================================

    Inside Risks 150, CACM 45, 12, Dec 2002

    Why Security Standards Sometimes Fail

    Avishai Wool

    Security experts have long been saying that secure systems, and especially security standards, need to be designed through an open process, allowing review by anyone. Unfortunately, even openly designed standards sometimes result in flawed cryptographic systems. A recent example is the IEEE 802.11 wireless LAN standard, in which several serious cryptographic failures were found (see [1,2,3]), after millions of flawed hardware devices were sold.

    Finding a cryptographic design flaw in an approved standard is bad news -- especially after systems using it are in wide-spread use. Such a flaw is typically very costly to fix. And, ironically, once a flawed system is widely deployed, future fixed versions of the system will almost certainly have a backward-compatibility mode, making them vulnerable as well. Cryptanalyzing the standard before it is ratified is clearly better for society and better for vendors. But is it better for the cryptanalyst? Unfortunately, we shall see that the answer is sometimes ``No''.

    Cryptanalysts are usually scientists, who make their own choices about which problems to work on. Furthermore, scientific success is measured by publications. Publishing a high-visibility scientific paper in a respected scholarly journal or conference proceedings can help establish academic fame, fortune, and tenure. So, consider a cryptanalyst, Carol, who is looking for a project to work on. Would she want to get involved in a standardization effort?

    Working on a standard has its own set of challenges. A standards body involves many parties with conflicting agendas, many of them powerful corporations. Furthermore, a standard is not measured by excellence or novelty. It should be a working design that is an acceptable compromise between the interests of all the parties involved. In short, a standards body is not an environment that encourages scientific discourse. Finally, even supposedly open standards bodies sometimes have onerous requirements, which may discourage scientists from participating.

    Suppose that despite the challenges, Carol does get involved, and finds a cryptographic flaw in the standard's draft. Would this advance her scientific career? Unfortunately, not by much. Firstly, it may be difficult for her to get the standards body to take action, because doing so might conflict with the interests of other parties. Secondly, Carol can expect very little credit for her contribution. A standard typically has no authors, and only the standard's editors are personally recognized. If Carol tries to publish a paper describing her discovery, it will surely be rejected by any respectable scientific venue: Every standard goes through drafts, many of them faulty; so, why should a specific flaw in an early draft be interesting? Finally, if the standard ends up not being used, then her work (indeed, the work of the whole standards body) would go to waste.

    Now consider what would happen if Carol finds the same flaw after the standard has been ratified, and after technology based on it is in widespread use. As an individual, she has much more to gain. Her work has obvious technical impact, because the standard she chose to analyze is already in use. She can certainly author a paper about her findings: Publishing it in a top-notch scientific venue would be relatively easy, because of the public interest. Furthermore, security vulnerabilities are considered newsworthy outside of scientific circles: Reporting services for such discoveries (such as BugTraq and CERT) have very wide readership, and stories are occasionally even reported by the general media. Such publicity is an effective way to cause Fortune-500 corporations to fix their products. All this excitement can make Carol into a star in her field.

    We see that for an individual scientist, cryptanalyzing an established standard is, potentially, much more rewarding than working to ensure that the standard is secure in the first place. Luckily for society, there are reasons why many security standards do better than IEEE 802.11. Standardization is altruistic volunteer work for many participants, and this includes cryptanalysts. Also, cryptanalysts working in corporate research labs may be well motivated to contribute to a standard. But the basic conflict between the public good and the individual scientist's interests is a cause for concern.

    [1] W.A. Arbaugh, N. Shankar, and Y.C.J. Wan, Your 802.11 wireless network has no clothes, IEEE Conference on Wireless LAN's and Home Networks, 2001.

    [2] N. Borisov, I. Goldberg, and D. Wagner, Intercepting mobile communications: The insecurity of 802.11, 7th ACM Conference on Mobile Computing and Networking, 2001.

    [3] S. Fluhrer, I. Mantin, and A. Shamir, Weaknesses in the key scheduling algorithm of RC4, 8th Workshop on Selected Areas in Cryptography, 2001.

    Avishai Wool, yash@eng.tau.ac.il, is a Senior Lecturer in the Dept. of Electrical Engineering Systems, Tel Aviv University, Ramat Aviv 69978, ISRAEL http://www.eng.tau.ac.il/~yash

    ========================================================

    Inside Risks 149, CACM 45, 11, Nov 2002

    Florida 2002: Sluggish Systems, Vanishing Votes

    Rebecca Mercuri

    Following the 2000 Presidential election debacle in Florida, government officials promised sweeping reforms that would prevent such chaos from reoccurring. Indeed, the Florida election code was extensively revised, punchcard systems were outlawed, and over $125 million was spent statewide on new voting equipment and training of voters and election administrators. What could possibly go wrong? Apparently enough calamity to cause Governor Jeb Bush to declare a state of emergency, extending the voting session by two hours for the September 10, 2002 primary election. Yet events earlier in the year should have provided sufficient forewarning of difficulties.

    Broward County purchased new touchscreen voting machines, manufactured by Election Systems & Software (ES&S), but back in February the Associated Press reported that "more than two-thirds of the first shipment had defects and will have to be repaired." The ES&S devices in Broward and Miami-Dade were those at polling places in September that failed to open on time, in part because workers had been told that the machines would take about two minutes to boot up. Instead, most took around 10 minutes, but those outfitted for the visually impaired took an astonishing 23 minutes. Although Broward Board of Elections Commissioner Miriam Oliphant and her pollworkers were later blamed by the Governor for many of the September primary woes, the fact remains that these sluggish voting systems were certified for use by the state's examiners as well as by testing agencies overseen by the National Association of State Election Directors.

    In March 2002, problems with Sequoia voting systems purchased by Palm Beach County surfaced in two local city council elections. In the city of Wellington, a run-off election involved only one race with only two candidates. The final vote tally was 1263 to 1259, but 78 ballots were not recorded by the touchscreen machines. Elections Supervisor Theresa LePore explained that people simply chose to come to the polls and not cast a vote for anyone, but this seems unlikely, and it is more probable that the machines failed to record votes that were cast.

    The other contested Palm Beach election was in Boca Raton, where former mayor Emil Danciu came in third with an 8% undervote. His suspicions regarding possible lost votes stemmed from low numbers reported in his home precinct, where he was expected to do well. During court proceedings, it was revealed that Sequoia had sold the systems under trade secret protection, making it a third-degree felony for Supervisor LePore if any details regarding the specification or internal functioning of the devices were revealed. Circuit Court Judge John Wessel granted Danciu a walk-inspection of the voting equipment, where it was discovered that the pre-election testing bypassed the ballot face, and the touchscreen was used only to cast one vote for the candidate listed first in every race. Because Danciu appeared third in his race, there is no test data that can reveal whether or not the machines would properly activate and record votes cast for him. (In the Wellington election, the losing candidate appeared second, so his position was also untested.) Further disconcerting information included the fact that the voting machines are reprogrammable at the firmware level via a portal on each device, and also that at the end of the election they are frozen in a mode where one cannot perform vote casting, so a functional post-test is precluded.
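
    The coverage gap described above suggests what an adequate pre-election logic-and-accuracy test would at least have to do: cast and verify at least one test vote for every candidate position in every race, not only for the first-listed candidate. The following C sketch is hypothetical (the race and position counts, and the simulated machine, are assumptions), but it shows the shape of such a test matrix.

      /* Hypothetical sketch of a logic-and-accuracy test that exercises
         every ballot position, not just the first-listed candidate. */
      #include <stdio.h>

      #define RACES     3    /* assumed number of races on the ballot */
      #define POSITIONS 4    /* assumed maximum candidates per race   */

      int main(void)
      {
          int expected[RACES][POSITIONS] = {{0}};
          int tallied [RACES][POSITIONS] = {{0}};
          int ok = 1;

          for (int race = 0; race < RACES; race++) {
              for (int pos = 0; pos < POSITIONS; pos++) {
                  expected[race][pos]++;
                  /* A real test would drive the touchscreen here; we simulate
                     a machine that records the test vote correctly. */
                  tallied[race][pos]++;
              }
          }
          for (int race = 0; race < RACES; race++)
              for (int pos = 0; pos < POSITIONS; pos++)
                  if (tallied[race][pos] != expected[race][pos])
                      ok = 0;

          printf("logic-and-accuracy test %s\n", ok ? "passed" : "FAILED");
          return 0;
      }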

    Difficulties in Florida's September 2002 primary were not limited to the touchscreen systems. In Union County, the optical scanning system had been erroneously programmed to print out only Republican party results, requiring a hand-count of some 2700 ballots. At least with the paper ballots, an independent tally was possible. Over in Miami-Dade, reported undervotes of as much as 48% in some precincts in the gubernatorial race caused Janet Reno to demand that a recount be performed. Here, however, election officials reconstructed some supposedly missing votes by extracting dubiously recorded data from the touchscreen machines!

    Florida's experience may be replicated as communities rush to adopt flawed voting products, inadvertently squandering billions of dollars in public funds. National standards for design, construction, and testing have lagged behind, while Voting Rights Act initiatives have stalled in Congress. Only a lengthy moratorium on new purchases of voting equipment, until these issues have truly been sorted out, can hope to restore sanity and confidence in democratic elections.

    Rebecca Mercuri (mercuri@acm.org), a professor of Computer Science at Bryn Mawr College, PA, is an expert on electronic voting systems. Her informative Web site on this subject can be found at: http://www.notablesoftware.com/evote.html

    ========================================================

    Inside Risks 148, CACM 45, 10, October 2002

    Secure Systems Conundrum

    Fred B. Schneider

    By definition, a secure system enforces some policy it is given. Such a policy might, for example, prevent your confidential files from being revealed or might notify the copyright holder every time you play an MP3 file. The former protects you as an individual; the latter enables new means of charging for electronically distributed intellectual property. Both might be seen as improvements over the status quo. Yet whether secure systems are in practice attractive really depends on two questions:

    What range of policies can the system enforce?
    Who chooses what policies the system enforces?

    Automated policy enforcement mechanisms are incapable of showing good taste, resolving ambiguity, or taking into account the broader social and political context in which a computer system exists. So formulating as a policy something that accurately reproduces our intents is likely to be impossible, and we must endure policies that conservatively disallow actions they shouldn't. An example well known to Inside Risks readers involves system policies that disallow copying CDs containing music or software even though such copying is permitted according to the ``fair use'' provisions of copyright law. In general, intent is difficult to formulate precisely as a policy that can be enforced with a secure system---witness what happens in writing laws, which too often forbid things society didn't intend or allow things it did intend to forbid.

    The question of ``Who chooses what policies are enforced?'' is tantamount to deciding who controls the computer system. On special-purpose computers (e.g., cellphones and set-top cable modems), the enforcement of policies imposed by others has not seemed offensive. Software on these devices is, for example, regularly updated and device usage monitored without user consent (or knowledge). But enforce a policy to restrict what happens on a desktop computing system, and that system might no longer be general purpose. No surprise, then, that the Trusted Computing Platform Alliance (TCPA) and other efforts concerned with hardware and operating system support for secure computing systems are controversial. The surprise is that technical details are only a small part of the picture.

    The world today is one in which computer users are either unwilling or unable to implement non-trivial security policies on their desktop computers. Do you set all those file protection bits and check digital certificates for expiration? Most often not, so the policies enforced by secure systems will likely come from elsewhere. We would hope that these policies are designed with our individual and collective best interests in mind, and we might wonder what forces will cause that to happen. The law and the market seem the likely candidates.

    The law arguably is not up to that task. Courts are having difficulty applying our current laws to cyberspace---witness the debate associated with interpreting copyright's ``fair use''. Moreover, digital rights management is but one class of policies our secure systems might be enforcing. New laws might be the answer, but then recent U.S. (and some EU) experiences do not bode well for the public good.

    Perhaps the market could provide the incentives. However, this presumes a user or owner is free to choose which policy is enforced on a given computer. It also presumes that the market is open to would-be policy providers. Neither is guaranteed, and there are good reasons why neither might hold. The producer of a secure system has an incentive to provide a policy that prevents other policies from being added and other producers' software from being run.

    Even if the computer owner were completely free to choose among policies, a digital content provider will likely require certain policies to be present on any computer accessing their content. The free choice then becomes one of choosing between desired content and desirable policy---not much of a choice.

    Insecure systems today allow users to circumvent copyright restrictions, license agreements, and the law. Sometimes this circumvention is done in ignorance; sometimes it is done in protest; but sometimes it is done because the policy being enforced is clumsy and forbids something it shouldn't. In short, circumventing policy enforcement serves as a much needed relief valve for clumsy policies.

    Without a doubt, deploying secure systems has risks. Individuals are unlikely to be better off with secure systems unless the way has been prepared with careful attention to who controls the policies these systems enforce and what values those policies reflect. And if the so-called secure systems have vulnerabilities---as software systems so often do---malevolent users will still be able to do things they shouldn't, whereas ordinary users will have lost their means to compensate for clumsy policies.

    Fred B. Schneider is a professor and director of the Information Assurance Institute at Cornell University.

    ========================================================

    Inside Risks 147, CACM 45, 9, September 2002

    Risks of Digital Rights Management

    Mark Stamp, September 2002

    Digital rights management (DRM) is an attempt to provide ``remote control'' over digital content. The required level of protection goes beyond secure delivery of the bits -- restrictions on the use of the content must be maintained after it has been delivered. The buzzword for this is ``persistent protection.''

    For example, a digital book can be delivered over the Internet using standard cryptographic techniques. But if the recipient can save the book in an unrestricted form, he can then freely redistribute a perfect copy to a large percentage of the population of earth. This reality has led publishers to forego the potentially lucrative sale of digital books and has had a similar chilling effect on the legitimate distribution of other types of digital content. Robust DRM would, among other things, enable copyright holders to take full advantage of the Internet without having to rely on the honor system. (Recall Stephen King's experiment with an on-line book.)

    Effective DRM would have other far-reaching implications. For example, armed with strong DRM, an individual could maintain tight control over his personal data, thus providing for a measure of online privacy and confidentiality that is currently lacking. Some companies even claim that their proprietary DRM system provides unbreakable persistent protection transitively and indefinitely, in some cases even with remote rights revocation.

    Most DRM companies emphasize their robust cryptography, often implying that this is enough to ensure security. One company even maintains it is the only one with export permissions for its 256-bit crypto (stronger than most of its competition). While cryptography is an essential part of DRM, it can do little to ensure the higher level of persistent protection required. At some point, cryptographic keys will be in the possession of the legitimate recipient, who also happens to be the most likely attacker of the persistent protection. Though it has its own set of risks, cryptography is the easy part of any comprehensive DRM solution.
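
    A toy example makes the point about key possession concrete. In the following C sketch (the XOR ``cipher'' and key are purely illustrative stand-ins for real cryptography), the recipient's machine must hold the key and produce the plaintext in order to display the content -- and once the plaintext exists on that machine, nothing in the cryptography prevents it from being saved and redistributed.

      /* Toy illustration: delivery encryption alone cannot provide
         persistent protection, because the legitimate recipient's machine
         must decrypt the content to use it.  XOR with a one-byte key is a
         stand-in for real cryptography, for illustration only. */
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          const unsigned char key = 0x5A;        /* key present on the client */
          char content[] = "Chapter 1 of a digital book";
          size_t n = strlen(content);

          for (size_t i = 0; i < n; i++) content[i] ^= key;  /* "protected" delivery */
          for (size_t i = 0; i < n; i++) content[i] ^= key;  /* client decrypts to display */

          /* At this point the recipient controls the plaintext bytes: */
          FILE *f = fopen("unrestricted_copy.txt", "w");
          if (f != NULL) {
              fwrite(content, 1, n, f);
              fclose(f);
          }
          printf("recovered content: %s\n", content);
          return 0;
      }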

    Then what technology can be brought to bear on the persistent protection part of the DRM equation? Unfortunately, there is little useful design information available on implemented systems, although there is ample senseless marketing hype. In the field of security, experience has taught that full disclosure is essential. For example, cryptographers do not trust a cryptosystem until it has been publicly vetted and subject to intense scrutiny by the cryptographic community. This reluctance to accept cryptographic algorithms at face value comes from the long list of ``secure'' algorithms that have been broken. In DRM there is, as yet, no such imperative to make the design of systems---even in a general form---available for public scrutiny. At the very least, this suggests that the level of security provided by such systems is suspect, since those making the security claims have a financial interest in boosting their perceived level of security.

    The track record of fielded DRM systems is also not reassuring. For example, Adobe eBooks was easily broken, although Adobe made no real efforts at protection. Although Microsoft's MS-DRM security went a little further, it too offered little challenge to a persistent attacker. Of course, the simple reverse engineering required to break such schemes inspired the Digital Millennium Copyright Act (DMCA) prosecution of Dmitry Sklyarov and his employer, ElcomSoft. (Proposed legislation could result in life imprisonment for similar acts.)

    The DRM market has been estimated at $3.5 billion by 2005. Not surprisingly, a large number of companies are vying for a slice of this enormous pie. Unfortunately, the technology behind current DRM systems is little more than glorified security by obscurity. But this awkward reality has not prevented companies from making strong claims about the security of their products -- claims that cannot be supported either by their known design features or by the real world performance of comparable systems.

    The future of DRM appears to lie in the direction of tamper-resistant hardware, which promises to be a far more effective solution. Ironically, such an approach threatens to move the ``remote controllability'' from users to third parties, carrying its own set of risks. Regardless, the current state of digital rights management technology falls far short of what is required to deliver on the promise of DRM. The risk today lies in not recognizing this reality.

    Mark Stamp (mstamp1@earthlink.net) spent 2 years designing a DRM system for MediaSnap Inc. He is now an independent consultant in Silicon Valley. His paper, Digital Rights Management: The technology behind the hype http://home.earthlink.net/~mstamp1/papers/DRMpaper.pdf, includes many references.

    ========================================================

    Inside Risks 146, CACM 45, 8, August 2002

    Risks in Features vs. Assurance

    Tolga Acar and John R. Michener, August 2002

    Essentially all commercial computer systems development and deployment have been driven by concerns for time-to-market, novel features, and cost -- with little if any concern for assurance, reliability, or the avoidance of system security vulnerabilities in networked environments. Retrofitted products for the new connected world usually expose new vulnerabilities, because the environment changes as a result. Security features of an existing product may not adequately address new risks, because new security policies take effect. New features introduce unforeseen interactions between various components, invalidating previous assurance.

    Software and systems are currently covered by contract law rather than by any more demanding liability law. The contracts are typically extremely inequitable, with purchasers assuming all liabilities, despite the practical impossibility of their assessing the security, reliability, or survivability characteristics of the products they are buying. Indeed, most software products come with an anti-warranty: the producer warrants nothing, and customers assume all liabilities.

    There is a serious lack of understanding among developers and development managers that security and survivability are different from features. Self-promoted and self-assigned ``security experts'' (often without a comprehensive understanding of security issues) often lead to security features that are promised, but poorly conceived and poorly understood. It has been to the industry's advantage to position ``security'' as a feature that is added to other systems and computing complexes, rather than primarily a characteristic of thoughtful architecture, careful design, and meticulous engineering, coding, testing, and operation. Security does not result from modules that are added after-the-fact; it must be engineered in from the beginning. The CERT/CC database contains numerous vulnerabilities such as buffer overflows and password sniffing that are often consequences of basic system architecture and design, some of which cannot easily be retrofitted. Products are shipped with many features, but assurance is at best paid only lip-service as part of the vendor's marketing campaign.

    The lack of commercial demand for security favors feature-laden software and frequent shipping schedules; in the commercial world, it is too often more important to ship a product with the promised features. Although this may seem to be a failure or ignorance on the industry's part, these features are often requested by customers who don't necessarily have an in-depth understanding of security, but have security vaguely in mind. If a vendor can't deliver in time, or declines to offer a feature for sound security reasons, the customer finds another vendor that can. The industry is not interested in research and development without payoffs. As long as the customer takes the risks, there is little incentive to improve assurance, and a great incentive to offer ``nifty'' features, even if those features increase the vulnerability to compromise. Evaluation of a product against a Common Criteria protection profile (for example) is not on most customers' lists of requirements.

    Feature-dominated production ignores security experts' warnings, which become the first sacrifice at crunch time in development organizations -- justified by the motto ``ship happens.'' Customers demanding functionality on short schedules make a similar sacrifice.

    Although they are gaining more interest, cryptography, computer security, and survivability are not widely taught. The coverage of security issues in most operating-system and software-engineering courses could be improved. It may not be possible to expand existing courses and squeeze more concepts into the same time frame. Instead, separate computer-security courses might be added, as some universities have already done. But, one way or the other, security and software engineering need to be thoroughly integrated into the curriculum.

    Inadequate attention to testing features and their myriad interactions generally relegates testing to a final screening step. Hardware designers have long implemented design-for-testability rule sets and supporting integral test hardware (which may occupy more than 5% of a chip). Testing engineers should be involved at the inception of development to make sure that issues relating to testability and reliability are properly addressed.

    The software and systems industry has been allowed to develop without substantial legal oversight, under the assumption that its customers were sophisticated and could manage their risk exposure appropriately. Unfortunately, even sophisticated customers cannot know their security exposure. Under such conditions, liability law may be held to override unjust contract disclaimers. If the industry will not clean up its act, it must expect the tort bar to do so.

    Dr. Acar is a senior software engineer at Novell, Inc. (tacar@novell.com). Dr. Michener is a consulting engineer at Enterprises, Inc. (jrmichener@ieee.org).

    ========================================================

    Inside Risks 145, CACM 45, 7, July 2002

    Risks: Beyond the Computer Industry

    Donald A. Norman

    As computer technologies increasingly invade everyday products, the RISKS of the traditional computer business must be revisited by each new industry, usually through failures. Issues include reliability of code, protection against component failure, security and privacy of data, safety, maintenance, and upkeep. There is one issue that affects all of these topics: ease of use. Poor usability leads to high support costs, high error rates, and increased injuries.

    Consider the automobile, which is certainly a popular target for new technology to assist driving, enhance entertainment, and facilitate business activities. Usually driving does not require full concentration, but situations that require full attention typically arise without warning. What might be a minor secondary task under normal driving conditions can suddenly become life-threatening.

    In modern cars the number of controls in front of the driver has proliferated to an unacceptable extent. BMW addressed the complexity issue with their new iDrive Controller, available in the 7 series sedan. Their solution was to replace most of the controls, knobs and displays of the dashboard with a single knob and display screen. BMW states that ``this user-friendly interface offers quick access to over 700 settings.''

    When one control does multiple operations, it requires a complex menu structure and choice of modes, which in turn promotes mode errors and other sources of error. It is best to have dedicated controls for critical functions, even at the expense of more buttons and knobs. Unfortunately, there is a design tradeoff between simplicity in appearance and simplicity in use. This is a dangerous design trap. Alas, consumers (and organizations) make purchase decisions based on appearances more than reality, so this is a fundamental conflict. But BMW did not have to choose between one knob and display screen or 720 separate controls: there are alternative designs between these extremes.

    BMW's user interface has been soundly trashed in the press. Let us hope they pay heed and hire professionals from the Human-Computer Interaction community (e.g., www.acm.org/sigchi) to help redesign their approach, from the initial assumptions upwards: this cannot be fixed with a simple patch or some new graphics.

    A very different problem is that faced by hotels. Business travelers expect high-speed Internet access, but the technologies of Internet connection make configuration overly difficult. Internet connections require setting numerous parameters. Worse, these change from location to location, ISP to ISP. My personal experience is that the installations seldom work completely at first. Although once connected it is possible to read email from POP servers, it is usually impossible to send without multiple telephone calls to service providers to get the SMTP information. SMTP was not designed with security in mind, so most ISPs will not send mail from foreign sites, forcing the traveler to negotiate the morass of unknown ISP providers from hotel to hotel. It is time to advance from the current SMTP toward a new standard that allows one setting to work from any location, much as POP servers now allow.

    Security is a major issue. A large number of intermediaries have arisen to increase security, including software firewalls, proxy servers and VPNs. Most travelers and hotel staff are insufficiently knowledgeable to navigate through these roadblocks. And everyone who changes their settings, successfully or not, faces the daunting task of resetting them afterwards.

    Yet another problem area is the proliferation of services on telephone systems. Twenty years ago I suggested that the only solution was more dedicated buttons plus display screens to guide the operations in simple language. We now have more buttons and screens, but simple language still eludes many design teams, probably because the writing is seldom done by professional technical writers. Cellphones complicate the story: as the number of functions increases, size and power constraints leave little room for more buttons or larger screens.

    As computer technologies migrate to other industries, ACM faces a growing challenge to promulgate appropriate human-centered development processes. More and more of the RISKS from technology come from deficient consideration of people, organizations, and cultures. ACM has taken small steps toward changing the balance. But as computers pervade the fabric of every human activity, more emphasis is required. Otherwise, the existing known RISKS will simply proliferate beyond imagination.

    Donald A. Norman (norman@nngroup.com) is professor of Computer Science at Northwestern University and co-founder and principal of the Nielsen Norman Group. He is author of The Invisible Computer.

    ========================================================

    Inside Risks 144, CACM 45, 6, June 2002

    Free Speech Online and Offline

    Ross Anderson

    Esther Dyson argued that as the world will never be perfect, whether online or offline, it is foolish to expect higher standards on the Internet than we accept in `real life'. Legislators are now turning this argument around, and arguing that they have to restrict traditional offline freedoms in order to regulate cyberspace.

    A shocking example is an export-control bill currently in Britain's parliament. The government version would enable the government to impose licensing restrictions on collaborations between scientists in the UK and elsewhere; to take powers to review and suppress scientific papers prior to publication; and even to license foreign students in British universities. By a large majority, the House of Lords amended it to exclude material in the public domain and information exchanged in the normal course of academic teaching and research. It has now gone back to the House of Commons, where ministers say they will amend it back again. This fight could go on for weeks.

    During the late 1990s, arms-export regulations prevented US nationals making cryptographic software available on their Web pages, or emailing it abroad. Phil Zimmermann, the author of PGP, was investigated by a Grand Jury for letting it `escape' to the Internet. The law was ridiculed by students wearing T-shirts printed with encryption source code (`Warning: this T-shirt is a munition!'), and challenged in the courts as an affront to free speech.

    For government, there was a risk that crypto software would escape. For liberty, there was a risk we ignored at the time: that the bad policy would escape. Although the Clinton administration later abandoned its approach as unworkable, that did not stop other governments trying to ape it. After Tony Blair was elected in 1997, he tried to take Britain down the American path; after much protest and many battles, the current bill is the result. His attempt to have a law with no embarrassing loopholes has resulted in one that is absolutely draconian. For example, someone accidentally learning the wrong type of secret can be prevented from ever leaving the UK. (The Lords amended the Bill to remove this unpleasantness; the government says it will reinstate it.) While this particular fight is mainly a matter for Brits, it is an example of a wider and worrying trend -- toxic overspill from attempts to regulate the Internet.

    There are many more examples. In the USA, Hollywood's anxiety about digital copying led to the Digital Millennium Copyright Act. This gives special status to mechanisms enforcing copyright claims: circumvention is now an offense. So, manufacturers are now bundling copyright protection with even more objectionable mechanisms, such as accessory control. For example, one games-console manufacturer builds into its memory cartridges a chip that performs some copyright control functions but whose main purpose appears to be preventing other manufacturers from producing compatible devices. There is no obvious way to reconcile the tension between competition and public policies on copyright.

    Meanwhile, worries about cybercrime are leading to a Europe-wide arrest warrant that overturns the time-honored principle of dual criminality, i.e., you can be extradited from one country to another only if there is prima facie evidence that you've committed a crime according to the laws of both countries. Now Germany has strict hate-speech laws (`Mein Kampf' is a banned book), while Britain does not. Right now, I could put an excerpt from that book on my website in the UK (or the USA) but not in Germany. But the new arrest warrant would allow the German police to extradite me from Britain, for an offense that doesn't exist in British law. Thus, free-speech rights online may be reduced to the lowest common denominator among the signatory nations.

    Why do we get so many bad laws about information? Many of them have to do (in some broad sense) with risks: with the perceived vulnerability of the Internet to hackers, bomb makers, credit-card thieves, pornographers, and other undesirables. There is massive hype from the computer security industry; when people get fed up with hearing about hackers, the risk changes to `cyberterrorism'. There are few or no balancing voices, as the interests of almost everyone involved in the security industry (vendors, government agencies, regulators, researchers) lie in talking up the threats. Journalists like the scare stories more than the rebuttals. Politicians and bureaucrats use them to build empires. After the .com boom, we are seeing the .gov version; and there's no sign of it bursting any time soon.

    We need better ways of dealing with risks realistically at the political level. Does that mean simply educating the public about risks, or do we need something else too?

    Ross Anderson heads the security group in the Computer Laboratory at Cambridge University in the UK http://www.cl.cam.ac.uk/users/rja14; Ross.Anderson@cl.cam.ac.uk

    ========================================================

    Inside Risks 143, CACM 45, 5, May 2002

    Risks of Inaction

    Lauren Weinstein

    Scientists and technologists create a variety of impressions in the eyes of society at large, some positive and others negative. In the latter category is the perception (often clearly a mischaracterization) that many individuals in these occupations are not involved with society in positive ways, making them easy to target for many of society's ills.

    It's not difficult to see how this simplistic stereotype developed. We technically-oriented folks can easily become so focused on the science and machines that we willingly leave most aspects of the deployment and use of our labors to others, who often don't solicit our advice -- or who may even actively disdain it.

    In the broad scope of technology over the centuries, there have been many innovators who lived to have second thoughts about their creations. From the Gatling gun to nuclear bombs and DNA science, the complex nature of the real world can alter inventions and systems in ways that their creators might never have imagined.

    It of course would be unrealistic and unwise for us to expect or receive total control over the ways in which society uses the systems we place into its collective hands. However, it is also unreasonable for the technical and scientific minds behind these systems to take passive and detached roles in the decision-making processes relating to the uses of their works.

    Within the computer science and software arenas, an array of current issues would be well served by our own direct and sustained inputs. The continuing controversy over the already-enacted Digital Millennium Copyright Act (DMCA) is one obvious example. Even more ominously, the newly proposed Consumer Broadband and Digital Television Promotion Act (CBDTPA), formerly known as the Security Systems Standards and Certification Act (SSSCA), is a draconian measure; it would greatly impact the ways in which our technologies could be exploited, controlled, and in some cases severely hobbled. We never planned for digital systems to create a war among the entertainment industry, the computer industry, and consumers, but in many ways that's what we're now seeing.

    Controversies are raging over a vast range of Internet-related issues, from the nuts and bolts of technology to the influence of politics. Concerns about ICANN (the Internet Corporation for Assigned Names and Numbers) -- the ersatz overseer of the Net -- have been rising to a fever pitch.

    Throughout all of these areas and many more, critical decisions relating to technology are frequently being made by politicians, corporate executives, and others with limited technical understanding -- often without any meaningful technological input other than that of paid lobbyists with their own selfish agendas.

    The technical and scientific communities do have associations and other groups ostensibly representing their points of view to government and others. But all too often the pronouncements of such groups seem timid and not particularly ``street-savvy'' in their approaches. Fears are often voiced about sounding too un-academic or expressing viewpoints on ethical matters rather than on technology or science itself, even when there is a clear interrelationship between these elements. Meanwhile, the lobbyists, who have the financial resources and what passes for a straight-talking style, have the ears of government firmly at their disposal.

    Computers and related digital technologies have become underpinnings of our modern world, and in many ways are no less fundamental than electricity or plumbing. However, it can be devilishly difficult to explain their complex effects clearly and convincingly to the powers-that-be and the world at large.

    As individuals, most of us care deeply about many of these issues -- but that is not enough. We must begin taking greater responsibility for the ways in which the fruits of our labors are used. We need to take on significantly more activist roles, and should accept no less from the professional associations and other groups that represent us. If we do not take these steps, we will have ceded any right to complain.

    Lauren Weinstein (lauren@privacy.org) is co-founder of People For Internet Responsibility http://www.pfir.org. He is moderator of the Privacy Forum http://www.vortex.com and a member of the ACM Committee on Computers and Public Policy.

    ========================================================

    Inside Risks 142, CACM 45, 4, April 2002

    Digital Evidence

    David WJ Stringer-Calvert

    Those of you concerned with privacy issues and identity theft will be familiar with the concept of dumpster diving. Trash often reveals the dealings of an individual or a corporation. The risks of revealing private information through the trash have led to a boom in the sale of paper shredders (and to awareness of the risk that shredded documents can be reassembled). However, how many of us take the same diligent steps with our digital information?

    The recovery of digital documents in the Enron case and the use of e-mail in the Microsoft anti-trust case have brought these concerns to the fore. For example, we are all more aware of the risks inherent in the efficient (``lazy'') method of deleting files used by modern operating systems, where files are `forgotten about' rather than actually removed from the drive.

    Sales of `wiper' software will certainly increase as this awareness grows, but that's not the end of the story. Overwriting data merely raises the bar on the sophistication required of the forensic examiner. To ensure reliable data storage, the tracks on hard-drive platters are wider than the influence of the heads, with a gap (albeit small) between tracks. Thus, even after wiper software has been applied, there may still be ghosts of the original data, just partially obscured.
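
    As a rough illustration of what such wiper tools do (this sketch is not from the column; the function name and pass count are arbitrary), the following Python fragment overwrites a file in place before unlinking it. As noted above, this merely raises the bar: track-edge remanence, journals, backups, and swap are all beyond its reach.

      import os

      def overwrite_and_delete(path, passes=3):
          """Overwrite a file's contents with random data, then remove it."""
          size = os.path.getsize(path)
          with open(path, "r+b") as f:
              for _ in range(passes):
                  f.seek(0)
                  f.write(os.urandom(size))
                  f.flush()
                  os.fsync(f.fileno())   # push each pass to the device, not just the cache
          os.remove(path)                # finally drop the directory entry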

    So, what more can we do? Clearly, we face a tradeoff between the cost to the user and the cost to the investigator. At the far extreme, we could take a hammer to the drive and melt down the resulting fragments, but this is infeasible without a large budget for replacement disks.

    One could booby-trap the computer, such that if a certain action isn't taken at boot time, the disk is harmed in some way. Forensics investigators are mindful of this, however, and take care to examine disks in a manner that does not tamper with the evidence. If we're open to custom drives, we could push the booby-trap into the drive hardware, causing it to fail when hooked up to investigative hardware (or, more cunningly, produce a false image of a file-system containing merely innocent data).

    Another approach is to treat file recovery as a fait accompli and ensure that the recovered data is not usable as evidence. Encryption clearly has a role to play here. An encrypting file-system built into your operating system can be helpful, but may provide only a false sense of security -- unless you have adequate assurance of its cryptanalytic strength (which is likely to be weakened if there is common structure to your data) and of the strength of the underlying operating system. Per-file encryption with a plurality of keys might help, but that raises the question of key management and key storage.

    Suppose that possible key escrow, backdoors, and poorly implemented cryptographic software are below your paranoia threshold. Another useful step can then be secret sharing (A. Shamir, "How to Share a Secret", Comm. ACM 22, 11, 612--613, November 1979): spread your data in fragments around the network such that k of the n fragments must be brought together to decipher the original file. In a carefully designed system, any k-1 fragments yield no useful insight into the contents of the file; k and n can be tuned according to the paranoia required, including placing no more than k-1 fragments within the jurisdiction of the investigating agency.
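
    For the curious, here is a minimal sketch of Shamir's k-of-n scheme over a prime field; the prime, parameter values, and function names are illustrative choices rather than anything prescribed by the column, and production use would call for a vetted library plus integrity protection on the shares.

      import secrets

      PRIME = 2**127 - 1   # a Mersenne prime comfortably larger than small secrets

      def split(secret, k, n):
          """Return n shares, any k of which reconstruct the secret."""
          # Random polynomial of degree k-1 whose constant term is the secret.
          coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
          shares = []
          for x in range(1, n + 1):
              y = 0
              for c in reversed(coeffs):        # Horner evaluation mod PRIME
                  y = (y * x + c) % PRIME
              shares.append((x, y))
          return shares

      def combine(shares):
          """Lagrange interpolation at x = 0 recovers the constant term."""
          secret = 0
          for xi, yi in shares:
              num, den = 1, 1
              for xj, _ in shares:
                  if xj != xi:
                      num = (num * -xj) % PRIME
                      den = (den * (xi - xj)) % PRIME
              inv = pow(den, PRIME - 2, PRIME)  # modular inverse (PRIME is prime)
              secret = (secret + yi * num * inv) % PRIME
          return secret

      shares = split(123456789, k=3, n=5)
      assert combine(shares[:3]) == 123456789   # any 3 of the 5 shares suffice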

    Clearly, there are a number of steps we can take to push the evidence as far as possible beyond the reach of those who might use it to incriminate us. But one question not often raised in this context is: why should we bother? Given the lack of strong authentication in most computing systems, it may be impossible to establish beyond reasonable doubt that the files in question are even yours.

    Furthermore, there are many risks of trusting recovered digital evidence, given the ease with which digital documents can be fraudulently created, modified, or accidentally altered, or their time stamps manipulated. Corroboration by independent sources of evidence is usually required to establish a case, even for non-digital evidence, although when all of the corroborating sources are themselves digital, the risks remain. See, for example, the discussion of the potential holes in evidence in the case of the Rome Labs intrusion in 1994 (Peter Sommer, "Intrusion Detection Systems as Evidence", BCS Legal Affairs Committee, March 2000).

    ========================================================

    Inside Risks 141, CACM 45, 3, March 2002

    Risks of Linear Thinking

    Peter J. Denning and James Horning

    For over half a century we have classified research on a scale from basic to applied. Basic research is a quest for fundamental understanding without regard to potential utility. Applied research is technology development that solves near-term problems. These two models have different diffusion times from research result to practice -- often 20-50 years for basic research and 2-3 years for applied. Because the return on investment of basic research is so far in the future, the Federal government is the main sponsor and university faculty are the main investigators.

    For over a generation we have classified software development on a scale from technology-centered to human-centered. Technology-centered work is focused on advancing software technology with new functions, algorithms, protocols, and efficiencies. Human-centered work is focused on making software more useful to those paying for or using it.

    These two one-dimensional (linear) scales create false dichotomies, obscure fundamental issues, and encourage tensions that hurt research and software development.

    Most of our academic departments place high value on basic and technology-centered work. Faculty who do applied or human-centered projects often find themselves disadvantaged for tenure and promotion and occasionally the object of scorn. Most eventually toe the line or leave the academy. (See National Research Council, Academic careers for experimental computer scientists, NRC Press, 1994.) The resulting bias prevents us from valuing and teaching the full range of vital software development topics. Many of the risks discussed in this forum over the years will never be fully addressed as long as this bias persists.

    In 1997, Donald Stokes (Pasteur's Quadrant: Basic Science and Technological Innovation, Brookings Institution, 1997 http://www.brook.edu) put the research issue in a new light. He traced the conceptual problem back to Vannevar Bush, who in 1945 coined the term basic research, characterized it as the pacemaker of technological progress, and claimed that in mixed settings applied research will eventually drive out basic. Bush thus put the goals of understanding and use into opposition, a belief that is at odds with the actual experience of science. Stokes proposes that we examine research in two dimensions, not one:
    * Inspired by considerations of use?
    * Quest for fundamental understanding?

    He names the (yes,yes) quadrant Pasteur's, the (no,yes) quadrant Bohr's, and the (yes,no) quadrant Edison's. He did not name the (no,no) quadrant, although some will recognize this quadrant as the home of much junk science.

    Those who favor applied research call for greater emphasis on Pasteur's+Edison's quadrants, and those who favor basic, on Bohr's+Pasteur's. In fact, most of the basic-versus-applied protagonists will, if shown the diagram of four quadrants, agree that these three correspond to vital sectors of research, none of which is inherently superior to the others.

    A similar model can be applied to software development. Here the common belief is that the attention of the designer can either be focused on the technology itself or on the user, or somewhere in between. Michael Dertouzos (The Unfinished Revolution, Harper Collins, 2001) recently documented 15 chronic design flaws in software and said that they will be eliminated only when we learn human-centered design, design that seeks software that serves people and does not debase or subvert them. Dertouzos called for his fellow academics to teach human-centered design and not to scorn software developers who interact closely with their customers. Some critics incorrectly concluded that he therefore also supported reducing attention to the world of software technology. However, we can view software development in two dimensions, rather than one:
    * Inspired by considerations of utility and value?
    * Seeks advancement of software technology?

    Three of these quadrants correspond to important software development sectors:

    (yes,yes) -- projects to create new technologies in close collaboration with their customers (examples: MIT Multics, AT&T Unix, Xerox PARC Alto, IBM System R, World Wide Web).

    (yes,no) -- projects to employ existing knowledge to solve human problems (examples: Harlan Mills' work, CHI, much application development).

    (no,yes) -- projects to create new software technologies for their intrinsic interest (examples: many university research projects).

    The final (no,no) quadrant is the home of many projects purely for the amusement of the developer. Many software developers will agree that the first three quadrants are all important and that none is inherently superior to the others. Perhaps this two-dimensional interpretation will help unstick our thinking about software development.

    Peter Denning (pjd@cs.gmu.edu) has contributed to ACM for many years in many capacities. Jim Horning (horning@acm.org) has been involved in computing research for more than 30 years, and is presently at InterTrust Technologies.

    ========================================================

    Inside Risks 140, CACM 45, 2, February 2002

    The Homograph Attack

    Evgeniy Gabrilovich and Alex Gontmakher

    [NOTE: Choose an appropriate Cyrillic character set for your browser for this column only, if your browser does not recognize the russian for gazeta.ru and the russian c and o in microsoft.]

    Old-timers remember slashes (/) through zeros [or through the letter O where there was no difference] in program listings to avoid confusing them with the letter O. This has long since been rendered obsolete by advances in editing tools and font differentiation. However, the underlying problem of character resemblance remains, and has now emerged as a security problem.

    Let us begin with a risks case. On April 7, 1999, an anonymous site published a bogus story intimating that the company PairGain Technologies (NASDAQ:PAIR) was about to be acquired for approximately twice its market value. The site employed the look and feel of the Bloomberg news service, and thus appeared quite authentic to unsuspecting users. A message containing a link to the story was simultaneously posted to the Yahoo message board dedicated to PairGain. The link referred to the phony site by its numerical IP address rather than by name, and thus obscured its true identity. Many readers were convinced by the Bloomberg look and feel, and accepted the story at face value despite its suspicious address. As a result, PairGain stock first jumped 31%, and then fell drastically, inflicting severe losses on investors. A variant of this hoax might have used a domain named BL00MBERG.com, with zeros replacing the letter o. However, forthcoming Internet technologies have the potential to make such attacks much more elusive and devastating.

    A new initiative, promoted by a number of Internet standards bodies including IETF and IANA, allows one to register domain names in national alphabets. This way, for example, the Russian news site gazeta.ru (gazeta means newspaper in Russian) might register a more appealing name in Russian ("газета.ру"). The initiative caters to the genuine needs of non-English-speaking Internet users, who currently find it difficult to access Web sites otherwise. Several alternative implementations are currently being considered, and we can expect the standardization process to be completed soon.

    The benefits of this initiative are indisputable. Yet the very idea of such an infrastructure is compromised by the peculiarities of world alphabets. Revisiting our newspaper example, one can observe that the Russian letters a, c, e, o, p, and y are indistinguishable in writing from their English counterparts. Some of the letters (such as a) are close etymologically, while others look similar by sheer coincidence. (As it happens, other languages written in Cyrillic may cause similar collisions.)

    With the proposed infrastructure in place, numerous English domain names may be homographed -- i.e., maliciously misspelled by substitution of non-Latin letters. For example, the Bloomberg attack could have been crafted much more skillfully, by registering a domain name bloomberg.com, where the letters o and e have been faked with Russian substitutes. Without adequate safety mechanisms, this scheme can easily mislead even the most cautious reader.
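
    To make the substitution concrete, here is a minimal Python illustration (not from the column itself) that builds a Cyrillic-spoofed name and shows that it renders like the Latin original while comparing unequal:

      import unicodedata

      genuine = "bloomberg.com"
      spoofed = "bl\u043e\u043emberg.com"     # U+043E is CYRILLIC SMALL LETTER O
      print(genuine, spoofed)                 # the two names render almost identically
      print(genuine == spoofed)               # False: different code points
      print(unicodedata.name(spoofed[2]))     # CYRILLIC SMALL LETTER O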

    Sounds frightening? Here is something more scary.

    One day John Hacker similarly imitates the name of your bank's Web site. He then uses the newly registered domain to install an eavesdropping proxy, which transparently routes all the incoming traffic to the real site. To make the bank's customers go through his site, John H. hacks several prominent portals which link to the bank, substituting the bogus address for the original one. And now John H. has access to unending streams of passwords to bank accounts. Note that this plot can be in service for years, while customers unfortunate enough to have bookmarked the new link might use it forever.

    Several approaches can be employed to guard against this kind of attack. The simplest fix would indiscriminately prohibit domain names that mix letters from different alphabets, but this would block certain useful names, such as CNNenEspanol.com with a tilde over the last n. More practically, the browser could highlight international letters present in domain names with a distinct color, although many users may find this technique overly intrusive. A more user-friendly browser might highlight only truly suspicious names, such as ones that mix letters within a single word. For additional security, the browser could use a map of identical-looking letters to search for collisions between the requested domain and similarly written registered ones.
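
    A minimal sketch of the mixed-script check suggested above follows; the heuristic and the function name are illustrative assumptions, and a deployed defense would also need the map of look-alike letters just described.

      import unicodedata

      def mixes_scripts(label):
          """Flag a domain label whose letters come from more than one script."""
          scripts = set()
          for ch in label:
              if ch.isalpha():
                  # The first word of the Unicode character name is a coarse
                  # script tag, e.g. LATIN or CYRILLIC.
                  scripts.add(unicodedata.name(ch).split()[0])
          return len(scripts) > 1

      print(mixes_scripts("bloomberg"))             # False
      print(mixes_scripts("bl\u043e\u043emberg"))   # True: LATIN and CYRILLIC mixed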

    Caveat: To demonstrate the feasibility of the described attack, we registered a homographed domain name imitating http://www.microsoft.com, with the corresponding Russian letters instead of c and o: http://www.miсrоsоft.com. While this name may be tricky to type in, you can conveniently access it from http://www.cs.technion.ac.il/~gabr/papers/homograph.html.

    (Predictably, MICR0S0FT.com, MICR0SOFT.com, and MICROS0FT.com are already registered, as is BL00MBERG.com. John H. has not been wasting his time.)

    So, next time you see microsoft.com, where does it want to go today?

    Evgeniy Gabrilovich (gabr@acm.org) and Alex Gontmakher (gsasha@cs.technion.ac.il) are Ph.D. students in Computer Science at the Technion -- Israel Institute of Technology. Evgeniy is a member of the ACM and the IEEE; his interests involve computational linguistics, information retrieval, and machine learning. Alex's interests include parallel algorithms and constructed languages.

    ========================================================

    Inside Risks 139, CACM 45, 1, January 2002

    Uncommon Criteria

    Rebecca Mercuri

    The software development process can benefit from the use of established standards and procedures to assess compliance with specified objectives, and reduce the risk of undesired behaviors. One such international standard for information security evaluation is the Common Criteria (CC, ISO IS 15408, 1999, http://csrc.nist.gov/cc). Although use of the CC is currently mandated in the United States for government equipment (typically military-related) that processes sensitive information, the ``loss, misuse, or unauthorized access to or modification of which could adversely affect the national interest or the conduct of Federal programs'' (Computer Security Act of 1987), it has been voluntarily applied in other settings (such as health care). In the USA, oversight of CC product certification is provided by the National Institute of Standards and Technology (NIST).

    The goal of the CC is to provide security assurances via anticipation and elimination of vulnerabilities in the requirements, construction, and operation of information technology products through testing, design review, and implementation. Assurance is expressed by degrees, as defined by selection of one of seven Evaluation Assurance Levels (EALs), and then derived through assessment of correct implementation of the security functions appropriate to the level selected, and evaluation in order to obtain confidence in their effectiveness.

    However, the use of standards is not a panacea, because product specifications may contain simultaneously unresolvable requirements. Even the CC, which is looked upon as a 'state of the art' standard, disclaims its own comprehensiveness, saying that it is ``not meant to be a definitive answer to all the problems of IT security. Rather, the CC offers a set of well understood security functional requirements that can be used to create trusted products or systems reflecting the needs of the market.'' As it turns out, the CC methodology falls short in addressing and detecting all potential design conflicts.

    This major flaw of the CC is directly related to its security functional requirement hierarchy. In selecting an EAL appropriate to the product under evaluation, the CC specifies numerous dependencies among the items necessary for implementing a level's criteria of assurance. In essence, it formulates a mapping whereby if you choose to implement X, you are required to implement Y (and perhaps also Z, etc.). But the CC fails to include a similar mapping for counter-indications, and does not show that if you implement J then you cannot implement K (and perhaps also not L, etc.).

    A good example of how this becomes problematic arises when both anonymity and auditability are required. The archetypical application of such simultaneous needs occurs in off-site election balloting, but one can also find this in such arenas as Swiss-style banking or AIDS test reporting. If the CC process were to be used with voting (to date, no such standards have been mandated, but NIST involvement is now being considered), it would have to assure that each ballot is cast anonymously, unlinkably, and unobservably, protecting the voter's identity from association with the voting selections. Because access to the ballot-casting modules requires prior authentication and authorization, pseudonymity through the use of issued passcodes seems to provide a plausible solution. But the CC does not indicate how it is possible to maintain privacy while also resolving the additional requirement that all aliases must ultimately be traceable back to the individual voters in order to assure validity.

    Furthermore, the need for anonymity precludes the use of traditional transaction logging methods for providing access assurances. Randomized audit logs have been proposed by some voting system vendors, but equipment or software malfunction, errors, or corruption can easily render these self-generated trails useless. Multiple electronic backups provide no additional assurances, since if the error occurs between the point of user data entry and the writing of the cast ballot, all trails would contain the same erroneous information. Pure anonymity and unlinkability, then, are possible only if authentication and authorization transactions occur separately from balloting, but this is difficult to achieve in a fully-electronic implementation.

    The remedy to this and other such flaws in the CC involves augmentation with extensions that go beyond the current standard. For voting, one solution is to produce voter-verified paper ballots for use in recounts. Thus, the use of the CC in the secure product development cycle is encouraged, but prudent application and consideration of risks imposed by conflicting requirements is also necessary.

    Rebecca Mercuri (mercuri@acm.org) is an assistant professor of computer science at Bryn Mawr College with a PhD from the University of Pennsylvania. Her dissertation, Electronic Vote Tabulation Checks and Balances, contains a detailed discussion of the common criteria evaluation process. See http://www.notablesoftware.com/evote.html for further information, including a computer security checklist.

    ========================================================

    Inside Risks 138, CACM 44, 12, December 2001

    Risks of National Identity Cards

    Peter G. Neumann and Lauren Weinstein

    In the wake of September 11th, the concept of a National Identity (NID) Card system has been getting considerable play, largely promoted by persons who might gain financially or politically from its implementation, or by individuals who simply do not understand the complex implications of such a plan. Authentic unique identifiers do have some potentially useful purposes, such as staving off misidentifications and false arrests. However, there are many less-than-obvious risks and pitfalls to consider relating to the misuse of NID cards.

    In particular, we must distinguish between the apparent identity claimed by an NID and the actual identity of an individual, and consider the underlying technology of NID cards and the infrastructures supporting those cards. It's instructive to consider the problems of passports and drivers' licenses. These supposedly unique IDs are often forged. Rings of phony ID creators abound, for purposes including both crime and terrorism. Every attempt thus far at hardening ID cards against forgery has been compromised. Furthermore, insider abuse is a particular risk in any ID infrastructure. One such example occurred in Virginia, where a ring of motor-vehicle department employees was issuing unauthorized drivers' licenses for a modest fee.

    The belief that ``smart'' NID cards could provide irrefutable biometric matches without false positives and negatives is fallacious. Also, such systems will still be cracked, and the criminals and terrorists we're most concerned about will find ways to exploit them, using the false sense of security that the cards provide to their own advantage -- making us actually less secure as a result!

    Another set of risks arises with respect to the potential for abuse of the supporting databases and communication complexes that would be necessary to support NIDs -- card readers, real-time networking, monitoring, data mining, aggregation, and probably artificially intelligent inference engines of questionable reliability. The opportunities for overzealous surveillance and serious privacy abuses are almost limitless, as are opportunities for masquerading, identity theft, and draconian social engineering on a grand scale.

    The RISKS archives relate numerous examples of misuses of law enforcement, National Crime Information Center (NCIC), motor vehicle, Social Security, and other databases, by authorized insiders as well as total outsiders. RISKS readers may be familiar with the cases of the stalker who murdered the actress Rebecca Schaeffer after using DMV data to find her, and the former Arizona law enforcement officer who tracked and killed an ex-girlfriend aided by insider data. The US General Accounting Office has reported widespread misuse of NCIC and other data. Social Security Number abuse is endemic.

    Seemingly high-tech smart-card technology has been compromised with surprisingly little high-tech effort. Public-key infrastructures (PKI) for NID cards are also suspect due to risks in the underlying computer infrastructures themselves, as noted in the January/February 2000 columns on PKI risks. Recall that PKI does not prove the identity of the bearers -- it merely gives some possible credence relating to the certificate issuer. Similar doubts will exist relating to NID cards and their authenticity. The November 2000 RISKS column warned against low-tech subversions of high-tech solutions via human work-arounds, a major and highly likely pitfall for any NID.

    The NID card is touted by some as a voluntary measure (at least for U.S. citizens). The discriminatory treatment that non-card-holders would surely undergo makes this an obvious slippery slope -- the cards would likely become effectively mandatory for everyone in short order, and subject to the same abuses as other more conventional IDs. The road to an Orwellian police state of universal tracking, but actually reduced security, could well be paved with hundreds of millions of such NID cards.

    We have noted here before that technological solutions entail risks that should be identified and understood in advance of deployment to the greatest extent possible, regardless of any panic of the moment. The purported (yet unproven) ``benefits'' of an NID card system notwithstanding, these risks deserve to be discussed and understood in detail before any decisions regarding its adoption in any form should be made.

    Peter Neumann (neumann@pfir.org) and Lauren Weinstein (lauren@pfir.org) moderate the ACM RISKS Forum (www.risks.org) and the PRIVACY Forum (www.privacyforum.org), respectively. They are co-founders of People For Internet Responsibility: www.pfir.org

    NOTE: Over 5 years ago, Simon Davies quite rationally addressed many common questions relating to such ID cards. See his Frequently Asked Questions, August 24, 1996: http://www.privacy.org/pi/activities/idcard/idcard_faq.html. See also Chris Hibbert's FAQ on SSNs: http://cpsr.org/cpsr/privacy/ssn/ssn.faq.html. [NOTE: The above URL has been superseded by http://cpsr.org/prevsite/cpsr/privacy/ssn/index.html. If that does not work, search on "Chris Hibbert SSN FAQ".]

    ========================================================

    Inside Risks 137 CACM 44, 11, November 2001

    Risks of Panic

    Lauren Weinstein and Peter G. Neumann

    The horrific events of September 11, 2001, have brought grief, anger, fear, and many other emotions. As we write these words a few weeks later, risks issues are now squarely on the world's center stage, particularly technological risks relating to security and privacy.

    With the nightmare of recent events still in a haze of emotions, now is not the time to delve into the technical details of the many risks involved and their impacts on the overall issues of terrorism. We can only hope that future risks warnings will be given greater credence than has typically been the case in the past.

    We all want to prevent future attacks, and see terrorists brought to justice for their heinous actions. But this does not suggest that we should act precipitously without carefully contemplating the potential implications, especially when there has been little (if any) meaningful analysis of such decisions' real utility or effects.

    Calls for quick action abound, suggesting technical and non-technical approaches intended to impede future terrorism or to calm an otherwise panicky public. Below is a sampling of some current proposals (all in a state of flux and subject to change by the time you read this) that may have various degrees of appeal at the moment. However, not only is it highly questionable whether these ideas can achieve their ostensible goals, but it's certainly true that all of them carry a high risk of significant and long-lasting deleterious effects on important aspects of our lives. While improvements in our intelligence and security systems are clearly needed, we should not even be contemplating the implementation of any of the items below without extremely careful consideration and soul-searching:

    * Increased use of wiretapping, without many existing legal restraints

    * Widespread monitoring of e-mail, URLs, and other Internet usage

    * Banning strong encryption without ``backdoors'' for government access. (In general, the existence of such backdoors creates a single point of attack likely to be exploitable by unauthorized as well as authorized entities, possibly increasing crime and terrorism risks instead of reducing them [1].)

    * Face and fingerprint identification systems

    * Arming of pilots; remote-controlled airliners; biometrically-locked airliner controls

    * Indefinite detention without trial

    * Life in prison without parole for various actions that some proposals would broadly interpret as ``terrorist'' (potentially including some security research, petty computer hacking, and other activities that clearly do not fall under currently established definitions of ``terrorism'')

    * National ID cards (such as smartcards or photographic IDs), which have only limited potential to enhance security but also entail an array of serious risks and other negative characteristics.

    * Massive interagency data sharing and loosened ``need to know'' restrictions on personal information related to areas such as social security numbers, drivers' license information, educational records, domestic and foreign intelligence data, etc. All such data can lead directly not only to identity theft but also to a wide range of other abuses.

    These and many other proposals are being made with little or no evidence that they would have prevented the events of September 11th or that they would deter future, highly adaptable terrorists. Some of these concepts, though their motives may often be laudable, could actually reduce the level of security and increase the risks of terrorist attacks. The details of these effects will be topics for much future discussion, but now is not the time for law-enforcement ``wish lists'' or knee-jerk reactions, including many ideas that have been soundly rejected in the past and which have no greater value, and no fewer risks, than they did prior to September 11th.

    We must not obliterate hard-won freedoms through hasty decisions. To do so would be to give the terrorists their ultimate victory.

    Our best wishes to you and yours.

    Lauren Weinstein (lauren@pfir.org) and Peter Neumann (neumann@pfir.org) moderate the PRIVACY Forum (www.privacyforum.org) and the ACM RISKS Forum (www.risks.org), respectively. They are co-founders of People For Internet Responsibility (www.pfir.org).

    1. Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, and Bruce Schneier, The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption, http://www.cdt.org/crypto/risks98/; reprinting an earlier article in the World Wide Web Journal, 2, 3, Summer 1997, with a new preface.

    2. J.J. Horning, P.G. Neumann, D.D. Redell, J. Goldman, D.R. Gordon, Computer Professionals for Social Responsibility, A Review of NCIC 2000 (report to the Subcommittee on Civil and Constitutional Rights of the Committee on the Judiciary, United States House of Representatives), February 1989, Palo Alto, California. (This reference discusses among other things some of the privacy and life-critical risks involved in monitoring and tracking within law enforcement.)

    3. Also see various Web sites for further background: http://www.acm.org, http://catless.ncl.ac.uk/Risks/ and http://www.privacyforum.org, http://www.pfir.org, http://www.epic.org, etc.

    ========================================================

    Inside Risks 136 CACM 44, 10, October 2001

    The Perils of Port 80

    Stephan Somogyi and Bruce Schneier

    In the months that the Code Red worm and its relatives have traveled the Net, they've caused considerable consternation among users of Microsoft's Internet Information Server, and elicited abundant Schadenfreude from unaffected onlookers. Despite the limited havoc that it wrought, the Code Red family highlights a much more pernicious problem: the vulnerability of embedded devices with IP addresses, particularly those with built-in Web servers.

    Thus far, the Code Red worms work their way through self-generated lists of IP addresses and contact each address's port 80, the standard HTTP port. If a server answers, the worm sends an HTTP request that forces a buffer overflow on unpatched IIS servers, compromising the entire computer.

    Any effect that these worms have on other devices that listen on port 80 appears to be unintended. Cisco has admitted that some of its DSL routers are susceptible to denial of service: when an affected router's embedded Web server is contacted by Code Red, the router goes down. HP print servers and 3Com LANmodems seem to be similarly affected; other network-infrastructure hardware likely suffered, too.

    HTTP has become the lingua franca of the Internet. Since Web browsers are effectively ubiquitous, many hardware and software companies can't resist making their products' functions visible -- and often controllable -- from any Web browser. Indeed, it almost seems as if all future devices on the Net will be listening on port 80. This increasing reliance on network-accessible gadgetry will return to haunt us; Code Red is only a harbinger.

    Sony cryptically announced in April that it would endow all future products with IP addresses -- a technically implausible claim, but nonetheless a clear statement of intent. Car vendors are experimenting with wirelessly accessible cars that can be interrogated and controlled from a Web browser. The possibilities for nearly untraceable shenanigans perpetrated by the script kiddie next door after working out your car's password are endless. This problem won't be solved by encrypting the Web traffic between car and browser, either.

    The rise of HTTP as a communications common denominator comes from ease of use, for programmer and customer alike. All customers need is a Web browser and the device's IP address, and they're set. Creating a lightweight server is trivial for developers, especially since both in- and outbound HTTP data is text.

    Even more attractive, HTTP traffic is usually allowed through firewalls and other network traffic barriers. Numerous non-HTTP protocols are tunneled via HTTP in order to ease their passage.

    But HTTP isn't the miscreant. The problem is created by the companies that embed network servers into products without making them sufficiently robust. Bullet-proof design and implementation of software -- especially network software -- in embedded devices is no longer an engineering luxury. Customer expectation of reliability for turnkey gadgets is higher than that for PC-based systems. The successful infiltration of the Code Red worms well after the alarm was sounded is eloquent proof that getting it right the first time has become imperative.

    Given the ease of implementation and small code size of a lightweight Web server, it's particularly disturbing that such software isn't engineered with greater care. Common errors that cause vulnerabilities -- buffer overflows, poor handling of unexpected types and amounts of data -- are well understood. Unfortunately, features still seem to be valued more highly among manufacturers than reliability. Until that changes, Code Red and its ilk will continue unabated.
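
    As a minimal sketch of that defensive posture (illustrative only; the size limit, method list, and function names are arbitrary assumptions, not a vetted design), an embedded HTTP listener can bound its input and reject anything unexpected before attempting to parse it:

      import socket

      MAX_REQUEST_LINE = 1024            # refuse oversized request lines outright

      def read_request_line(conn):
          """Read one CRLF-terminated request line, enforcing a hard size limit."""
          data = b""
          while b"\r\n" not in data:
              chunk = conn.recv(256)
              if not chunk or len(data) + len(chunk) > MAX_REQUEST_LINE:
                  return None            # peer closed, or request too large
              data += chunk
          line = data.split(b"\r\n", 1)[0]
          parts = line.split(b" ")
          # Accept only the small set of inputs this device actually expects.
          if len(parts) != 3 or parts[0] not in (b"GET", b"HEAD"):
              return None
          return line

      def serve_once(port=8080):
          """Toy wrapper showing where the validation sits."""
          with socket.socket() as srv:
              srv.bind(("", port))
              srv.listen(1)
              conn, _ = srv.accept()
              with conn:
                  if read_request_line(conn) is None:
                      conn.sendall(b"HTTP/1.0 400 Bad Request\r\n\r\n")
                  else:
                      conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nok\n")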

    One example of doing it right is the OpenBSD project, whose developers have audited its kernel source code since the mid-1990s, and have discovered numerous vulnerabilities, such as buffer overflows, before they were exploited. Such proactive manual scrutiny of code is labor intensive and requires great attention to detail, but its efficacy is irrefutable. OpenBSD's security track record -- no remotely exploitable vulnerabilities found in the past four years -- speaks for itself.

    Like sheep, companies and customers have been led along the path of least resistance by the duplicitous guide called convenience. HTTP is easy: easy to implement, easy to use, and easy to co-opt. With a little diligence and forethought, it is also easy to secure, as are other means of remote network access. HTTP wasn't originally designed to be all things to all applications, but its simplicity has made it an understandable favorite. But with this simplicity also comes the responsibility on the part of its implementors to make sure it's not abused.

    Stephan Somogyi and Bruce Schneier

    References:

    Advisories:
    CERT advisories
    Cisco advisory
    Microsoft advisory about the Index Server ISAPI Extension buffer overflow
    OpenBSD strlcat/strlcpy USENIX paper
    Stanford meta-compilation research
    HotMail/FedEx compromise
    AT&T blocking of port 80
    Other affected devices: 3Com LANmodems, Xerox printers, Alcatel release (redacted)

    Stephan Somogyi writes frequently -- and speaks occasionally -- on technology, business, design, and distilled spirits for paper and online publications worldwide.

    Bruce Schneier, CTO, Counterpane Internet Security, Inc. Ph: 408-777-3612 19050 Pruneridge Ave, Cupertino, CA 95014. Internet security newsletter: http://www.counterpane.com/crypto-gram.html

    ========================================================

    Inside Risks 135 CACM 44, 9, September 2001

    Web Cookies: Not Just a Privacy Risk

    Emil Sit and Kevin Fu

    Most people have heard about the risks of Web cookies in the context of user privacy. Advertisers such as DoubleClick use cookies to track users and deliver targeted advertising, drawing significant media attention [1]. But cookies are also used to authenticate users to personalized services, which is at least as risky as using cookies to track users.

    A cookie is a key/value pair sent to a browser by a Web server to capture the current state of a Web session. The browser automatically includes the cookie in subsequent requests. Servers can specify an expiration date for a cookie, but the browser is not guaranteed to discard the cookie. Because there are few restrictions on their contents, cookies are highly flexible and easily misused.

    Cookies have been used for tracking and authentication. An advertiser can track your movements between Web sites because the first banner-ad presented to you can set a cookie containing a unique identifier. As you read subsequent advertisements, the advertiser can construct a profile about you based on the cookies it receives from you. Cookies can also authenticate you for multistep Web transactions. For example, WSJ.com sets a cookie to identify you after you login. This allows you to download content from WSJ.com without having to re-enter a password. E-commerce sites like Amazon.com use cookies to associate you with a shopping cart. In all cases, a valid cookie will grant access to data about you, but the information protected by an authentication cookie is especially sensitive. Unlike tracking cookies, authentication cookies must be protected from both exposure and forgery.

    Unfortunately, cookies were not designed with these protections in mind. For example, there is no standard mechanism to establish the integrity of a cookie returned by a browser, so a server must provide its own method. As might be expected, some servers use much better methods than others. The cookie specification also relies heavily on the cooperation of the user and the browser for correct operation. Despite the lack of security in the design of cookies, their flexibility makes them highly attractive for authentication. This is especially true in comparison to mechanisms like HTTP Basic Authentication or SSL that have fixed requirements, are not extensible, and are confusing to users. Thus cookie-based authentication is very popular and often insecure, allowing anything from extension of privileges to the impersonation of users.

    Most sites do not use cryptography to prevent forgery of cookie-based authenticators. The unsafe practice of storing usernames or ID numbers in cookies illustrates this. In such a scheme, anyone can impersonate a user by substituting the victim's username or ID number in the cookie. Even schemes that do use cryptography often crumble under weak cryptanalytic attacks. Designing a secure cookie-based authentication mechanism is difficult because the cookie interface is not amenable to strong challenge-response protocols. Thus, many designers without clear security requirements invent weak, home-brew authentication schemes [2].
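
    As a minimal sketch in the spirit of the recommendations in [2] (the key, field layout, and helper names are illustrative assumptions, and this alone does not prevent replay of a cookie stolen from an unencrypted connection), a server can bind the username and an expiration time under a keyed MAC that clients cannot forge or extend:

      import hashlib, hmac, time

      SERVER_KEY = b"keep-this-secret-on-the-server"   # hypothetical server-side key

      def issue_cookie(username, lifetime_s=3600):
          """Build a cookie value carrying a username, an expiration, and a MAC."""
          expires = int(time.time()) + lifetime_s
          payload = f"user={username}&exp={expires}"
          tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
          return f"{payload}&mac={tag}"

      def verify_cookie(cookie):
          """Accept the cookie only if the MAC matches and it has not expired."""
          try:
              payload, tag = cookie.rsplit("&mac=", 1)
              fields = dict(kv.split("=", 1) for kv in payload.split("&"))
          except ValueError:
              return False
          expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
          return hmac.compare_digest(tag, expected) and int(fields["exp"]) > time.time()

      cookie = issue_cookie("alice")
      print(verify_cookie(cookie))                          # True
      print(verify_cookie(cookie.replace("alice", "bob")))  # False: MAC no longer matches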

    Many sites also rely on cookie expiration to automatically terminate a login session. However, you can modify your cookies to extend expiration times. Further, most HTTP exchanges do not use SSL to protect against eavesdropping: anyone on the network between the two computers can overhear the traffic. Unless a server takes stronger precautions, an eavesdropper can steal and reuse a cookie, impersonating a user indefinitely.

    These examples illustrate just a few of the common problems with cookie-based authentication. Web site designers must bear these risks in mind, especially when designing privacy policies and implementing Web sites. Although there is currently no consensus on the best design practices for a cookie authentication scheme, we offer some guidance [2]. To protect against the exposure of your own personal data, your best (albeit extreme) defense is to avoid shopping online or registering with online services. Disabling cookies makes any use of cookies a conscious decision (you must re-enable cookies) and prevents any implicit data collection. Unfortunately, today's cookie technology offers no palatable solution for users to securely access personalized Web sites.

    1. Hal Berghel, "Digital Village: Caustic cookies," Communications of the ACM 44, 5, 19-22, May 2001.

    2. Kevin Fu, Emil Sit, Kendra Smith, Nick Feamster, ``Dos and Don'ts of Client Authentication on the Web'', Proc. of 10th USENIX Security Symposium, August 2001. [NOTE: This paper won the best student paper award! Also, see http://cookies.lcs.mit.edu/ for on-going research in cookie authentication: The Cookie Eaters: Cookie Collection Project. Neumann]

    Emil Sit (sit@mit.edu) and Kevin Fu (fubob@mit.edu) are graduate students at the MIT Laboratory for Computer Science in Cambridge, MA.

    ========================================================

    Inside Risks 134 CACM 44, 8, August 2001

    Risks in E-mail Security

    Albert Levi and Cetin Kaya Koc

    It is easy to create bogus electronic mail with someone else's e-mail name and address: SMTP servers don't check sender authenticity. S/MIME (Secure/Multipurpose Internet Mail Extensions) can help, as can digital signatures and globally known, trustworthy Certification Authorities (CAs) that issue certificates. The recipient's mail software verifies the sender's certificate to find the sender's public key, which is then used to verify e-mail signed by the sender. In order to trust the legitimacy of e-mail signatures, the recipient must trust the CA's certificate-issuance procedures. There are three classes of certificates. The classes and issuance procedures are more or less the same for all CA companies that issue certificates directly to individuals, e.g., Verisign, Globalsign, and Thawte.

    Class-1 certificates have online processes for enrollment application and certificate retrieval. There is no real identity check, and it is possible to use a bogus name -- but the PIN sent by e-mail to complete the application at least connects the applicant to an e-mail address.

    Class-2 certificates are more secure than class-1. CAs issue them after some online and offline controls: they automatically check the applicant's identity and address against the database of a third party, such as a credit-card company or DMV. As Schneier and Ellison note in their column ``Risks of PKI: Secure E-Mail'' (Comm. ACM 43, 1, January 2000), it is possible to obtain fake certificates via this online method simply by stealing someone's private information. To reduce the likelihood of impersonation, CAs use a postal service for identity verification and/or confirmation.

    Class-3 certificates require in-person presence for strong identity control prior to issuance by CAs, so they are still more secure.

    As usually used in S/MIME, class-1 certificates can easily mislead users. The recipient's e-mail program verifies the signature over a signed message using the sender's class-1 certificate. Because the information in the e-mail message matches that in the certificate, the e-mail client accepts the signature as valid -- but it is merely taking the sender's word for the identity. With a dishonest sender, the spurious verification is garbage-in, gospel-out. The only real assurance the signature gives is that the message might have been sent by someone with access to the e-mail address specified in the message, but e-mail programs do not make this clear. An average user thinks that a class-1 certificate provides identity verification, which is not true. This is neither a bug nor a one-time security flaw. It is exactly how the system works.

    CA companies are, of course, aware of this, and put appropriate disclaimers within their Certificate Practice Statements (CPS) and class-1 certificates. However, such disclaimers must be read and interpreted by the verifiers. Who would spend time reading these details when the e-mail program says that the message has been signed? The average Internet user isn't an experienced security technician.

    Some CA companies, like Globalsign, don't include the certificate holder's name in class-1 certificates. This is a good approach, but not sufficient. A message signed with such a class-1 certificate would still be verified by the e-mail programs. People who don't read the disclaimers also won't read a lack-of-identification notice. Worse, this lets a sender use the same certificate to impersonate multiple persons.

    If you receive an e-mail message without a signature, you might be wary -- but are likely to take a signed message at face value. Class-1 certificates, in that respect, provide vulnerability in the name of security.

    The verifier should check the level of assurance given in a certificate. Perhaps e-mail programs should be designed to help verifiers by giving clear and direct warnings specifying the exact level of identity validation associated with the certificate. If a class-1 certificate is used, the program should display a box saying that the sender's identity hasn't been validated.

    Certificate holders as well as verifiers must be aware of the fact that class-1 certificates don't certify real identities. They have to use class-3 certificates for this.

    Certificate classes were invented to serve the security-vs-convenience tradeoff. Class-3 certificates have a good level of identity check for personal authentication, but CA companies should still promote class-1 and class-2 certificates for the users who need the convenience of online processing. Refusing to provide them would lose the CAs too many customers. We believe that class-1 certificates will gradually disappear as certificate use reaches maturity and as people become more conscious of the limitations of class-1 certificates.

    Albert Levi (levi@ece.orst.edu) is a postdoctoral research associate at the Information Security Lab, Oregon State University. Çetin Kaya Koç (koc@ece.orst.edu) is a professor of Electrical & Computer Engineering at OSU.

    ========================================================

    Inside Risks 133 CACM 44, 7, July 2001

    Learning from Experience

    Jim Horning

    Despite a half-century of practice, a distressingly large portion of today's software is over budget, behind schedule, bloated, and buggy. As you know, all four factors generate risks, and bugs can be life-critical. Our reach continues to exceed our grasp. While hardware has grown following Moore's Law, software seems to be stuck with Gresham's Law. Most providers studiously avoid taking any responsibility for the software they produce.

    These observations are not new. They were eloquently presented at the famous 1968 NATO conference for which the term ``software engineering'' was coined. (It was ``deliberately chosen as being provocative, in implying the need for ... the types of theoretical foundations and practical disciplines, that are traditional in the established branches of engineering.'') But many of today's programmers and managers were not even born in 1968, and most of them probably got their training after the conference proceedings (Software Engineering: Concepts and Techniques, P. Naur, B. Randell, and J.N. Buxton (eds.), Petrocelli/Charter, 1976) went out of print.

    For those who care about software, wonder why it's in such bad shape, and want to do something about it, I prescribe the study of both the current literature and the classics. It is not enough to learn from your own experience; you should learn from the experiences of others. ``Those who cannot remember the past are condemned to repeat it.'' (George Santayana)

    I have long recommended the book The Mythical Man-Month, by Frederick P. Brooks, Jr., Addison-Wesley, 2nd edition, July 1995. It is a product of both bitter experience (``It is a very humbling experience to make a multimillion-dollar mistake.'') and careful reflection on that experience. It distills much of what was learned about management in the first quarter-century of software development. This book has stayed continuously in print since 1975, with a new edition in 1995. It is still remarkably relevant to managing software development.

    Now there is another book I would put beside it as a useful source of time-tested advice. Software Fundamentals: Collected Papers by David L. Parnas (Daniel M. Hoffman and David M. Weiss, eds., with a foreword by J. Bentley, Addison-Wesley, 2001) is more technical and less management-oriented, but equally thought-provoking. In one volume, it covers many risks-oriented topics in depth.

    Parnas has been writing seminal and provocative papers about software and its development for more than 30 years, based on original research, observation, and diligent efforts to put theory into practice, often in risky systems such as avionics and nuclear reactor control. Software Fundamentals collects 33 of these papers, selected for their enduring messages. It includes such classics as ``On the Criteria to Be Used in Decomposing Systems into Modules''; ``On the Design and Development of Program Families''; ``Designing Software for Ease of Extension and Contraction''; ``A Rational Design Process: How and Why to Fake It''; and ``Software Engineering: An Unconsummated Marriage''. It also has some lesser-known gems, such as ``Active Design Reviews: Principles and Practices'' and ``Software Aging''. Even if you remember these papers, it is worth refreshing your memory.

    The papers were written to stand alone. Each has a new introduction, discussing its historical and modern relevance. Thus, readers can browse the papers in just about any order, choosing those that catch their interest. However, this is a book where browsing can easily turn to serious study; the editors' arrangement provides an orderly sequence for reading.

    Whether browsing or studying this book, you'll be struck by how much of today's ``conventional wisdom'' about software was introduced (or championed very early) by Parnas. Equally surprising is the number of his good ideas that have still not made their way into current practice. Anyone who cares about software and risks should ask, Why?

    Parnas is never dull. You won't agree with everything he says, and he'd probably be disappointed if you did. Pick something he says with which you disagree (preferably something you think is ``obviously wrong''), and try to construct a convincing theoretical or practical counter-argument. You'll probably find it harder than you expect, and you'll almost surely learn something worthwhile when you discover the source of your disagreement. Then, pick one of Parnas's good ideas that isn't being used where you work, and try to figure out why it isn't. That could inspire you to write a new column.

    Jim Horning (Horning@acm.org) is Director of the Strategic Technologies and Architectural Research Laboratory (STAR Lab) of InterTrust Technologies Corporation. (He wrote introductions for two of the papers in the Parnas anthology, but doesn't get any royalties.) He started programming in 1959; his long-term interest is the mastery of complexity.

    ========================================================

    Inside Risks 132, CACM 44, 6, June 2001

    PKI: A Question of Trust and Value

    Richard Forno and William Feinbloom

    On March 22, 2001, Microsoft issued a Security Bulletin (MS01-017) alerting the Internet community that two digital certificates were issued in Microsoft's name by VeriSign (the largest Digital Certificate company) to an individual -- an impostor -- not associated with Microsoft. Instantaneously, VeriSign (a self-proclaimed "Internet Trust Company") and the entire concept of Public Key Infrastructure (PKI) and digital certificates -- an industry and service based on implicit trust -- became the focus of an incident seriously undermining its level of trustworthiness. This incident also challenges the overall value of digital certificates.

    In theory, certificates are worthwhile to both businesses and consumers by providing a measure of confidence regarding whom they are dealing with. For example, consumers entering a bricks-and-mortar business can look around at the condition of the store, the people working there, and the merchandise offered. As desired, they can research various business references to determine the reliability and legitimacy of the business. Depending on the findings, they decide whether or not to shop there. However, with an Internet-based business, there is no easy way to determine with whom one is considering doing business. The Internet business may be a familiar name (from the "real" business world) and an Internet consumer might take comfort from that and enter into an electronic relationship with that site. Without a means to transparently verify the identity of a given Website (through digital certificates), how will they really know with whom they are dealing?

    Recall the incident involving Microsoft. The erroneously issued certificates were potentially worth a considerable amount of money had their holders attempted to distribute digitally signed software purporting to be legitimate Microsoft products. In fact, these certificates were worth much more than the "authentic" certificates issued to Microsoft because (as mentioned earlier) end-users do not have the ability to independently verify that certificates are valid. Since users can't verify the validity of certificates -- legitimate or otherwise -- the genuine Microsoft certificates are essentially worthless!

    In ``Risks of PKI: Secure E-Mail'' (Comm. ACM 43, 1, January 2000) [below], cryptanalysts Bruce Schneier and Carl Ellison note that certificates are an attractive business model with significant income potential, but that much of the public information regarding PKI's vaunted benefits is developed (and subsequently hawked) by the PKI vendors. Thus, they are skeptical of the usefulness and true security of certificates.

    As a result of how PKI is currently marketed and implemented, the only value of digital certificates today is to the PKI vendor, who is paid real money when certificates are issued. For the concept of certificates to have real value for both purchaser and end-user, there must be real-time, every-time confirmation that the presented certificate is valid, similar to how credit cards are authorized in retail stores. Unless a certificate can be verified during each and every use, its value and trustworthiness are significantly reduced.

    In the real world, when submitted for a purchase, credit cards are subjected to at least six steps of verification. The first occurs when the Point of Sale (POS) terminal contacts the credit-card issuer, which verifies that the POS terminal belongs to an authorized merchant. Then, when the customer's card information is transmitted, the issuer verifies that the card number is valid, that the card is active (not revoked, nor appearing on a list of stolen or canceled cards), and that the card balance (including the current purchase) is not over the approved limit. Finally, the merchant, after receiving approval for the transaction from the credit-card company, usually (but not always) verifies that the customer's signature on the receipt matches the signature on the card. If there is no signature on the card, the merchant may ask for another form of signed identification, sometimes even asking for photo identification.
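    The Online Certificate Status Protocol (OCSP, RFC 2560, 1999) was designed to provide exactly this kind of online, per-use check, although in practice it is rarely required or enforced. As a purely illustrative sketch -- not something proposed by the authors -- the following Python fragment shows what a fail-closed, every-time check might look like. It assumes recent versions of the third-party cryptography and requests packages; the function name and responder URL are hypothetical.

      from datetime import datetime, timezone

      import requests                                   # assumed: used to POST the OCSP request
      from cryptography import x509
      from cryptography.hazmat.primitives import hashes, serialization
      from cryptography.x509 import ocsp

      def certificate_acceptable_now(cert_pem: bytes, issuer_pem: bytes, responder_url: str) -> bool:
          """Accept a certificate only if it is inside its validity window AND the
          issuing CA's online responder says, right now, that it has not been revoked."""
          cert = x509.load_pem_x509_certificate(cert_pem)
          issuer = x509.load_pem_x509_certificate(issuer_pem)

          now = datetime.now(timezone.utc)
          if not (cert.not_valid_before_utc <= now <= cert.not_valid_after_utc):
              return False                              # expired or not yet valid

          # The per-use check: ask the CA's responder about this specific certificate.
          request = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
          reply = requests.post(responder_url,
                                data=request.public_bytes(serialization.Encoding.DER),
                                headers={"Content-Type": "application/ocsp-request"})
          response = ocsp.load_der_ocsp_response(reply.content)
          return (response.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL
                  and response.certificate_status == ocsp.OCSPCertStatus.GOOD)

    Even this check merely shifts the trust to the responder and the channel to it; the column's broader point -- that end-users have no independent basis for that trust -- still applies.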

    The Schneier-Ellison article and recent real-world events demonstrate that a system of robust, mutual and automatic authentication, checks-and-balances, and active, ongoing cross-checks between all parties involved is necessary before PKI can be considered a secure or "trusted" concept of identification. Without such features, certificates simply become a few bits of data with absolutely no value to anyone but the PKI vendor.

    Without effective revisions to the current process of generating and authenticating new and existing certificate holders, the concept of PKI as a tool providing ``Internet trust'' will continue to be a whiz-bang media buzzword for the PKI industry, full of the sound and fury of marketing dollars, but, in reality, securing nothing.

    Note: See http://www.infowarrior.org/articles/2001-01.html for a more detailed discussion: A Matter of Trusting Trust: Why Current Public-Key Infrastructures are a House of Cards.

    ========================================================

    Inside Risks 131, CACM 44, 5, May 2001

    Be Seeing You!

    Lauren Weinstein

    You get up to the turnstile at a sporting event and learn that you won't be permitted inside unless you provide a blood sample for instant DNA analysis, so that you can be compared against a wanted criminal database. Thinking of that long overdue library book, you slink away rather than risk exposure.

    Farfetched? Sure, today. But tomorrow, a similar scenario could actually happen, except that you'll probably never even know that you're being scanned. True, overdue library books probably won't be a high priority, and we should all of course obey the rules, return those books, and pay any fines! But there's actually a range of extremely serious risks from the rapid rise of biometric and tracking technologies in a near void of laws and regulations controlling their use, and abuse.

    There was an outcry when it was revealed that patrons at the 2000 Super Bowl game (some critics have dubbed it the "Snooper Bowl") were unknowingly scanned by a computerized system that tried matching their faces against those of wanted criminals, even though this sort of technology has long been used in venues such as some casinos and ATMs. The accuracy of these devices appears quite limited in most cases today, but they will get better. Video cameras are becoming ubiquitous in public, and the potential of these systems to provide the basis for detailed individual dossiers is significant and rapidly expanding.

    Other technologies will soon provide even better identification and tracking. We constantly shed skin and other materials that could be subjected to DNA matching; automated systems to vastly speed this process for immediate use are under development. Will "planting" someone else's DNA become the future's version of a criminal "frameup"? DNA concerns have already found their way into the popular media -- the 1997 film Gattaca postulated a nightmarish DNA-obsessed society. Even without biometrics, the ability of others to track our movements is growing with alarming speed. There will be wide use of cell-phone location data (which is generally available whenever your cell phone is on, even if not engaged in a call). The availability of this data (originally mandated by the FCC for laudable 911 purposes) is being rapidly explored by both government agencies and commercial firms.

    It's often argued that there's no expectation of privacy in public places. But by analogy this suggests that it would be acceptable for every one of us to be followed around by a snoop with a notepad, who then provides his notes regarding our movements to the government and/or any commercial parties willing to pay his fees. As a society, would we put up with this? Should the fact that technology could allow such mass tracking to be done surreptitiously somehow make it more acceptable?

    Proponents of these systems tend to concentrate on scenarios that most of us would agree are valuable, like catching child molesters and murderers, or finding a driver trapped in a blizzard. But the industry shows much less enthusiasm for possible restrictions to prevent the inappropriate or trivialized use of such data. An infrastructure that could potentially track the movements of its citizens, both in realtime and retrospectively via archived data, could become a powerful tool for oppression by some governments less enlightened than our own is today. Detailed automated monitoring of the citizenry could probably result in a dramatic reduction in all manner of infractions, from the most minor to the very serious. Such monitoring would also fundamentally alter our society in ways that most of us would find abhorrent.

    Even in current civil and commercial contexts the potential for abuse is very real. Lawyers in divorce cases would love to get hold of data detailing where that supposedly errant husband has been. Insurance companies could well profit from knowledge about where their customers go and what sorts of potentially risky activities they enjoy. Such data in the wrong hands could help enable identity fraud, or far worse.

    We've already seen automated toll collection records (which tend to be kept long after they're needed for their original purpose) drawn into legal battles concerning persons' whereabouts. Cell-phone location information (even when initially collected with the user's consent in some contexts) can become fodder for all manner of commercial resale, data-matching, and long-term archival efforts, with few (if any) significant restrictions on such applications or how the data collected can be later exploited.

    It would be wrong to fault technology itself for introducing this array of risks to privacy. The guilt lies with our willingness to allow technological developments (and the vested interests behind them in many cases) to skew major aspects of our society without appropriate consideration being given to society's larger goals and needs. If we're unwilling to tackle that battle, we'll indeed get what we deserve.

    Lauren Weinstein (lauren@vortex.com) moderates the PRIVACY Forum (http://www.vortex.com/privacy). He also co-founded People For Internet Responsibility (http://www.pfir.org).

    ========================================================

    Inside Risks 130, CACM 44, 4, April 2001

    Cyber Underwriters Lab?

    Bruce Schneier

    Underwriters Laboratories (UL) is an independent testing organization created in 1893, when William Henry Merrill was called in to find out why the Palace of Electricity at the Columbian Exposition in Chicago kept catching on fire (which is not the best way to tout the wonders of electricity). After making the exhibit safe, he realized he had a business model on his hands. Eventually, if your electrical equipment wasn't UL certified, you couldn't get insurance.

    Today, UL rates all kinds of equipment, not just electrical. Safes, for example, are rated based on time to crack and strength of materials. A ``TL-15'' rating means that the safe is secure against a burglar who is limited to safecracking tools and 15 minutes' working time. These ratings are not theoretical; employed by UL, actual hotshot safecrackers take actual safes and test them. Applying this sort of thinking to computer networks -- firewalls, operating systems, Web servers -- is a natural idea. And the newly formed Center for Internet Security (no relation to UL) plans to implement it.

    This is not a good idea, not now, and possibly not ever. First, network security is too much of a moving target. Safes are easy; safecracking tools don't change much. Not so with the Internet. There are always new vulnerabilities, new attacks, new countermeasures; any rating is likely to become obsolete within months, if not weeks.

    Second, network security is much too hard to test. Modern software is obscenely complex: there is an enormous number of features, configurations, implementations. And then there are interactions between different products, different vendors, and different networks. Testing any reasonably sized software product would cost millions of dollars, and wouldn't guarantee anything at the end. Testing is inherently incomplete. And if you updated the product, you'd have to test it all over again.

    Third, how would we make security ratings meaningful? Intuitively, I know what it means to have a safe rated at 30 minutes and another rated at an hour. But computer attacks don't take time in the same way that safecracking does. The Center for Internet Security talks about a rating from 1 to 10. What does a 9 mean? What does a 3 mean? How can ratings be anything other than binary: either there is a vulnerability or there isn't?

    The moving-target problem particularly exacerbates this issue. Imagine a server with a 10 rating; there are no known weaknesses. Someone publishes a single vulnerability that allows an attacker to easily break in. Once a sophisticated attack has been discovered, the effort to replicate it is effectively zero. What is the server's rating then? 9? 1? How does the Center re-rate the server once it is updated? How are users notified of new ratings? Do different patch levels have different ratings?

    Fourth, how should a rating address context? Network components would be certified in isolation, but deployed in a complex interacting environment. Ratings cannot take into account all possible operating environments and interactions. It is common to have several individual ``secure'' components completely fail a security requirement when they are forced to interact with one another.

    And fifth, how does this concept combine with security practices? Today the biggest problem with firewalls is not how they're built, but how they're configured. How does a security rating take that into account, along with other people problems: users naively executing e-mail attachments, or resetting passwords when a stranger calls and asks them to?

    This is not to say that there's no hope. Eventually, the insurance industry will drive network security, and then some sort of independent testing is inevitable. But providing a rating, or a seal of approval, doesn't have any meaning right now.

    Ideas like this are part of the Citadel model of security, as opposed to the Insurance model. The Citadel model basically says, ``If you have this stuff and do these things, then you'll be safe.'' The Insurance model says, ``Inevitably things will go wrong, so you need to plan for what happens when they do.'' In theory, the Citadel model is a much better model than the pessimistic, fatalistic Insurance model. But in practice, no one has ever built a citadel that is both functional and dependable.

    The Center for Internet Security has the potential to become yet another ``extort-a-standard'' body, which charges companies for a seal of approval. This is not to disparage the motives of those behind the Center; you can be an ethical extortionist with completely honorable intentions. What makes it extortion is the decrement from not paying. If you don't have the ``Security Seal of Approval'', then (tsk, tsk) you're just not concerned about security.

    Bruce Schneier, CTO of Counterpane Internet Security, Inc. (a managed-security monitoring firm), 3031 Tisch Way, 100 Plaza East, San Jose, CA 95128, 1-408-556-2401, is author of Secrets and Lies: Digital Security in a Networked World (Wiley, 2000). http://www.counterpane.com. Bruce also writes a monthly Crypto-Gram newsletter http://www.counterpane.com/crypto-gram.html.

    ========================================================

    Inside Risks 129, CACM 44, 3, March 2001

    Computers: Boon or Bane?

    Peter G. Neumann and David L. Parnas

    Predicting the long-term effects of computers is both difficult and easy: we won't get it right, but we won't see ourselves proven wrong. Rather than try, we present some alternatives allowing readers to make their own predictions.

    * Computers play an increasing role in enabling and mediating communication between people. They have great potential for improving communication, but there is a real risk that they will simply overload us, keeping us from really communicating. We already receive far more information than we can process. A lot of it is noise. Will computers help us to communicate or will they interfere?

    * Computers play an ever increasing role in our efforts to educate our young. Many countries want to have computers in every school, or even one on every desk. Computers can help in certain kinds of learning, but it takes time to learn the arcane set of conventions that govern their use. Even worse, many children become so immersed in the cartoon world created by computers that they accept it as real, losing interest in other things. Will computers really improve our education, or will children be consumed by them?

    * Computers play an ever increasing role in our war-fighting. Most modern weapons systems depend on computers. Computers also play a central role in military planning and exercises. Perhaps computers will eventually do the fighting and protect human beings. We might even hope that wars would be fought with simulators, not weapons. On the other hand, computers in weapon systems might simply make us more efficient at killing each other and impoverishing ourselves. Will computers result in more slaughter or a safer world?

    * Information processing can help to create and preserve a healthy environment. Computers can help to reduce the energy and resources we expend on such things as transportation and manufacturing, as well as improve the efficiency of buildings and engines. However, they also use energy and their production and disposal create pollution. They seem to inspire increased consumption, creating what some ancient Chinese philosophers called ``artificial desires''. Will computers eventually improve our environment or make it less healthy?

    * By providing us with computational power and good information, computers have the potential to help us think more effectively. On the other hand, bad information can mislead us, irrelevant information can distract us, and intellectual crutches can cripple our reasoning ability. We may find it easier to surf the web than to think. Will computers ultimately enhance or reduce our ability to make good decisions?

    * Throughout history, we have tried to eliminate artificial and unneeded distinctions among people. We have begun to learn that we all have much in common -- men and women, black and white, Russians and Americans, Serbs and Croats, .... Computers have the power to make borders irrelevant, to hide surface differences, and to help us to overcome long-standing prejudices. However, they also encourage the creation of isolated, antisocial, groups that may, for example, spread hatred over networks. Will computers ultimately improve our understanding of other peoples or lead to more misunderstanding and hatred?

    * Computers can help us to grow more food, build more houses, invent better medicines, and satisfy other basic human needs. They can also distract us from our real needs and make us hunger for more computers and more technology, which we then produce at the expense of more essential commodities. Will computers ultimately enrich us or leave us poorer?

    * Computers can be used in potentially dangerous systems to make them safer. They can monitor motorists, nuclear plants, and aircraft. They can control medical devices and machinery. Because they don't fatigue and are usually vigilant, they can make our world safer. On the other hand, the software that controls these systems is notoriously untrustworthy. Bugs are not the exception; they are the norm. Will computers ultimately make us safer or increase our level of risk?

    Most of us are so busy advancing and applying technology that we don't look either back or forward. We should look back to recognize what we have learned about computer-related risks. We must look forward to anticipate the future effects of our efforts, including unanticipated combinations of apparently harmless phenomena. Evidence over the past decade of Inside Risks and other sources suggests that we are not responding adequately to that challenge. Humans have repeatedly demonstrated our predilection for short-term optimization without regard for long-term costs. We must strive to make sure that we maximize the benefits and minimize the harm. Among other things, we must build stronger and more robust computer systems while remaining acutely aware of the risks associated with their use.

    Professor David Lorge Parnas, P.Eng., is Director of the Software Engineering Programme, Department of Computing and Software, Faculty of Engineering, McMaster University, Hamilton, Ontario, Canada L8S 4L7. (PGN is PGN.)

    ========================================================

    Inside Risks 128, CACM 44, 2, February 2001

    What To Know About Risks

    Peter G. Neumann

    In this column, we assert that deeper knowledge of fundamental principles of computer technology and their implications will be increasingly essential in the future, for a wide spectrum of individuals and groups, each with its own particular needs. Our lives are becoming ever more dependent on understanding computer-related systems and the risks involved. Although this may sound like a motherhood statement, wise implementation of motherhood is decidedly nontrivial -- especially with regard to risks.

    Computer scientists who are active in creating the groundwork for the future need to understand system issues in the large, including the practical limitations of theoretical approaches. System designers and developers need broader and deeper knowledge -- including those responsible for the human interfaces used in inherently riskful operational environments that must be trusted; interface design is often critical. Particularly in those systems that are not wisely conceived and implemented, operators and users of the resulting systems also need an understanding of certain fundamentals. Corporation executives need an understanding of various risks and countermeasures. In each case, our knowledge must increase dramatically over time, to reflect rapid evolution. Fortunately, the fundamentals do not change as quickly as the widget of the day, which suggests an important emphasis for education and ongoing training.

    An alternative view suggests that many technologies can be largely hidden from view, and that people need not understand (or indeed, might prefer not to know) the inner workings. David Parnas's early papers on abstraction, encapsulation, and information hiding are important in this regard. Although masking complexity is certainly possible in theory, in practice we have seen too many occasions (for examples, see the RISKS archives) in which inadequate understanding of the exceptional cases resulted in disasters. The complexities arising in handling exceptions apply ubiquitously, to defense, medical systems, transportation systems, personal finance, security, to our dependence on critical infrastructures that can fail -- and to anticipating the effects of such exceptions in advance.
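    Parnas's notion of information hiding can be illustrated in a few lines. The toy Python class below (ours, purely illustrative, with invented names) hides its internal representation behind a small interface, and also shows how an unexamined exceptional case -- here, silent truncation -- can surprise every caller that trusted the abstraction, which is precisely the kind of masked detail that causes trouble in practice.

      class FlightLevelStore:
          """Information hiding in the Parnas sense: callers set and get an altitude in feet
          and never learn that the value is stored internally in whole flight levels."""
          def __init__(self):
              self._flight_level = 0            # hidden representation: hundreds of feet

          def set_altitude_feet(self, feet: int) -> None:
              self._flight_level = feet // 100  # silent truncation: the hidden exceptional case

          def altitude_feet(self) -> int:
              return self._flight_level * 100

      store = FlightLevelStore()
      store.set_altitude_feet(1099)
      print(store.altitude_feet())              # prints 1000, not 1099: the hidden detail leaks as a surprise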

    The importance of understanding the idiosyncrasies of mechanisms and human interfaces, and indeed the entire process, is illustrated by the 2000 Presidential election -- with respect to hanging chad, dimpled chad (due to stuffed chad slots), butterfly ballot layouts, inherent statistical realities, and the human procedures underlying voter registration and balloting. Clearly, the election process is problematic, including the technology and the surrounding administration that must be considered as part of the overall system. Looking into the future, a new educational problem will arise if preferential balloting becomes more widely adopted, whereby preferences for competing candidates are prioritized and the votes for the lowest-vote candidate are iteratively reallocated according to the specified priorities. This concept has many merits, although it would certainly further complicate ballot layouts!
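    For readers unfamiliar with the mechanics, the following short Python sketch (ours, not part of the original column) illustrates one common form of preferential balloting, instant-runoff counting: ballots rank candidates, and the candidate with the fewest first-choice votes is repeatedly eliminated, with those ballots transferring to their next surviving choice. Real election rules add complications (tie-breaking, exhausted ballots, multi-seat variants) that are deliberately omitted here.

      from collections import Counter

      def instant_runoff_winner(ballots):
          """Each ballot is a list of candidate names, most preferred first.
          Repeatedly eliminate the candidate with the fewest first-choice votes,
          transferring those ballots to their next surviving choice."""
          ballots = [list(b) for b in ballots if b]
          while True:
              counts = Counter(b[0] for b in ballots if b)
              total = sum(counts.values())
              leader, leader_votes = counts.most_common(1)[0]
              if leader_votes * 2 > total or len(counts) == 1:
                  return leader                         # majority reached, or only one candidate left
              loser = min(counts, key=counts.get)       # fewest first-choice votes (ties broken arbitrarily)
              ballots = [[c for c in b if c != loser] for b in ballots]

      # Example: once "C" is eliminated, that voter's ballot transfers to "A", who then wins 3-2.
      print(instant_runoff_winner([["A", "B"], ["B", "A"], ["B"], ["C", "A"], ["A"]]))   # -> A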

    Thus, computer-related education is vital for everyone. The meaning of the Latin word ``educere'' (to educate) is literally ``to lead forth''. However, in general, many people do not have an adequate perception of the risks and their potential implications. When, for example, the media tell us that air travel is safer than automobile travel (on a passenger-mile basis, perhaps), the comparison may be less important than the concept that both could be significantly improved. When we are told that electronic commerce is secure and reliable, we need to recognize the cases in which it isn't.

    With considerable foresight and wisdom, Vint Cerf has repeatedly said that ``The Internet is for Everyone.'' The Internet can provide a fertile medium for learning for anyone who wants to learn, but it also creates serious opportunities for the unchecked perpetuation of misinformation and counterproductive learning that should eventually be unlearned.

    In general, we learn what is most valuable to us from personal experience, not by being force-fed lowest-common-denominator details. In that spirit, it is important that education, training, and practical experiences provide motivations for true learning. For technologists, education needs to have a pervasive systems orientation that encompasses concepts of software and system engineering, security, and reliability, as well as stressing the importance of suitable human interfaces. For everyone else, there needs to be much better appreciation of the sociotechnical and economic implications -- including the risks issues. Above all, a sense of vision of the bigger picture is what is most needed.

    For previous columns in this space relating to education, see February 1996 (research), August 1998 (computer science and software engineering), and October 1998 (risks of E-education), the first two by Peter Denning, the first and third by PGN. PGN's open notes for a Fall 1999 University of Maryland course on survivable, secure, reliable systems and networks, and a supporting report are on his Web site: http://www.CSL.sri.com/neumann

    ========================================================

    Inside Risks 127, CACM 44, 1, January 2001

    System Integrity Revisited

    Rebecca T. Mercuri and Peter G. Neumann

    Consider a computer product specification with data input, tabulation, reporting, and audit capabilities. The read error rate must not exceed one in a million, although the input device is allowed to reject any data that it considers to be marginal. Although the system is intended for use in secure applications, only functional (black box) acceptance testing has been performed, and the system does not conform to even the most minimal security criteria.

    In addition, the user interface (which changes periodically) is designed without ergonomic considerations. Input error rates are typically around 2%, although experience has indicated errors in excess of 10% under certain conditions. This is not considered problematic because errors are thought to be distributed evenly throughout the data. The interface provides essentially no user feedback as to the content of input selections or to the correctness of the inputs, even though variation from the proper input sequence will void the user data.

    Furthermore, multiple reads of the same user data set often produce different results, due to storage media problems. The media contain a physical audit trail of user activity that can be manually perused. There is an expectation that this audit trail should provide full recoverability for all data, including information lost through user error. (In practice, the audit trail is often disregarded, even when the user error rate could yield a significant difference in the reported results.)

    We have just described the balloting systems used by over a third of the voters in the United States. For decades, voters have been required to use inherently flawed punched-card systems, which are misrepresented as providing 100% accuracy (``every vote counts'') -- even though this assertion is widely known to be patently untrue. Lest you think that other voting approaches are better, mark-sense systems suffer from many of the same problems described above. Lever-style voting machines offer more security, auditability, and a significantly better user interface, but these devices have other drawbacks -- including the fact that no new ones have been manufactured for decades.

    Erroneous claims and product failures leading to losses are the basis of many liability suits, yet (up to now) candidates have been dissuaded from contesting election results through the legal system. Those who have lost their vote through faulty equipment also have little or no recourse; there is no recognized monetary or other value for the right of suffrage in any democracy. With consumer product failures, many avenues such as recalls and class action suits are available to ameliorate the situation -- but these are not presently applicable to the voting process. As recent events have demonstrated, the right to a properly counted private vote is an ideal rather than a guarantee.

    The foreseeable future holds little promise for accurate and secure elections. Earlier columns here [November 1990, 1992, 1993, 2000, and June 2000] and Rebecca Mercuri's doctoral thesis (http://www.notablesoftware.com/evote.html) describe a multitude of problems with direct electronic balloting (where audit trails provide no more security than the fox guarding the henhouse) and Internet voting (which facilitates tampering by anyone on the planet, places trust in the hands of an insider electronic elite, and increases the likelihood of privacy violations). Flawed though they may be, the paper-based and lever methods at least provide a visible auditing mechanism that is absent in fully automated systems.

    In their rush to prevent ``another Florida'' in their own jurisdictions, many legislators and election officials mistakenly believe that more computerization offers the solution. All voting products are vulnerable due to the adversarial nature of the election process, in addition to technical, social, and sociotechnical risks common to all secure systems. Proposals for universal voting machines fail to address the sheer impossibility of creating a ubiquitous system that could conform to each of the varying and often conflicting election laws of the individual states. Paper-based systems are not totally bad; some simple fixes (such as printing the candidates' names directly on the ballot and automated validity checks before ballot deposit) could go a long way in reducing user error and improving auditability.

    As the saying goes, ``Those who cannot remember the past are condemned to repeat it.'' If the computer science community remains mute and allows unauditable and insecure voting systems to be procured by our communities, then we abdicate what may be our only opportunity to ensure the democratic process in elections. Government officials need your help in understanding the serious risks inherent in computer-related election systems. Now is the time for all good computer scientists to come to the aid of the election process.

    Contact us at mercuri@acm.org and pneumann@acm.org.

    ========================================================

    Inside Risks 126, CACM 43, 12, December 2000

    Semantic Network Attacks

    Bruce Schneier

    On August 25, 2000, Internet Wire received a forged e-mail press release seemingly from Emulex Corp., saying that the Emulex CEO had resigned and the company's earnings would be restated. Internet Wire posted the message, without verifying either its origin or contents. Several financial news services and Web sites further distributed the false information, and the stock dropped 61% (from $113 to $43) before the hoax was exposed.

    This was a devastating network attack. Despite its amateurish execution (the perpetrator, trying to make money on the stock movements, was caught in less than 24 hours), $2.54 billion in market capitalization disappeared, only to reappear hours later. With better planning, a similar attack could do more damage and be more difficult to detect. It's an illustration of what I see as the third wave of network attacks -- which will be much more serious and harder to defend against than the first two waves.

    The first wave is physical: attacks against computers, wires, and electronics. As defenses, distributed protocols reduce the dependency on any one computer, and redundancy removes single points of failure. Although physical outages have caused problems (power, data, etc.), these are problems we basically know how to solve.

    The second wave of attacks is syntactic, attacking vulnerabilities in software products, problems with cryptographic algorithms and protocols, and denial-of-service vulnerabilities -- dominating recent security alerts. We have a bad track record in protecting against syntactic attacks, as noted in previous columns here. At least we know what the problem is.

    The third wave of network attacks is semantic, targeting the way we assign meaning to content. In our society, people tend to believe what they read. How often have you needed the answer to a question and searched for it on the Web? How often have you taken the time to corroborate the veracity of that information, by examining the credentials of the site, finding alternate opinions, and so on? Even if you did, how often do you think writers make things up, blindly accept ``facts'' from other writers, or make mistakes in translation? On the political scene, we've seen many examples of false information being reported, getting amplified by other reporters, and eventually being believed as true. Someone with malicious intent can do the same thing.

    People already take advantage of others' naivete. Many old scams have been adapted to e-mail and the Web. Unscrupulous stockbrokers use the Internet to fuel ``pump and dump'' strategies. On September 6, the Securities and Exchange Commission charged 33 companies and individuals with Internet fraud, many based on semantic attacks such as posting false information on message boards. However, changing old information can also have serious consequences. I don't know of any instance of someone breaking into a newspaper's article database and rewriting history, but I don't know of any newspaper that checks, either.

    Against computers, semantic attacks become even more serious. Computer processes are rigid in the type of inputs they accept -- and they generally see far less than a human making the same decision would. Falsifying computer input can be much more far-reaching, simply because the computer cannot demand all the corroborating input that people have instinctively come to rely on. Indeed, computers are often incapable of deciding what the ``corroborating input'' would be, or how to go about using it in any meaningful way. Despite what you see in movies, real-world software is incredibly primitive when it comes to what we call ``simple common sense.'' For example, consider how stupid most Web filtering software is at deriving meaning from human-targeted content.

    Can air-traffic control systems, process-control computers, and ``smart'' cars on ``smart'' highways be fooled by bogus inputs? You once had to buy piles of books to fake your way onto The New York Times best-seller list; it's a lot easier to just change a few numbers in booksellers' databases. What about a successful semantic attack against the NASDAQ or Dow Jones databases? The people who lost the most in the Emulex hoax were the ones with preprogrammed sell orders.

    None of these attacks is new; people have long been the victims of bad statistics, urban legends, hoaxes, gullibility, and stupidity. Computer networks make it easier to launch attacks and speed their dissemination, and they allow anonymous individuals to reach vast numbers of people at almost no cost.

    In the future, I predict that semantic attacks will be more serious than physical and syntactic attacks. It's not enough to dismiss them with the cryptographic magic wands of digital signatures, authentication, and integrity. Semantic attacks directly target the human/computer interface, the most insecure interface on the Internet. Amateurs tend to attack machines, whereas professionals target people. Any solutions will have to target the people problem, not the math problem.

    Bruce Schneier is CTO of Counterpane Internet Security, Inc. References are included in the archival version of this article at http://www.csl.sri.com/neumann/insiderisks.html.

    NOTE: The conceptualization of physical, syntactic, and semantic attacks is from an essay by Martin Libicki on the future of warfare. http://www.ndu.edu/ndu/inss/macnair/mcnair28/m028cont.html

    PFIR Statement on Internet hoaxes: http://www.pfir.org/statements/hoaxes

    Swedish Lemon Angels recipe: http://www.rkey.demon.co.uk/Lemon_Angels.htm A version of it hidden among normal recipes (I didn't do it, honest): http://www.cookinclub.com/cookbook/desserts/zestlem.html Mediocre photos of people making them (note the gunk all over the counter by the end): http://students.washington.edu/aferrel/pnt/lemangl.html

    SatireWire: How to Spot a Fake Press Release http://www.satirewire.com/features/fake_press_release.shtml

    Amazingly stupid results from Web content filtering software: http://dfn.org/Alerts/contest.htm

    See also: Bruce Schneier, Secrets and Lies: Digital Security in a Networked World, Wiley, 2000.

    Bruce Schneier, CTO, Counterpane Internet Security, Inc., 3031 Tisch Way, 100 Plaza East, San Jose, CA 95128. Phone: 1-408-556-2401, Fax: 1-408-556-0889. Free Internet security newsletter. See: http://www.counterpane.com

    ========================================================

    Inside Risks 125, CACM 43, 11, November 2000

    Voting Automation (Early and Often?)

    Rebecca Mercuri

    Computerization of manual processes often creates opportunities for social risks, despite decades of experience. This is clear to everyone who has waded through deeply nested telephone menus and then been disconnected. Electronic voting is an area where automation seems highly desirable but fails to offer significant improvements over existing systems, as illustrated by the following examples.

    Back in 1992, when I wrote here [1] about computerized vote tabulation, a $60M election system intended for purchase by New York City had come under scrutiny. Although the system had been custom designed to meet the City's stringent and extensive criteria, numerous major flaws (particularly those related to secure operations) were noted during acceptance testing and review by independent examiners. The City withheld its final purchase approval and legal wranglings ensued. This summer, the contract was finally cancelled, with the City agreeing to pay for equipment and services it had received; all lawsuits were dropped, thus ending a long and costly process without replacing the City's bulky arsenal of mechanical lever machines.

    Given NYC's lack of success in obtaining a secure, accurate, reliable voting system, built from the ground up, operating in a closed network environment, despite considerable time, resources, expertise and expenditures, it might seem preposterous to propose the creation of a system that would enable ``the casting of a secure and secret electronic ballot transmitted to election officials using the Internet'' [2]. Internet security features are largely add-ons (firewalls, encryption), and problems are numerous (denial-of-service attacks, spoofing, monitoring). (See [3,4].) Yet this does not seem to dissuade well-intentioned officials from promoting the belief that on-line voting is around the corner, and that it will resolve a wide range of problems from low voter turnout to access for the disabled.

    The recent California Task Force report suggested I-voting could be helpful to ``the occasional voter who neglects to participate due to a busy schedule and tight time constraints'' [2]. Its convenient access promise is vacuous, in that the described authorization process requires pre-election submission of a signed I-voting request, and subsequent receipt of a password, instructions, and access software on CD-ROM. Clearly, it would be far easier to mail out a conventional absentee ballot that could be quickly marked and returned, rather than requiring each voter to reboot a computer in order to install ``a clean, uncorrupted operating system and/or a clean Internet browser'' [2].

    Countless I-voting dotcoms have materialized recently, each hoping to land lucrative contracts in various aspects of election automation. Purportedly an academic project at Rensselaer Polytech, voteauction.com was shut down following threats of legal action for violating New York State election laws [5]. It has since been sold and reopened from an off-shore location where prosecution may be circumventable. Vote-selling combined with Internet balloting provides a powerful way to throw an election to the highest bidder, but this is probably not what election boards have in mind for their modernized systems. The tried-and-true method of showing up to vote where your neighbors can verify your existence is still the best approach, at least until biometric identification is reliable and commonplace.

    While jurisdictions rush to obtain new voting systems, protective laws have lagged behind. Neither the Federal Election Commission nor any State agencies have required that computerized election equipment and software comply with existing government standards for secure systems. The best of these, the ISO Common Criteria, addresses matters important to voting such as privacy and anonymity; although it fails to delineate areas in which satisfaction of some requirements would preclude implementation of others, its components should not be ignored by those who are establishing minimum certification benchmarks [6].

    Computerization of voting systems can have costly consequences, not only in time and money, but also in the much grander sense of further eroding confidence in the democratic process. ``If it ain't broke, don't fix it'' might be a Luddite battle cry, but it may also be prudent where the benefits of automation are still outweighed by the risks.

    1. R. Mercuri, ``Voting-machine risks,'' CACM 35, 11, November 1992.
    2. California Internet Voting Task Force, ``A report on the feasibility of Internet voting,'' January 2000. http://www.ss.ca.gov/executive/ivote/home.htm
    3. L. Weinstein, ``Risks of Internet voting,'' CACM 43, 6, June 2000.
    4. M.A. Blaze and S.M. Bellovin, ``Tapping on my network door,'' CACM 43, 10, October 2000.
    5. M.K. Anderson, ``Close vote? You can bid on it,'' August 17, 2000, and ``Voteauction bids the dust,'' August 22, 2000, Wired News.
    6. This is discussed at length in my Ph.D. Dissertation (see www.notablesoftware.com/evote.html).
    Rebecca Mercuri (mercuri@acm.org) defended her doctoral thesis on this subject at the University of Pennsylvania on 27 October 2000. She is a member of the Computer Science faculty at Bryn Mawr College, and an expert witness in forensic computing.

    [Added note: See also ``Corrupted Polling'', Inside Risks 41, CACM 36, 11, November 1993, and Voting-Machine Risks, Inside Risks 29, CACM 35, 11, November 1992, which I have added to the end of this partial collection in the light of recent election considerations. PGN]

    ========================================================

    Inside Risks 124, CACM 43, 10, October 2000

    Tapping On My Network Door

    Matt Blaze and Steven M. Bellovin

    Readers of this column are familiar with the risks of illegal monitoring of Internet traffic. Less familiar, but perhaps just as serious, are the risks introduced when law enforcement taps that same traffic legally.

    Ironically, as insecure as the Internet may be in general, monitoring a particular user's traffic as part of a legal wiretap isn't so simple, with failure modes that can be surprisingly serious. Packets from one user are quickly mixed in with those of others; even the closest thing the Internet has to a telephone number --- the ``IP address'' --- often changes from one session to the next and is generally not authenticated. An Internet wiretap by its nature involves complex software that must reliably capture and reassemble the suspect's packets from a stream shared with many other users. Sometimes an Internet Service Provider (ISP) is able to provide a properly filtered traffic stream; more often, there is no mechanism available to separate out the targeted packets.

    Enter Carnivore. If an ISP can't provide exactly the traffic covered by some court order, the FBI offers its own packet sniffer, a PC running special software designed especially for wiretap interception. The Carnivore computer (so named, according to press reports, for its ability to ``get to the meat'' of the traffic) is connected to the ISP's network segment expected to carry the target's traffic. A dial-up link allows FBI agents to control and configure the system remotely.

    Needless to say, any wiretapping system (whether supplied by an ISP or the FBI) relied upon to extract legal evidence from a shared, public network link must be audited for correctness and must employ strong safeguards against failure and abuse. The stringent requirements for accuracy and operational robustness provide especially fertile ground for many familiar risks.

    First, there is the problem of extracting exactly (no more and no less) the intended traffic. Standard network monitoring techniques provide only an approximation of what was actually sent or received by any particular computer. For wiretaps, the results could be quite misleading. If a single packet is dropped, repeated, or miscategorized (common occurrences in practice), an intercepted message could be dramatically misinterpreted. Nor is it always clear ``who said what.'' Dynamic IP addresses make it necessary to capture and interpret accurately not only user traffic, but also the messages that identify the address currently in use by the target. Furthermore, it is frequently possible for a third party to alter, forge, or misroute packets before they reach the monitoring point; this usually cannot be detected by the monitor. Correctly reconstructing higher-level transactions, such as electronic mail, adds still more problems.
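    As a concrete illustration of how coarse such interception is, the sketch below (ours -- Carnivore's internals were not public) uses the Python scapy library to capture traffic matching a single, hypothetical IP address. Everything the monitor collects rests on the assumption that this address corresponds to the suspect for the duration of the capture, which is precisely the assumption the preceding paragraph questions.

      from scapy.all import sniff                       # requires scapy and packet-capture privileges

      TARGET_IP = "10.0.0.42"                           # hypothetical address named in a court order

      def record(pkt):
          # A real intercept would write to tamper-evident storage with an audit trail;
          # here we merely print a one-line summary of each captured packet.
          print(pkt.summary())

      # The BPF filter selects packets to or from TARGET_IP on the shared segment.
      # Dropped, duplicated, spoofed, or misrouted packets -- and any reassignment of a
      # dynamic address -- silently change what "the suspect's traffic" means.
      sniff(filter="host " + TARGET_IP, prn=record, store=False)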

    The general-purpose nature of Carnivore entails its own risks. ISPs vary greatly in their architecture and configuration; a new component that works correctly in one might fail badly --- silently or destructively --- in another. Carnivore's remote control features are of special concern, given the potential for damage should a criminal gain control of an installed system. ISPs are understandably reluctant to allow such devices to be installed deep within their infrastructures.

    Complicating matters further are the various kinds of authorized wiretaps, with different legal standards for each. Because Carnivore is a general purpose ``black box,'' an ISP (or a court) cannot independently verify that any particular installation has been configured to collect only the traffic for which it is legally authorized.

    Internet wiretaps raise many difficult questions, both legal and technical. The legal issues are being debated in Congress, in the courts, and in the press. The technical issues include the familiar (and tough) problems of software correctness, complex system robustness, user interfaces, audit, accountability, and security.

    Unfortunately, there's no systematic way to be sure that any system as complex and sensitive as Carnivore works as it is supposed to. A first step, the best our community has yet found for this situation, is to subject the source code and system details to wide scrutiny. Focused reviews by outside experts should be part of this process, as should opening the code to the public. While the details of particular wiretaps may properly be kept secret, there's no reason for the wiretapping mechanism to be concealed. The observation that sunshine is the best disinfectant applies at least as well to software as it does to government.

    Even if we could guarantee the correctness of software, difficult systems issues still remain. Software alone cannot ensure that the reviewed code is what is actually used, that filters and configuration files match court orders, that evidence is not tampered with, and so on.

    Ultimately, it comes down to trust --- of those who operate and control the system and of the software itself. Trusting a law enforcement agent to be honest and faithful to duty in a free society is one thing. Trusting complex, black-box software to be correct and operationally faithful to specifications, however, is quite another.

    Matt Blaze and Steven M. Bellovin are researchers at AT&T Labs in Florham Park, NJ. This column is also at http://www.crypto.com/papers/carnivore-risks.html along with other background information in the /papers directory.

    ========================================================

    Inside Risks 123, CACM 43, 9, September 2000

    Missile Defense

    Peter G. Neumann

    For evaluating the proposed U.S. national missile-defense shield, President Clinton has outlined four criteria relating to strategic value, technological and operational feasibility, cost, and impact on international stability. Strategic value is difficult to assess without considering the feasibility; if the desired results are technologically infeasible, then the strategic value may be minimal. Feasibility remains an open question, in the light of recent test difficulties and six successive failures in precursor tests of the Army's Theater High-Altitude Area Defense (THAAD), as well as intrinsic difficulties in dealing with system complexity. The cost is currently estimated at $60 billion, but how can any such estimate be realistic with so many unknowns? The impact on international stability also remains an open question, with considerable discussion domestically and internationally.

    We consider here primarily technological feasibility. One issue of great concern involves the relative roles of offense and defense, particularly the ability of the defense to differentiate between real missiles and intelligent decoys. The failed July 2000 experiment ($100 million) had only one decoy; it was an order of magnitude brighter than the real missile, to give the computer analysis a better chance of discriminating between one decoy and the one desired target. (The test failed because the second stage of the defensive missile never deployed properly; the decoy also failed to deploy. Thus, the goal of target discrimination could not be assessed.)

    Theodore Postol of MIT has pointed out that this was a very simplistic test. Realistically, decoy technology is orders of magnitude cheaper than discrimination technology. It is likely to defeat a defensive system that makes assumptions about the specific attack types and decoys that might be deployed, because those assumptions will surely be incomplete and perhaps incorrect.

    Furthermore, the testing process is always inconclusive. Complex systems fail with remarkably high probability, even under controlled conditions and even if all subsystems work adequately in isolated tests. In Edsger Dijkstra's words, ``Testing can be used to show the presence of bugs, but never to show their absence.''

    David Parnas's 1985 arguments [1] relative to President Reagan's Strategic Defense Initiative (SDI) are all still valid in the present context, and deserve to be revisited:
    1. Why software is unreliable.
    2. Why SDI would be unreliable.
    3. Why conventional software development does not produce reliable programs.
    4. The limits of software engineering methods.
    5. Artificial intelligence and the SDI.
    6. Can automatic programming solve the SDI software problem?
    7. Can program verification make the SDI software reliable?
    8. Is the SDI Office an efficient way to fund worthwhile research?

    Risks in the software development process seem to have gotten worse since 1985. (See our July 2000 column.) Many complex system developments have failed. Even when systems have emerged from the development process, they have typically been very late, way over budget, and -- perhaps most importantly -- incapable of fulfilling their critical requirements for trustworthiness, security, and reliability. In the case of missile-defense systems, there are far too many unknowns; significant risks would always remain.

    Some people advocate attacking incoming objects in the boost phase -- which might seem conceptually easier to detect and pinpoint, although it is likely to inspire earlier deployment of multiple warheads and decoys. Clearly, this concept also has some serious practical limitations. Other alternative approaches (diplomatic, international agreements, mandatory inspections, etc.) also need to be considered, especially if they can result in greater likelihood of success, lower risks of escalation, and enormous cost savings. The choices should not be limited to just the currently proposed U.S. approach and a boost-phase defense, but to other approaches as well -- including less technologically intensive ones.

    Important criteria should include honesty and integrity in assessing the past tests, detailed architectural analyses (currently missing), merits of various other alternatives, and overall risks. Given all of the unknowns and uncertainties in technology and the potential social consequences, the decision process needs to be much more thoughtful, careful, patient, and depoliticized. It should openly address the issues raised by its critics, rather than attempting to hide them. It should encompass the difficulties of defending against unanticipated types of decoys and the likelihood of weapon delivery by other routes. It should not rely solely on technological solutions to problems with strong nontechnological components. Some practical realism is essential. Rushing into a decision to deploy an inherently unworkable concept seems ludicrous, shameful, and wasteful. The ultimate question is this: Reflecting on the track record of similar projects in the past and of software in general, would we trust such a software-intensive system? If we are not willing to trust it, what benefit would it have?

    1. David L. Parnas, Software Aspects of Strategic Defense Systems, American Scientist, 73, 5, Sep-Oct 1985, 432-440; Comm. ACM 28, 12, Dec 1985, 1326-1335. In Computerization and Controversy: Value Conflicts and Social Changes (edited by C. Dunlop and R. Kling), Academic Press, Boston, March 1991; also in other languages. http://www.crl.mcmaster.ca/SERG/parnas.homepg

    ========================================================

    Inside Risks 122, CACM 43, 8, August 2000

    Shrink-Wrapping Our Rights

    Barbara Simons

    Laws relating to computers, software, and the Internet are being proposed and passed at such a breathless rate that even those of us trying to follow them are having trouble keeping up. Unfortunately, some bad laws, such as the Uniform Computer Information Transactions Act (UCITA), are likely to encourage other bad laws, such as proposals to increase surveillance of the Internet. Yet, few people have heard of UCITA, an extraordinary example of a legal proposal with far-reaching consequences. Because commerce is regulated at the state level in the United States, UCITA is being considered in several states; Virginia and Maryland have passed it.

    UCITA will write into state law some of the most egregious excesses contained in shrink-wrap software licenses. These include statements that disclaim liability for any damages caused by the software, regardless of how irresponsible the software manufacturer might have been. Shrink-wrap licenses may forbid reverse engineering, even to fix bugs. Manufacturers may prohibit the non-approved use of proprietary formats. They can prohibit the publication of benchmarking results. By contrast, software vendors may modify the terms of the license, with only e-mail notification. They may remotely disable the software if they decide that the terms of the license have been violated. There is no need for court approval, and it is unlikely that the manufacturer would be held liable for any harm created by the shutdown, whether or not the shutdown was groundless. (The mere existence of such mechanisms is likely to enable denial of service attacks from anywhere.)

    Since a small contractor probably will have a contract that holds him or her liable for damages, the little guy may be forced to pay for damages resulting from buggy commercial software. Furthermore, the small business owner may be unable to sell the software portion of the business to another company, because most shrink-wrap licenses require the permission of the software vendor before a transfer of software can occur.

    Very few manufacturers of other products have the chutzpah to disclaim all liability for any damage whatsoever caused by defects in their products, and most states restrict the effectiveness of such disclaimers. Software vendors base their non-liability claim on the notion that they are selling only licenses, not `goods'. Consequently, so the argument goes, U.S. federal and state consumer protection laws, such as the Magnuson-Moss Warranty Act, do not apply. The strong anti-consumer component of UCITA resulted in opposition from twenty-six state attorneys general, as well as consumer groups and professional societies such as the IEEE-USA, the U.S. Technology Policy Committee of ACM (USACM), and the Software Engineering Institute (SEI). (See [1] for more information about ACM's activities.)

    When most people learn of UCITA, they assume that the unreasonable components of software licenses won't survive court challenges. But because there is very little relevant case law, UCITA could make it difficult for courts to reverse the terms of a shrink-wrap license.

    Quoting from the state attorneys-general letter [2], ``We believe the current draft puts forward legal rules that thwart the common sense expectations of buyers and sellers in the real world. We are concerned that the policy choices embodied in these new rules seem to almost invariably favor a relatively small number of vendors to the detriment of millions of businesses and consumers who purchase computer software and subscribe to Internet services. ... [UCITA] rules deviate substantially from long established norms of consumer expectations. We are concerned that these deviations will invite overreaching that will ultimately interfere with the full realization of the potential of e-commerce in our states.''

    We know that it is almost impossible to write bug-free software. But UCITA will remove any legal incentives to develop trustworthy software, because there need be no liability. While the software industry is pressuring the states to pass UCITA, law enforcement is pressuring Congress to enact laws that increase law enforcement's rights to monitor e-mail and the Net. Congress, concerned about the insecurities of our information infrastructure, is listening. So, in addition to the risks relating to insecure and non-robust software implied by UCITA, we also have the risk of increased surveillance and the accompanying threats to speech and privacy.

    If you want to learn about the status of UCITA in your state and how you might get involved, information is available from a coalition of UCITA opponents [3].

    1. http://www.acm.org/usacm/copyright/

    2. http://www.tao.ca/wind/rre/0821.html

    3. http://www.4cite.org

    Barbara Simons has been President of the ACM for the past two years.

    [Added note, not in the CACM: Willis Ware offered the following comments, which are appended herewith. PGN]

    * UCITA not only harms the consumer, as pointed out above, but also intrudes on the industry's ability to build secure software; hence, it directly opposes federal efforts to protect the information infrastructure.

    * The Council of Europe is also a threat, with its draft treaty on Cybercrime. Its provisions run counter to the hard-won practices that the software industry has learned for producing trusted, reliable, and secure software.

    Willis Ware, willis@rand.org

    ========================================================

    Inside Risks 121, CACM 43, 7, July 2000

    Risks in Retrospect

    Peter G. Neumann

    Having now completed ten years of Inside Risks, we reflect here on what has happened in that time. In short, our basic conclusions have not changed much over the years -- despite many advances in the technology. Indeed, this lack of change itself seems like a serious risk. Overall, the potential risks have monotonously if not monotonically become worse, relative to increased system/network vulnerabilities and increased threats, and their consequent domestic and worldwide social implications with respect to national stability, electronic commerce, personal well-being, and many other factors.

    Enormous advances in computing power have diversely challenged our abilities to use information technology intelligently. Distributed systems and the Internet have opened up new possibilities. Security, reliability, and predictability remain seriously inadequate. Privacy, safety, and other socially significant attributes have suffered. Risks have increased in part because of greater complexity, worldwide connectivity, and dependence on systems and people of unknown trustworthiness; vastly more people are now relying on computers and the Internet; neophytes are diminishing the median level of risk awareness. The mass-market software marketplace eagerly creates new functionality, but is not sufficiently responsive to the needs of critical applications. The development process is often unmanageable for complex systems, which tend to be late, over budget, noncompliant, and in some cases cancelled altogether. Much greater discipline is needed. Many efforts seek quick-and-dirty solutions to complex problems, and long-time readers of this column realize how counterproductive that can be in the long run. The electric power industry has evidently gone from a mentality of ``robust'' to ``just-good-enough most-of-the-time''. The monocultural mass-market computer industry seems even less proactive. Off-the-shelf solutions are typically not adequate for mission-critical systems, and in some cases are questionable even in routine uses. The U.S. Government and state legislative bodies are struggling to pass politically appealing measures, but are evidently unable to address most of the deeper issues.

    Distributed and networked systems are inherently risky. Security is a serious problem, but reliability is also -- systems and networks often tend to fall apart on their own, without any provocation. In 1980, we had the accidental complete collapse of the ARPAnet. In 1990, we had the accidental AT&T long-distance collapse. In 1999, Melissa spread itself widely by e-mail, infecting Microsoft Outlook users. Just the first few months of 2000 saw extensive distributed denial-of-service attacks (Inside Risks, April 2000) and the ILOVEYOU e-mail Trojan horse that again exploited Microsoft Outlook features, propagating much more widely than Melissa. ILOVEYOU was followed by numerous copycat clones. The cost estimates of ILOVEYOU alone are already in the many billions of dollars (Love's Labor Lost?).

    Ironically, these rather simple attacks have demonstrated that relatively minimal technical sophistication can result in far-reaching effects; furthermore, dramatically less sophistication is required for subsequent copycat attacks. Filtering out attachments to an e-mail message that might contain executable content is not nearly enough. Self-propagating Trojan horses and worms do not require an unsuspecting user to open an attachment -- or even to read e-mail. Any Web page read on a system without significant security precautions represents a threat, considering the capabilities of ActiveX, Java, JavaScript, and PostScript (for example). With many people blindly using underprotected operating systems, the existing systemic vulnerabilities also create massive opportunities for direct penetrations and misuse. Thus, the damage could be much greater than the simple cases thus far. Massive penetrations, denials of service, system crashes, and network outages are characteristically easy to perpetrate, and can be parlayed into coordinated unfriendly-nation attacks on some of our national infrastructures. Much subtler attacks are also possible that might not be detected until too late, such as planting Trojan horses capable of remote monitoring, stealing sensitive information, and systematically compromising backups over a long period of time -- seriously complicating recovery. However, because such attacks have not happened with wide-scale devastation, most people seem to be rather complacent despite their own fundamental lack of adequate information security.

    It is clear that much greater effort is needed to improve the security and robustness of our computer systems. Although many technological advances are emerging in the research community, those that relate to critical systems seem to be of less interest to the commercial development community. Warning signs seem to be largely ignored. Much remains to be done, as has been recommended here for the past ten years.

    Neumann's Website http://www.csl.sri.com/neumann includes ``Risks in Our Information Infrastructures: The Tip of a Titanic Iceberg Is Still All That Is Visible,'' testimony for the May 10, 2000 hearing of the U.S. House Science Committee Subcommittee on Technology 2000, information on the ACM Risks Forum (which PGN moderates), etc.

    ========================================================

    Inside Risks 120, CACM 43, 6, June 2000

    Risks of Internet Voting

    Lauren Weinstein

    Risks in computer-related voting have been discussed here by PGN in November 1990 and by Rebecca Mercuri in November 1992 and 1993. Recently we've seen the rise of a new class of likely risks in this area, directly related to the massive expansion of the Internet and World Wide Web.

    This is not a theoretical issue -- the Arizona Democratic Party recently held its (relatively small) presidential primary, which was reported to be the first legally-binding U.S. public election allowing Web-based voting. Although there were problems related to confused voters and overloaded systems, the supporters of the AZ project (including firms providing the technology) touted the election as a major success. In their view, the proof was the increased voter turnout over the party's primary four years earlier (reportedly more than a six-fold increase). But the comparison is basically meaningless, since the previous primary involved an unopposed President Clinton -- hardly a cliffhanger.

    Now other states and even the federal government seem to be on the fast track toward converting every Web browser into a voting machine. In reality, this rush to permit such voting remains a highly risky proposition, riddled with serious technical pitfalls that are rarely discussed.

    Some of these issues are fairly obvious, such as the need to provide for accurate and verifiable vote counts and simultaneously enforcing rigorous authentication of voters (while still making it impossible to retroactively determine how a given person voted). All software involved in the election process should have its source code subject to inspection by trusted outside experts -- not always simple with proprietary ``off-the-shelf'' software. But even with such inspections, these systems are likely to have bugs and problems of various sorts, some of which will not be found and fixed quickly; it's an inescapable aspect of complex software systems.

    Perhaps of far greater concern is the apparent lack of understanding suggested by permitting the use of ordinary PC operating systems and standard Web browsers for Internet voting. The use of digital certificates and ``secure'' Web sites for such voting can help identify connections and protect the communications between voters and the voting servers, but those are not where the biggest risks are lurking. In the recent mass releases of credit-card numbers and other customer information, it was typically the security at the servers themselves that was at fault, not communications security. The same kinds of security failures leading to private information disclosure or unauthorized modifications are possible with Internet voting, just as in the commercial arena. Also, imagine the ideal targets that Internet voting servers would make for denial-of-service attacks. What better way to demonstrate power over the Internet than to prevent people from voting as they had expected? At the very least, it would foster inconvenience and anger. Such attacks would also be likely to cause increased concerns regarding how Internet voting might skew voter participation in elections -- between those persons who are Internet-equipped and those who do not have convenient Internet access. Other factors of fairness are also involved, such as the multiple days of voting allowed only for online voters in the Arizona case, or the ways in which online voting might significantly exacerbate the age-old scourge of votes being ``sold'' to other persons.

    Trust in the election process is at the very heart of the world's democracies. Internet voting is a perfect example of an application for which rushing into deployment could have severe negative risks and repercussions of enormous importance.

    Weinstein (lauren@vortex.com) moderates the PRIVACY Forum (http://www.vortex.com/privacy). He is Co-Founder of People For Internet Responsibility (PFIR, http://www.pfir.org), which includes a longer statement on Internet voting.

    ========================================================

    Inside Risks 119, CACM 43, 5, May 2000

    Internet Risks

    Lauren Weinstein and Peter G. Neumann

    The Internet is expanding at an unprecedented rate. However, along with the enormous potential benefits, almost all of the risks discussed here in past columns are relevant, in many cases made worse by the Internet -- for example, due to widespread remote-access capabilities, ever-increasing communication speeds, the Net's exponential growth, and weak infrastructure. This month we summarize some of the risks that are most significant, although we can only skim the surface.

    Internet use is riddled with vulnerabilities relating to security, reliability, service availability, and overall integrity. As noted last month, denials of service are easy to perpetrate. But more serious attacks are also relatively easy, including penetrations, insider misuse, and fraudulent e-mail. Internet video, audio, and voice are creating huge new bandwidth demands that risk overloads. Some organizations that have become hooked on Internet functionality are now incapable of reverting to their previous modes of operation. We cite just a few examples of risks to personal privacy and integrity that are intensified by the Internet:

    * The Internet's vast communications and powerful search engines enable large-scale data abuses. Massive data mining efforts intensify many problems, including identity theft. Cookies are one complex component of Web technology, and possess both positive and highly negative attributes, depending on how they are used.

    * False information abounds, either accidentally or with evil intent.

    * Privacy policies relating to encryption, surveillance, and Net-tapping raise thorny issues. Digital-certificate infrastructures raise integrity problems.

    * Anonymity and pseudo-anonymity have useful purposes but also can foster serious abuses.

    * Obtrusive advertising, spamming, overzealous filtering, and Internet gambling (often illegal) are increasing.

    * The many risks involved in Internet voting are not well understood, even as some jurisdictions rush ahead with fundamentally insecure implementations.

    * Nonproprietary free software and open-source software have opened up new challenges.

    The question of who controls the Internet is a tricky matter. In general, the Internet's lack of central control is both a blessing and a curse! Various governments seem to desire pervasive Internet monitoring capabilities, and in some cases also to control access and content. Many corporate interests and privacy advocates want to avoid such scenarios in most cases. Domain naming is controversial and exacerbates a number of intellectual property and other issues that already present problems. Mergers are tending to reduce competition.

    The global nature of the Internet intensifies many of the problems that previously seemed less critical. Local, national, and international jurisdictional issues are complicated by the lack of geographical boundaries. Legislatures are rushing to pass new laws, often without understanding technological realities.

    The Uniform Computer Information Transactions Act (UCITA) is currently being considered by state legislatures. Although championed by proprietary software concerns, it has received strong opposition from 24 state Attorneys General, the Bureau of Consumer Protection, the Policy Planning Office of the Federal Trade Commission, professional and trade associations, and many consumer groups. It tends to absolve vendors from liability, and could be a serious impediment to security research. Opposition views from USACM and IEEE-USA are at http://www.acm.org/usacm/copyright/ and http://www.ieeeusa.org/forum/POSITIONS/ucita.html, respectively.

    There are also many social issues, including the so-called digital divide between the technological haves and have-nots. Educational institutions are increasingly using the Internet, providing the potential for wonderful resources, but also frequently serving as something of a lowest common denominator in the learning process. Controversies over the mandated use of seriously flawed filtering technology in Internet environments further muddy the situation.

    The potentials of the Internet must be tempered with some common sense relating to the risks of misuse and abuse. Technological solutions to social problems have proven to be generally ineffective, as have social solutions to technological problems. It is crucial that we all become active, as individuals, organizations, and communities, in efforts to bring some reasonable balance to these increasingly critical issues. The benefits generally outweigh the risks, but let's not ignore the risks!

    Weinstein (lauren@vortex.com) moderates the PRIVACY Forum (http://www.vortex.com/privacy) and Neumann (neumann@csl.sri.com) moderates the ACM Risks Forum (http://catless.ncl.ac.uk/Risks). They are also the founders of People For Internet Responsibility (PFIR - http://www.pfir.org), which has assembled a growing enumeration of Internet risks issues as well as position statements on Internet voting, legislation, hacking, etc. There are of course many organizations devoted to particular subsets of these important issues. We hope that PFIR will be an effective resource in working with them on a wide range of Internet issues.

    ========================================================

    Inside Risks 118, CACM 43, 4, April 2000

    Denial-of-Service Attacks

    Peter G. Neumann

    WARNING: Although it is April, this is neither an April Fools' column nor a foolish concern.

    A Funny Thing Happened on my Way to the (Risks) Forum this month. I had planned to write a column on the ever-burgeoning risks of denial-of-service (DoS) attacks relating to the Internet, private networks, computer systems, cable modems and DSL (for which spoofing is a serious risk), and the critical infrastructures that we considered here in January 1998.

    DoS threats are rampant, although there are only a few previous cases in the RISKS archives -- for example, involving attacks on PANIX, WebCom, and Australian communications. Many types of DoS attacks do not even require direct access to the computer systems being attacked; instead, they exploit fundamental architectural deficiencies external to the systems themselves, rather than just the widespread weak links that permit internal exploitation.

    Well, just as I started to write this column in February 2000, an amazing thing happened. Within a three-day period, Yahoo, Amazon, eBay, CNN.com, Buy.com, ZDNet, E*Trade, and Excite.com were all subjected to total or regional outages of several hours caused by distributed denial-of-service (DDoS) attacks -- that is, multiple DoS attacks from multiple sources. Media moguls seem to have been surprised, but the DDoS concepts have been around for many years.

    Simple DoS flooding attacks (smurf, SYN flooding, ping-of-death) can be carried out remotely over the Net, without any system penetrations. Other DoS attacks may exploit security vulnerabilities that permit penetrations, followed by crashes or resource exhaustion. Some DDoS attack scripts (Trinoo, Tribal Flood Network TFN and TFN2K, Stacheldraht) combine the two modes, using the Internet to install attack software on multiple unwitting intermediary systems (``zombies''), from which simultaneous DoS attacks can be launched on target systems without penetrating the targets themselves. In general, DDoS attacks can cause massive outages, as well as serious congestion even on unattacked sites.

    DoS attacks are somewhat like viruses -- some specific instances can be detected and blocked, but no general preventive solutions exist today or are likely in the future. DDoS attacks are even more insidious. They are difficult to detect because they can come from many sources; trace-back is greatly complicated when they use spoofed IP addresses.

    Common security advice can help a little in combating DDoS: install and properly configure firewalls (blocking nasty traffic); isolate machines from the Net when connections are not needed; demand cryptographic authenticators rather than reusable fixed passwords, to reduce masqueraders. But those ideas are clearly not enough. We also need network protocols that are less vulnerable to attack -- for example, protocols that reject traffic with bogus IP addresses -- and that more effectively accommodate emerging applications (interactive and noninteractive, symmetric and asymmetric, broadcast and point-to-point, etc.). For starters, we need firewalls and routers that are more defensive; cryptographic authentication among trustworthy sites; systems with fewer flaws and fewer risky features; monitoring that enables early warnings and automated reconfiguration; constraints on Internet service providers to isolate bad traffic; systems and networks that can be more easily administered; and much greater collaboration among different system administrations.
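
    As an illustration of the kind of per-source monitoring and automated response mentioned above, here is a minimal Python sketch (ours, not from the column, and not any particular product): a naive SYN-rate threshold. The function name and the numeric limits are hypothetical.

      import time
      from collections import defaultdict

      # Hypothetical thresholds; real deployments tune these and track far more state.
      MAX_SYNS = 100     # connection attempts allowed per source ...
      WINDOW = 10.0      # ... within this many seconds

      recent_syns = defaultdict(list)   # source IP -> timestamps of recent SYNs

      def register_syn(src_ip, now=None):
          """Record a SYN from src_ip; return True if the source looks like a flooder."""
          now = time.time() if now is None else now
          kept = [t for t in recent_syns[src_ip] if now - t < WINDOW]
          kept.append(now)
          recent_syns[src_ip] = kept
          return len(kept) > MAX_SYNS

    A router or firewall acting on such a signal could throttle the offending source, but an attacker using spoofed or widely distributed source addresses -- exactly what the DDoS scripts above do -- largely evades this kind of per-source check, which is why the broader architectural remedies listed above matter.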

    As attack scripts become increasingly available, DoS and DDoS attacks become even more trivial to launch. It is probably naive to hope that the novelty of these attacks might wear off (which is what many people hoped in the early days of viruses, although today there are reportedly over 50,000 virus types). But if the attacks were to disappear for a while, the incentives to address the problem might also diminish.

    The FBI and its National Infrastructure Protection Center (NIPC) are taking a role in trying to track down attackers, but the flakiness of the technology itself makes tracing difficult. Above all, it is clear that this is a problem in desperate need of some technological and operational approaches; relying on law enforcement as a deterrent is not adequate -- especially against attacks mounted from outside of the U.S. This is not just a national problem: every computerized nation has similar risks, and attacks on any site can be launched from anywhere in the world.

    The Internet has grown without overall architectural design (as have many of its applications). Although this may have accelerated expansion, some current uses vastly exceed what is prudent. We urgently need to launch a concerted effort to improve the security and robustness of our computer-communication infrastructures. The recent denial-of-service problems are only a foretaste of what could happen otherwise.

    See Results of the Distributed-Systems Intruder Tools Workshop http://www.cert.org/reports/dsit_workshop.pdf and some partial antidotes such as for Trinoo http://www.fbi.gov/nipc/trinoo.htm . Peter Neumann is the Moderator of the on-line Risks Forum (comp.risks).

    ========================================================

    Inside Risks 117, CACM 43, 3, March 2000

    A Tale of Two Thousands

    Peter G. Neumann

    It was the best of times, it was the worst of times, but now it is time to reflect on the lessons of Y2K. Ironically, if the extensive media hype had not stimulated significant progress in the past half-year, serious social disruptions could have occurred. However, the colossal remediation effort is simultaneously (1) a success story that improved systems and people's technical knowledge, (2) a wonderful opportunity to have gotten rid of some obsolete systems (although there were some unnecessary hardware upgrades where software fixes would have sufficed), and (3) a manifestation of long-term short-sightedness. After billions of dollars have been spent worldwide, we must wonder why a little more foresight did not avoid many of the Y2K problems in the first place.

    * System development practice. System development should be based on constructive measures throughout the life-cycle: well-specified requirements, inherently sound system architectures, and intelligently applied system engineering and software engineering. The Y2K problem is a painful example of the absence of good practice -- somewhat akin to its much neglected but long-suffering stepchild, the less glitzy but persistent buffer-overflow problem. For example, systematic use of concepts such as abstraction, encapsulation, information hiding, and object-orientation could have allowed the construction of efficient programs in which the representation of dates could be changed easily when needed.

    * Integrity of remediation. In the rush to remediation, relatively little attention was paid to the integrity of the process and ensuing risks. Many would-be fixes introduced new bugs. Windowing deferred some problems until later. Opportunities existed for theft of proprietary software, blackmail, financial fraud, and insertion of Trojan horses -- some of which may not be evident for some time.

    * What happened? In addition to various problems triggered before the new year, there were many Y2K date-time screwups. See the on-line Risks Forum, volume 20, beginning with issue 71, and http://www.csl.sri.com/neumann/cal.html for background. The Pentagon had a self-inflicted Y2K mis-fix that resulted in a complete loss of the ability to process satellite intelligence data for 2.5 hours at midnight GMT on the year turnover, with the fix for that leaving only a trickle of data from 5 satellites for several days afterward. The Pentagon DefenseLINK site was disabled by a preventive mistake. The Kremlin press office could not send e-mail. In New Zealand, an automated radio station kept playing the New Year's Eve 11pm news hour as the most recent, because 99 is greater than 00. Toronto abandoned its non-Y2K-compliant bus-schedule information system altogether, rather than fix it. Birth certificates for British newborns were dated 1900. Some credit-card machines failed, and some banks repeatedly charged for the same transaction -- once a day until a previously available fix was finally installed. Various people received bills for cumulative interest since 1900. At least one person was temporarily rich, for the same reason. In e-mail, Web sites, and other applications, strange years were observed beginning on New Year's Day (and continuing until patched), notably the years 100 (99+1), 19100 (19 concatenated with 99+1), 19000 (19 concatenated with 99+1 (mod 100)), 1900, 2100, 3900, and even 20100; a minimal sketch of the underlying representation error appears after this list. Some Compaq sites said it was Jan 2 on Jan 1. U.K.'s NPL atomic clock read Dec 31 1999 27:00 at 2am GMT on New Year's Day. But all of these anomalies should be no surprise; as we noted here in January 1991, calendar arithmetic is a tricky business, even in the hands of expert programmers.

    * Conclusions: Local optimization certainly seems advantageous in the short term (e.g., to reduce immediate costs), but is often counterproductive in the long term. The security and safety communities (among others) have long maintained that trying to retrofit quality into poorly conceived systems is throwing good money after bad. It is better to do things right from the outset, with a clear strategy for evolution and analysis -- so that mistakes can be readily fixed whenever they are recognized. Designing for evolvability, interoperability, and the desired functional ``-ities'' (such as security, reliability, survivability in the presence of arbitrary adversities) is difficult. Perhaps this column should have been entitled ``A Tale of Two -ities'' -- predictability and dependability, both of which are greatly simplified when the requirements are defined in advance.
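
    To make the representation issue concrete, here is a minimal Python sketch (ours, not from the column) of the common legacy idiom behind the ``100'' and ``19100'' anomalies listed above: storing the year as a count of years since 1900 and then gluing a literal ``19'' in front of it for display.

      import time

      # C's struct tm stores the year as years since 1900; Python returns the
      # full year, so we subtract 1900 to mimic the legacy representation.
      tm = time.gmtime(946684800)             # 1 Jan 2000, 00:00:00 UTC
      years_since_1900 = tm.tm_year - 1900    # == 100 once 1999 rolls over

      print(years_since_1900)                 # 100   -- the bare "year 100" glitch
      print("19" + str(years_since_1900))     # 19100 -- 19 concatenated with 99+1

    An encapsulated date abstraction, as advocated in the first item above, would have confined such formatting assumptions to one place where they could be fixed once.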

    Between grumbles about the large cost of Y2K remediation and views on what might have happened had there not been such an intensive remediation effort, we still have much to learn. (Will this experience be repeated for Y10K?) Perhaps the biggest Y2K lessons are simply further reminders that greater foresight would have been beneficial, that fixes themselves are prone to errors, and that testing is inherently incomplete (especially merely advancing a clock to New Year's Eve and observing the rollover). We need better system-oriented education and training. Maybe it is also time for certification of developers, especially when dealing with critical systems.

    http://catless.ncl.ac.uk/Risks/ and ftp://www.sri.com/risks/ house the official archives for the ACM Risks Forum, moderated by PGN.

    ========================================================

    Inside Risks 116, CACM 43, 2, February 2000

    Risks of PKI: Electronic Commerce

    Carl Ellison and Bruce Schneier

    [*** NOTE *** The next-to-last sentence in the first paragraph of this column below is the correct version. Somehow the version printed in the CACM was garbled. While I am editorializing, I might mention Understanding Public-Key Infrastructure by Carlisle Adams and Steve Lloyd, MacMillan, 1999. PGN]

    Open any popular article on public-key infrastructure (PKI) and you're likely to read that a PKI is desperately needed for E-commerce to flourish. Don't believe it. E-commerce is flourishing, PKI or no PKI. Web sites are happy to take your order if you don't have a certificate and even if you don't use a secure connection. Fortunately, you're protected by credit-card rules.

    The main risk in believing this popular falsehood stems from the cryptographic concept of ``non-repudiation''.

    Under old, symmetric-key cryptography, the analog to a digital signature was a message authentication code (MAC). If Bob received a message with a correct MAC, he could verify that it hadn't changed since the MAC was computed. If only he and Alice knew the key needed to compute the MAC and if he didn't compute it, Alice must have. This is fine for the interaction between them, but if the message was ``Pay Bob $1,000,000.00, signed Alice'' and Alice denied having sent it, Bob could not go to a judge and prove that Alice sent it. He could have computed the MAC himself.

    A digital signature does not have this failing. Only Alice could have computed the signature. Bob and the judge can both verify it without having the ability to compute it. That is ``non-repudiation'': the signer cannot credibly deny having made the signature. Since Diffie and Hellman discussed this concept in their 1976 paper, it has become part of the conventional wisdom of the field and has made its way into standards documents and various digital signature laws.
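
    A minimal Python sketch (ours, not the authors') of why a MAC cannot provide non-repudiation: anyone who holds the shared key can produce a valid tag, so the tag proves nothing to a third party about who wrote the message. The key and message shown are of course hypothetical.

      import hashlib, hmac

      shared_key = b"key known to both Alice and Bob"   # hypothetical shared secret
      message = b"Pay Bob $1,000,000.00, signed Alice"

      # The tag Alice would attach to authenticate the message to Bob ...
      alices_tag = hmac.new(shared_key, message, hashlib.sha256).digest()

      # ... is exactly the tag Bob (or anyone else holding the key) can compute
      # alone, so a judge cannot tell who actually created the message.
      bobs_tag = hmac.new(shared_key, message, hashlib.sha256).digest()
      assert hmac.compare_digest(alices_tag, bobs_tag)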

    However, practice differs from theory.

    Alice's digital signature does not prove that Alice signed the message, only that her private key did. When writing about non-repudiation, cryptographic theorists often ignore a messy detail that lies between Alice and her key: her computer. If her computer were appropriately infected, the malicious code could use her key to sign documents without her knowledge or permission. Even if she needed to give explicit approval for each signature (e.g., via a fingerprint scanner), the malicious code could wait until she approved a signature and sign its own message instead of hers. If the private key is not in tamper-resistant hardware, the malicious code can just steal the key as soon as it's used.

    While it's legitimate to ignore such details in cryptographic research papers, it is just plain wrong to assume that real computer systems implement the theoretical ideal. Our computers may contain viruses. They may be accessible to passers-by who could plant malicious code or manually sign things with our keys. Should we then need to deny some signature, we would have the burden of proving the negative: that we didn't make the signature in question, against the presumption that we did.

    Digital signatures are not the first mechanical signatures. There have been check-writing machines for at least 50 years, but in the USA their signatures are not legally binding without a contract between two parties declaring them acceptable. Digital signatures are proposed to be binding without such a contract. Yet the computers doing digital signatures are harder to secure than mechanical check-writers, which could be locked away between uses.

    Other uses of PKI for E-commerce are tamer, but there are risks there too.

    A CA signing SSL server certificates may have none of the problems described above, but that doesn't imply that the lock in the corner of your browser window means that the web page came from where it says it did. SSL deals with URLs, not with page contents, but people actually judge where a page came from by the logos displayed on the page, not by its URL and certainly not by some certificate they never look at.

    Using SSL client certificates as if they carried E-commerce meaning is also risky. They give a name for the client, but a merchant needs to know if it will be paid. Client certificates don't speak to that. Digital signatures might be used with reasonable security for business-to-business transactions. Businesses can afford to turn signing computers into single-function devices, kept off the net and physically available only to approved people. Two businesses can sign a paper contract listing signature keys they will use and declaring that digital signatures will be accepted. This has reasonable security and reflects business practices, but it doesn't need any PKI -- and a PKI might actually diminish security.

    Independent of its security problems, it seems that PKI is becoming a big business. Caveat emptor.

    For more details, see http://www.counterpane.com/PKI-risks.html.

    Carl Ellison is a security architect at Intel in Hillsboro Oregon. Bruce Schneier is the CTO of Counterpane.

    [See January 2000: Risks of PKI: Secure E-Mail]

    ========================================================

    Inside Risks 115, CACM 43, 1, January 2000

    Risks of PKI: Secure E-Mail

    Carl Ellison and Bruce Schneier

    Public-key infrastructure (PKI), usually meaning digital certificates from a commercial or corporate certificate authority (CA), is touted as the current cure-all for security problems.

    Certificates provide an attractive business model. They cost almost nothing to manufacture, and you can dream of selling one a year to everyone on the Internet. Given that much potential income for CAs, we now see many commercial CAs producing literature, holding press briefings, and lobbying. But what good are certificates? In particular, are they any good for E-mail? What about free certificates, as with PGP?

    For e-mail, you want to establish whether a given keyholder is the person you think (or want) it to be. When you verify signed e-mail, you hope to establish who sent the message. When you encrypt e-mail to a public key, you need to know who will be capable of reading it. This is the job certificates claim to do.

    An ID certificate is a digitally signed message from the issuer (signer or CA) to the verifier (user) associating a name with a public key. But, using one involves risks.

    The first risk is that the certificate signer might be compromised, through theft of signing key or corruption of personnel. Good commercial CAs address this risk with strong network, physical and personnel security. PGP addresses it with the ``web of trust'' - independent signatures on the same certificate.

    The next risk is addressed unevenly. How did the signer know the information being certified? PGP key signers are instructed to know personally the person whose key is being signed, but commercial CAs often operate on-line, without meeting the people whose keys they sign. One CA was started by a credit bureau, using its existing database for online authentication. Online authentication works if you have a shared secret, but there are no secrets in a credit bureau's database, because that data is for sale. Therefore, normal identity theft should be sufficient to get such a certificate. Worse, since credit bureaus are so good at collecting and selling data, any CA is hard pressed to find data for authentication that is not already available through some credit bureau.

    The next risk is rarely addressed. ID certificates are good only in small communities. That's because they use people's names. For example, one company has employees named: john.wilson, john.a.wilson, john.t.wilson, john.h.wilson and jon.h.wilson. When you met Mr. Wilson, did you ask which one he was? Did you even know you needed to ask? That's just one company, not the whole Internet. Name confusion in unsecured e-mail leads to funny stories and maybe embarrassment. Name confusion in certificates leads to faulty security decisions.

    To a commercial CA, the more clients it has the better. But the more it succeeds, the less meaningful its certificates become. Addressing this problem requires work on your part. You need to keep your namespace under control. With PGP, you could mark keys ``trusted'' (acting as a CA) only if they certify a small community (e.g., project members); otherwise, you could sign keys personally, and only when the certified name is meaningful to you. With some S/MIME mailers, you could disable trust in any CA that has too many (over 500?) clients and personally mark individual keys trusted instead. Meanwhile, you can print your public-key fingerprint (a hash value, sometimes called a thumbprint) on your business cards, so that others can certify/trust your key individually.
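
    For readers unfamiliar with fingerprints, here is a minimal Python sketch (an illustration, not any particular product's format): a fingerprint is simply a cryptographic hash of the encoded key material, short enough to print on a card and compare out of band. The key bytes shown are a placeholder.

      import hashlib

      # Placeholder for the DER- or PEM-encoded public key you would actually
      # read from a key file or certificate.
      public_key_bytes = b"...encoded public key bytes..."

      digest = hashlib.sha256(public_key_bytes).hexdigest()
      fingerprint = ":".join(digest[i:i+4] for i in range(0, len(digest), 4))
      print(fingerprint)   # compare by eye, by phone, or from a business card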

    There are other risks, also.

    Did the issuer verify that the keyholder controlled the associated private key? That's what the certificate claims.

    Does your mail agent check for key or certificate revocation? Few do.

    Finally, how well are the computers at both ends protected? Are private keys protected by password, and if so, how strong? Are they used in tamper-resistant hardware or merely in software? Do you have to provide the password for each operation, or is it cached? Is the encryption code itself protected from tampering? Are public (root) keys protected at all? Usually they aren't, but they need to be, to prevent false signature verification or encryption to an eavesdropper's key. Can a physical passer-by sign something with the signer's key or tamper with the software or public-key storage? Is your machine always locked?

    Real security is hard work. There is no cure-all, especially not PKI.

    For more details, see http://www.counterpane.com/PKI-risks.html.

    [February 2000: Risks of PKI in electronic commerce.]

    ========================================================

    Inside Risks 114, CACM 42, 12, December 1999

    Risks of Insiders

    Peter G. Neumann

    This month we consider some of the risks associated with insiders. For present purposes, an insider is simply someone who has been (explicitly or implicitly) granted privileges that authorize him or her to use a particular system or facility. This concept is clearly relative to virtual space and real time, because at any given moment a user may be an insider with respect to some services and an outsider with respect to others, with different degrees of privilege. In essence, insider misuse involves misuse of authorized privileges.

    Recent incidents have heightened awareness of the problems associated with insider misuse -- such as the Department of Energy's long-term losses of supposedly protected information within a generally collegial environment, and the Bank of New York's discovery of the laundering of billions of dollars involving Russian organized crime. The RISKS archives include many cases of insider misuse, with an abundance of financial fraud and other intentional misuse by privileged personnel in law enforcement, intelligence, and government tax agencies, and in motor-vehicle and medical databases. In addition, there are many cases of accidental insider screwups in financial services, medical applications, critical infrastructures, and computer system security administration. Accidental misuse may be effectively indistinguishable from intentional misuse, and in some cases has been claimed as a cover-up for intentional misuse. Related potential risks of insider misuse have been discussed previously on this page, such as in cryptographic key management and electronic voting systems.

    Although much concern has been devoted in the past to penetrations and other misuse by outsiders, insider threats have long represented serious problems in government and private computer-communication systems. However, until recently the risks have been largely ignored by system developers, application purveyors, and indeed governments.

    Today's operating systems and security-relevant application software frequently do not provide fine-grained differential access controls that can distinguish among different trusted users. Furthermore, there are often all-powerful administrator root privileges that are undifferentiated. In addition, many systems typically do not provide serious authentication (that is, something other than fixed reusable passwords flying around unencrypted) and basic system protection that might otherwise prevent insiders from masquerading as one another and making subversive alterations of systems and data.

    Too often it is assumed that once a user has been granted access, that user should then have widespread access to almost everything. (Furthermore, even when that assumption is not made, it is often difficult to prevent outsiders from becoming insiders.) Audit trails are typically inadequate (particularly with respect to insider misuse), and in some cases compromisable by privileged insiders. Existing commercial software for detecting misuse is oriented primarily toward intrusions by outsiders, not misuse by insiders (although a few ongoing research efforts are not so limited). Even more important, there is typically not even a definition of what constitutes insider misuse in any given system or application. Where there is no such definition of misuse, insider misuse certainly becomes difficult to detect! There are many such reasons why it is difficult to address the insider misuse problem.

    Insiders may have various advantages beyond just allocated privileges and access, such as better knowledge of system vulnerabilities and the whereabouts of sensitive information, and the availability of implicitly high human levels of trust within sensitive enclaves.

    We need better definitions of what is meant by insider misuse in specific applications (accidental and intentional, and in the latter case malicious and otherwise), better defenses to protect against such misuse, better techniques for detecting misuse when it cannot be prevented, better techniques for assessing the damage once misuse has been detected, and then better techniques for subsequent remediation to whatever extent is possible and prudent -- consistent with the desired security requirements. Techniques such as separation of duties, two-person controls, encryption with split keys, and enlightened management can also contribute. A comprehensive approach is essential.
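
    As one concrete example of the split-key and two-person-control techniques mentioned above, here is a minimal Python sketch (ours, not from the column) of XOR secret splitting: neither custodian's share reveals anything about the key, and only both shares together can reconstruct it.

      import secrets

      key = secrets.token_bytes(32)          # the sensitive key (e.g., for encryption)
      share_a = secrets.token_bytes(32)      # random share held by custodian A
      share_b = bytes(k ^ a for k, a in zip(key, share_a))   # share held by custodian B

      # Reassembly requires both insiders to cooperate:
      reassembled = bytes(a ^ b for a, b in zip(share_a, share_b))
      assert reassembled == key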

    A Workshop on Preventing, Detecting, and Responding to Malicious Insider Misuse was held in Santa Monica, CA, August 16-18, 1999, sponsored by several U.S. Government organizations. The purpose of the workshop was to address the issues outlined above. The report of that workshop is now available on-line (http://www2.csl.sri.com/insider-misuse/). It surveys the problems presented by insider misuse and outlines various approaches that were proposed at the workshop. The report is recommended reading for those of you concerned with these problems.

    Peter Neumann (http://www.csl.sri.com/neumann/) is the Moderator of the on-line Risks Forum (comp.risks).

    ========================================================

    Inside Risks 113, CACM 42, 11, November 1999

    Risks of Content Filtering

    Peter G. Neumann and Lauren Weinstein

    [Note: This is an adaptation of the version originally submitted to ACM. The quote cited from [2] was omitted from the final version by the CACM editors, because of space limitations. PGN]

    The Internet and World Wide Web may be the ultimate double-edged swords. They bring diverse opportunities and risks. Just about anything anyone might want is on the Net, from the sublime to the truly evil. Some categories of information could induce argument forever, such as what is obscene or harmful, whereas others may be more easily categorized -- hate literature, direct misinformation, slander, libel, and other writings or images that serve no purpose other than to hurt or destroy.

    Proposed legal sanctions, social pressures, and technological means to prevent or limit access to what is considered detrimental all appear to be inadequate as well as full of risky side-effects.

    Web self-rating is a popular notion, and is being promoted by the recent ``Internet Content Summit'' as an alternative to government regulation. The ACLU believes both government intervention and self-rating are undesirable, because self-rating schemes will cause controversial speech to be censored, and will be burdensome and costly. The ACLU also points out that self-rating will encourage rather than prevent government regulation, by creating the infrastructure necessary for government-enforced controls. There's also a concern that self-rating schemes will turn the Internet into a homogenized environment dominated exclusively by large commercial media operations [1, 1--19]. Furthermore, what happens to sites that refuse to rate themselves (persona non grata status?), or whose self-ratings are disputed? It seems to be a no-win situation.

    The reliability of third-party filtering is notoriously low. As noted in RISKS, sites such as ``middlesex.gov'' and ``SuperBowlxxx.com'' were blocked simply due to their domain names. Commercial site-censoring filters have blocked NOW, EFF, Mother Jones, HotWired, Planned Parenthood, and many others [1, 29--31]. The PRIVACY Forum was blocked by a popular commercial filter when one of its raters equated discussions of the social issues surrounding cryptography with prohibited ``criminal skills''! Sites may not know that they've been blocked (there usually is no notification), and procedures for appealing blocking are typically unavailable or inadequate.
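
    The domain-name over-blocking described above is easy to reproduce with a toy model. Here is a minimal Python sketch (purely illustrative; not the logic of any actual filtering product) of a substring blocklist of the kind that snares ``middlesex.gov'' and ``SuperBowlxxx.com'' on spelling alone.

      # Hypothetical "adult content" patterns of the sort naive filters match on.
      BLOCKED_SUBSTRINGS = ["sex", "xxx"]

      def naively_blocked(domain):
          d = domain.lower()
          return any(s in d for s in BLOCKED_SUBSTRINGS)

      for domain in ["middlesex.gov", "superbowlxxx.com", "epic.org"]:
          print(domain, "->", "blocked" if naively_blocked(domain) else "allowed")
      # The first two are flagged purely on their spellings, regardless of content.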

    In a survey comparing a traditional search engine with a popular ``family-friendly'' search engine, the Electronic Privacy Information Center attempted to access such phrases as American Red Cross, San Diego Zoo, Smithsonian, Christianity, and Bill of Rights. In every case, the ``friendly'' engine prevented access to 90% of the relevant materials, and in some cases 99% of what would be available without filters [1, 53--66]. Remarkable!

    The Utah Education Network (www.uen.org) used filtering software that blocked public schools and libraries from accessing the Declaration of Independence, the U.S. Constitution, George Washington's Farewell Address, the Bible, the Book of Mormon, the Koran, all of Shakespeare's plays, standard literary works, and many completely noncontroversial Web sites [1, 67--81]. Efforts to link federal funding to the mandatory use of filters in libraries, schools, and other organizations are clearly coercive and counterproductive.

    With respect to children's use of the Internet, there is no adequate universal definition of ``harmful to minors,'' nor is such a definition ever likely to be satisfactory. Attempts to mandate removal of vaguely-defined ``harmful'' materials from the Internet (and perhaps the next step, from bookstores?) can result only in confusion and the creation of a new class of forbidden materials which will become even more sought after!

    Parents need to reassert guidance roles that they often abdicate. Children are clearly at risk today, but not always in the ways that some politicians would have us believe. ``Indeed, perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection'' [2]. Responsible parenting is not merely plopping kids down alone in front of a computer screen and depending on inherently defective filtering technology that is touted as both allowing them to be educated and ``protecting'' them.

    As always in considering risks, there are no easy answers -- despite the continual stampede to implement incomplete solutions addressing only tiny portions of particular issues, while creating all sorts of new problems. Freedom-of-speech matters are particularly thorny, and seemingly among the first to be sublimated by commercial interests and seekers of simplistic answers. ``Filters and Freedom,'' an extraordinary collection of information on these topics [1], should be required reading.

    We must seek constructive alternatives, most likely nontechnological in nature. However, we may ultimately find few, if any, truly workable alternatives between total freedom of speech (including its dark side) and the specter of draconian censorship. With the Net still in its infancy, we haven't begun to understand the ramifications of what will certainly be some of the preeminent issues of the next century.

    References

    1. Filters and Freedom: Free Speech Perspectives on Internet Content Controls, David L. Sobel (Ed.), www.epic.org, ISBN 1-893044-06-8. http://www.epic.org/filters&freedom/

    (See Fahrenheit 451.2: Is Cyberspace Burning? How Rating and Blocking Proposals May Torch Free Speech on the Internet, ACLU, Reference 1, pages 1--19.)

    (See Sites Censored by Censorship Software, Peacefire, Reference 1, pages 29--31.)

    (See Faulty Filters: How Content Filters Block Access to Kid-Friendly Information on the Internet, EPIC, Reference 1, pages 53--66.)

    (See Censored Internet Access in Utah Public Schools and Libraries, Censorware, Reference 1, pages 67--81.)

    2. ACLU v. Reno (``Reno II''), 31 F. Supp. 2d 473 (E.D.Pa. 1999) at 498 (Memorandum Opinion enjoining enforcement of the Child Online Protection Act).

    "Harry J. Foxwell" responded with the following suggestion:

    ``My current solution: adult supervision, and no filters. See http://mason.gmu.edu/~hfoxwell/fieldtrip.html .''

    ========================================================

  • Risks of Relying on Cryptography, Schneier, Oct 1999
  • The Trojan Horse Race, Schneier, Sep 1999
  • Biometrics: Uses and Abuses, Schneier, Aug 1999
  • Information is a Double-Edged Sword, PGN, Jul 1999
  • Risks of Y2K, PGN, Jun 1999
  • Ten Myths about Y2K Inspections, Parnas, May 1999
  • A Matter of Bandwidth, Weinstein, Apr 1999
  • Bit-Rot Roulette, Weinstein, Mar 1999
  • Robust Open-Source Software, PGN, Feb 1999
  • Our Evolving Public Telephone Networks, Schneider/Bellovin, Jan 1999
  • The Risks of Hubris, Ladkin, Dec 1998
  • Towards Trustworthy Networked Information Systems, Schneider, Nov 1998
  • Risks of E-Education, PGN, Oct 1998
  • Y2K Update, PGN, Sep 1998
  • Computer Science and Software Engineering: Filing for Divorce?, P.Denning, Aug 1998
  • Laptops in Congress?, PGN, Jul 1998
  • Infrastructure Risk Reduction, Lawson, Jun 1998
  • In Search of Academic Integrity, Mercuri, May 1998
  • On Concurrent Programming, Schneider, Apr 1998
  • Are Computers Addictive?, PGN, Mar 1998
  • Internet Gambling, PGN, Feb 1998
  • Protecting the Infrastructures, PGN, Jan 1998
  • More System Development Woes, PGN, Dec 1997
  • Software Engineering: An Unconsummated Marriage, David L. Parnas, Sep 1997
  • Spam, Spam, Spam!, PGN and Lauren Weinstein, Jun 1997
  • Webware Security, Ed Felten, Apr 1997
  • Corrupted Polling, Mercuri, Nov 1993
  • Voting-Machine Risks, Mercuri, Nov 1992
  • Should Computer Professionals Be Certified?, PGN, Feb 1991
  • Risks in Computerized Elections, PGN, Nov 1990
    ========================================================

    Inside Risks 112, CACM 42, 10, October 1999

    Risks of Relying on Cryptography

    Bruce Schneier

    Cryptography is often treated as if it were magic security dust: ``sprinkle some on your system, and it is secure; then, you're secure as long as the key length is large enough--112 bits, 128 bits, 256 bits'' (I've even seen companies boast of 16,000 bits.) ``Sure, there are always new developments in cryptanalysis, but we've never seen an operationally useful cryptanalytic attack against a standard algorithm. Even the analyses of DES aren't any better than brute force in most operational situations. As long as you use a conservative published algorithm, you're secure.''

    This just isn't true. Recently we've seen attacks that hack into the mathematics of cryptography and go beyond traditional cryptanalysis, forcing cryptography to do something new, different, and unexpected. For example:

    * Using information about timing, power consumption, and radiation of a device when it executes a cryptographic algorithm, cryptanalysts have been able to break smart cards and other would-be secure tokens. These are called ``side-channel attacks.''

    * By forcing faults during operation, cryptanalysts have been able to break even more smart cards. This is called ``failure analysis.'' Similarly, cryptanalysts have been able to break other algorithms based on how systems respond to legitimate errors.

    * One researcher was able to break RSA-signed messages when formatted using the PKCS standard. He did not break RSA, but rather the way it was used. Just think of the beauty: we don't know how to factor large numbers effectively, and we don't know how to break RSA. But if you use RSA in a certain common way, then in some implementations it is possible to break the security of RSA ... without breaking RSA.

    * Cryptanalysts have analyzed many systems by breaking the pseudorandom number generators used to supply cryptographic keys. The cryptographic algorithms might be secure, but the key-generation procedures were not. Again, think of the beauty: the algorithm is secure, but the method to produce keys for the algorithm has a weakness, which means that there aren't as many possible keys as there should be. (A minimal sketch of this kind of weakness follows this list.)

    * Researchers have broken cryptographic systems by looking at the way different keys are related to each other. Each key might be secure, but the combination of several related keys can be enough to cryptanalyze the system.
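
    Here is a minimal Python sketch (ours, not from the column, and not any fielded product) of the key-generation weakness described in the list above: if a key is derived from a non-cryptographic generator seeded with the time of day, an attacker who can guess the approximate generation time faces only a few thousand candidate keys instead of 2^128.

      import random, time

      def weak_keygen(seed_time):
          rng = random.Random(int(seed_time))       # deterministic, guessable PRNG
          return rng.getrandbits(128).to_bytes(16, "big")

      # The victim generates a "128-bit" key seeded by the clock ...
      key = weak_keygen(time.time())

      # ... and an attacker who knows roughly when simply replays every second
      # in the plausible window, shrinking the keyspace from 2**128 to a few thousand.
      now = int(time.time())
      candidates = [weak_keygen(s) for s in range(now - 3600, now + 1)]
      assert key in candidates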

    The common thread through all of these exploits is that they've all pushed the envelope of what constitutes cryptanalysis by using out-of-band information to determine the keys. Before side-channel attacks, the open crypto community did not think about using information other than the plaintext and the ciphertext to attack algorithms. After the first paper, researchers began to look at invasive side channels, attacks based on introducing transient and permanent faults, and other side channels. Suddenly there was a whole new way to do cryptanalysis.

    Several years ago I was talking with an NSA employee about a particular exploit. He told me how a system was broken; it was a sneaky attack, one that I didn't think should even count. ``That's cheating,'' I said. He looked at me as if I'd just arrived from Neptune.

    ``Defense against cheating'' (that is, not playing by the assumed rules) is one of the basic tenets of security engineering. Conventional engineering is about making things work. It's the genesis of the term ``hack,'' as in ``he worked all night and hacked the code together.'' The code works; it doesn't matter what it looks like. Security engineering is different; it's about making sure things don't do something they shouldn't. It's making sure security isn't broken, even in the presence of a malicious adversary who does everything in his power to make sure that things don't work in the worst possible way at the worst possible times. A good attack is one that the engineers never even thought about.

    Defending against these unknown attacks is impossible, but the risk can be mitigated with good system design. The mantra of any good security engineer is: "Security is not a product, but a process." It's more than designing strong cryptography into a system; it's designing the entire system such that all security measures, including cryptography, work together. It's designing the entire system so that when the unexpected attack comes from nowhere, the system can be upgraded and resecured. It's never a matter of "if a security flaw is found," but "when a security flaw is found."

    This isn't a temporary problem. Cryptanalysts will forever be pushing the envelope of attacks. And whenever crypto is used to protect massive financial resources (especially with world-wide master keys), these violations of designers' assumptions can be expected to be used more aggressively by malicious attackers. As our society becomes more reliant on a digital infrastructure, the process of security must be designed in from the beginning.

    Bruce Schneier is CTO of Counterpane Internet Security, Inc. You can subscribe to his free e-mail newsletter, Crypto-Gram, at http://www.counterpane.com.

    =======================================================

    Inside Risks 111, CACM 42, 9, September 1999

    The Trojan Horse Race

    Bruce Schneier

    1999 is a pivotal year for malicious software (malware) such as viruses, worms, and Trojan horses. Although the problem is not new, Internet growth and weak system security have evidently increased the risks.

    Viruses and worms survive by moving from computer to computer. Prior to the Internet, computers (and viruses!) communicated relatively slowly, mostly through floppy disks and bulletin boards. Antivirus programs were initially fairly effective at blocking known types of malware from entering personal computers, especially when there were only a handful of viruses. But now there are over 10,000 virus types; with e-mail and Internet connectivity, the opportunities and speed of propagation have increased dramatically.

    Things have changed, as in the Melissa virus, the Worm.ExploreZip worm, and their inevitable variants, which arrive via e-mail and use e-mail software features to replicate themselves across the network. They mail themselves to people known to the infected host, enticing the recipients to open or run them. They propagate almost instantaneously. Antiviral software cannot possibly keep up. And e-mail is everywhere. It runs over Internet connections that block everything else. It tunnels through firewalls. Everyone uses it.

    Melissa uses features in Microsoft Word (with variants using Excel) to automatically e-mail itself to others, and Melissa and Worm.ExploreZip make use of the automatic mail features of Microsoft Outlook. Microsoft is certainly to blame for creating the powerful macro capabilities of Word and Excel, blurring the distinction between executable files (which can be dangerous) and data files (which hitherto seemed safe). They will be to blame when Outlook 2000, which supports HTML, makes it possible for users to be attacked by HTML-based malware simply by opening e-mail. DOS set the security state-of-the-art back 25 years, and MS has continued that legacy to this day. They certainly have a lot to answer for, but the real cause is more subtle.

    It's easy to point fingers, including at virus creators or at the media for publicity begetting further malware. But a basic problem is the permissive nature of the Internet and computers attached to it. As long as a program has the ability to do anything on the computer on which it is running, malware will be incredibly dangerous. Just as firewalls protect different computers on the same network, we're going to need something to protect different processes running on the same computer.

    This malware cannot be stopped at the firewall, because e-mail tunnels it through the firewall; it then pops up on the inside and does damage. Thus far, the examples have been mild, but they represent a proof of concept. The effectiveness of firewalls will diminish as we open up more services (e-mail, Web, etc.), as we add increasingly complex applications on the internal net, and as misusers catch on. This ``tunnel-inside-and-play'' technique will only get worse.

    Another problem is rich content. We know we have to make Internet applications (sendmail, rlogin) more secure. Melissa exploits security problems in Microsoft Word; variants exploit Excel. Suddenly, these are network applications. Has anyone bothered to check for buffer overflow bugs in PDF viewers? Now, we must.

    Antivirus software can't help much. If Melissa can infect 1.2 million computers in the hours before a fix is released, that's a lot of damage. What if the code took pains to hide itself, so that an infection remained hidden? What if a worm targeted a single individual, deleting itself from any computer whose userID didn't match a particular reference? How long would it take before that one was discovered? What if it e-mailed a copy of the user's login script (most contain passwords) to an anonymous e-mail box before self-erasing? What if it automatically encrypted outgoing copies of itself with PGP or S/MIME? Or signed itself? (Signing keys are often left lying around.) What about Back Orifice for NT? Even a few minutes' thought yields some pretty scary possibilities.

    It's impossible to push the problem off onto users with ``do you trust this message/macro/application?'' confirmations. Sure, it's unwise to run executables from strangers, but both Melissa and Worm.ExploreZip arrive pretending to be friends and associates of the recipient. Worm.ExploreZip even replied to real subject lines. Users can't make good security decisions under ideal conditions; they don't stand a chance against malware capable of social engineering.

    What we're seeing is the convergence of several problems: the inadequate security in personal-computer operating systems, the permissiveness of networks, interconnections between applications on modern operating systems, e-mail as a vector to tunnel through network defenses and as a means to spread extremely rapidly, and the traditional naivete of users. Simple patches are inadequate. A large distributed system communicating at the speed of light is going to have to accept the reality of infections at the speed of light. Unless security is designed into the system from the bottom up, we're constantly going to be swimming against a strong tide.

    Bruce Schneier is President of Counterpane Systems. Phone: 612-823-1098
    See http://www.counterpane.com
    101 E Minnehaha Parkway, Minneapolis, MN 55419 Fax: 612-823-1590
    Free crypto newsletter. See: http://www.counterpane.com

    =======================================================

    Inside Risks 110, CACM 42, 8, August 1999

    Biometrics: Uses and Abuses

    Bruce Schneier

    Biometrics are seductive. Your voiceprint unlocks the door of your house. Your iris scan lets you into the corporate offices. You are your own key. Unfortunately, the reality isn't that simple.

    Biometrics are the oldest form of identification. Dogs have distinctive barks. Cats spray. Humans recognize faces. On the telephone, your voice identifies you. Your signature identifies you as the person who signed a contract.

    In order to be useful, biometrics must be stored in a database. Alice's voice biometric works only if you recognize her voice; it won't help if she is a stranger. You can verify a signature only if you recognize it. To solve this problem, banks keep signature cards. Alice signs her name on a card when she opens the account, and the bank can verify Alice's signature against the stored signature to ensure that the check was signed by Alice.

    There is a variety of different biometrics. In addition to the three mentioned above, there are hand geometry, fingerprints, iris scans, DNA, typing patterns, signature geometry (not just the look of the signature, but the pen pressure, signature speed, etc.). The technologies are different, some are more reliable than others, and they'll all improve with time.

    Biometrics are hard to forge: it's hard to put a false fingerprint on your finger, or make your iris look like someone else's. Some people can mimic others' voices, and Hollywood can make people's faces look like someone else's, but these are specialized or expensive skills. When you see someone sign his name, you generally know it is he and not someone else.

    On the other hand, some biometrics are easy to steal. Imagine a remote system that uses face recognition as a biometric. ``In order to gain authorization, take a Polaroid picture of yourself and mail it in. We'll compare the picture with the one we have on file.'' What are the attacks here?

    Take a Polaroid picture of Alice when she's not looking. Then, at some later date, mail it in and fool the system. The attack works because while it is hard to make your face look like Alice's, it's easy to get a picture of Alice's face. And since the system does not verify when and where the picture was taken--only that it matches the picture of Alice's face on file--we can fool it.

    A keyboard fingerprint reader faces a similar attack. If the verification takes place across a network, the system may be insecure. An attacker won't try to forge Alice's real thumb, but will instead try to inject her digital thumbprint into the communications.

    The moral is that biometrics work well only if the verifier can verify two things: one, that the biometric came from the person at the time of verification, and two, that the biometric matches the master biometric on file. If the system can't do that, it can't work. Biometrics are unique identifiers, but they are not secrets. You leave your fingerprints on everything you touch, and your iris patterns can be observed anywhere you look.
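
    A toy sketch (mine, not the column's) of those two conditions: a verifier that checks only whether a submitted template matches the one on file accepts any replayed copy of the template, whereas binding each verification to a fresh single-use challenge, computed by a trusted reader at scan time, also establishes that the biometric was presented now. All names and data below are hypothetical.

      import hashlib, os

      # Hashes of the master templates on file, plus outstanding challenges.
      enrolled = {"alice": hashlib.sha256(b"alice-thumbprint-template").hexdigest()}
      pending = {}

      def naive_verify(user, template):
          # Checks only that the template matches the one on file;
          # a stolen digital template can be replayed forever.
          return hashlib.sha256(template).hexdigest() == enrolled.get(user)

      def new_challenge(user):
          pending[user] = os.urandom(16)
          return pending[user]

      def verify_fresh(user, template, response):
          # Also demands a response bound to a fresh nonce, assuming a
          # trusted reader computes the response at the moment of the scan.
          nonce = pending.pop(user, None)
          if nonce is None or not naive_verify(user, template):
              return False
          return response == hashlib.sha256(nonce + template).hexdigest()

      # A captured template alone passes the naive check ...
      print(naive_verify("alice", b"alice-thumbprint-template"))       # True
      # ... but a response recorded earlier fails once the nonce changes.
      new_challenge("alice")
      stale = hashlib.sha256(b"old-nonce" + b"alice-thumbprint-template").hexdigest()
      print(verify_fresh("alice", b"alice-thumbprint-template", stale))  # False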

    Biometrics also don't handle failure well. Imagine that Alice is using her thumbprint as a biometric, and someone steals the digital file. Now what? This isn't a digital certificate, where some trusted third party can issue her another one. This is her thumb. She has only two. Once someone steals your biometric, it remains stolen for life; there's no getting back to a secure situation.

    And biometrics are necessarily common across different functions. Just as you should never use the same password on two different systems, the same encryption key should not be used for two different applications. If my fingerprint is used to start my car, unlock my medical records, and read my electronic mail, then it's not hard to imagine some very insecure situations arising.

    Biometrics are powerful and useful, but they are not keys. They are not useful when you need the characteristics of a key: secrecy, randomness, the ability to update or destroy. They are useful as a replacement for a PIN, or a replacement for a signature (which is also a biometric). They can sometimes be used as passwords: a user can't choose a weak biometric in the same way they can choose a weak password.

    Biometrics are useful in situations where the connection from the reader to the verifier is secure: a biometric unlocks a key stored locally on a PCMCIA card, or unlocks a key used to secure a hard drive. In those cases, all you really need is a unique hard-to-forge identifier. But always keep in mind that biometrics are not secrets.

    Bruce Schneier is President of Counterpane Systems. You can subscribe to his free security newsletter, CRYPTO-GRAM, at http://www.counterpane.com.

    =======================================================

    Inside Risks 109, CACM 42, 7, July 1999

    Information is a Double-Edged Sword

    Peter G. Neumann

    As we begin the tenth year of this monthly column, it seems eminently clear that information technology has enormous benefits, but that it can also be put to undesirable use. Market forces have produced many wonderful products and services, but they do not ensure beneficial results. Many systems are technologically incapable of adequately supporting society-critical uses, and may further handicap the disadvantaged. Good education and altruism are helpful, whereas legislation and other forms of regulation have been less successful. Ultimately, we are all responsible for realistically assessing risks and acting accordingly.

    The rapidly expanding computer-communication age is bringing with it enormous new opportunities that in many ways outpace the agrarian and industrial revolutions that preceded it. As with any technology, the potentials for significant social advances are countered by serious risks of misuse -- including over-aggressive surveillance. Here are four currently relevant examples.

    1. Satellite technology makes possible an amazingly detailed and up-to-date picture of what is going on almost everywhere on the planet, ostensibly for the benefit of mankind. However, until now most applications of the imagery have been for military purposes, with a lurking fear by the U.S. Department of Defense that the same technology could be used against it. In 1994, the U.S. Government seemingly relaxed its controls, approving a private satellite to be launched by a company called Space Imaging -- which expects that its clients will use its information for urban planning, environmental monitoring, mapping, assessing natural disasters, resource exploration, and other benevolent purposes. This opportunity may lead to renewed efforts to restrict the available content -- what can be monitored, where, when, and by whom -- because of the risks of misuse. In the long run, there are likely to be many such private satellites. (Unfortunately, the first such satellite, Ikonos 1, with one-square-meter resolution, disappeared from contact 8 minutes after launch on April 27, 1999, although we presume Space Imaging will try again.)

    2. The Internet has opened up unprecedented new opportunities. But it is also blamed for pornography, bomb-making recipes, hate-group literature, the Littleton massacre, spamming, and fraud. Consequently, there are ongoing attempts to control its use -- especially in repressive nations, but even in some local constituencies that seek easy technological answers to complex social problems. In the long run, there are likely to be many private networks. However, as long as they are implemented with flaky technology and are coupled to the Internet, their controls will tend to be ineffective. Besides, most controls on content are misguided and incapable of solving the problems that they are attempting to solve.

    3. Computer systems themselves have created hitherto unbelievable advances in almost every discipline. Readers of this column realize the extent to which the risks to the public inherent in computer technologies must also be kept in mind, especially those involving people (designers, purveyors, users, administrators, government officials, etc.) who were not adequately aware of the risks. Furthermore, computers can clearly be used for evil purposes, which again suggests to some people restrictions on who can have advanced computers. In the long run, such controls seem unrealistic.

    4. Good cryptography that is well implemented can facilitate electronic commerce, nonspoofable private communications, meaningful authentication, and the salvation of oppressed individuals in times of crisis. It can of course also be used to hide criminal or otherwise antisocial behavior -- which has led to attempts by governments to control its spread. However, obvious risks exist with the use of weak crypto that can be easily and rapidly broken. The French government seems to have reversed its course, realizing that its own national well-being is dependent on the use of strong cryptography that is securely implemented, with no trapdoors. In the long run, there is likely to be a plethora of good cryptography freely available worldwide, which suggests that law enforcement and national intelligence gathering need to seek alternatives other than export controls and surreptitiously exploitable trap-doored crypto.

    In attempting to control societal behavior, there are always serious risks of overreacting. About 100 years ago, the Justice Department reportedly proposed in all seriousness that the general public should not be permitted to have automobiles -- which would allow criminals to escape from the scene of a crime. Some of that mentality is still around today. However, the solutions must lie elsewhere. Let's not bash the Internet and computers for the ways in which they can be used. Remember that technology is a double-edged sword, and that the handle is also a weapon.

    See the archives of the online Risks Forum (comp.risks) at http://catless.ncl.ac.uk/Risks/, and the current index at http://www.csl.sri.com/neumann/illustrative.html, as well as Peter Neumann's ACM Press/Addison-Wesley book, Computer-Related Risks, for myriad cases of information systems and people whose behavior was other than what was expected -- and what might be done about it.

    =======================================================

    Inside Risks 108, CACM 42, 6, June 1999

    Risks of Y2K

    Peter G. Neumann and Declan McCullagh

    As we approach January 1, 2000, it's time to review what progress is being made and what risks remain. Our conclusion: Considerable uncertainty continues; optimists predict only minor problems, and pessimists claim that the effects will be far-reaching. The uncertainty is itself unsettling.

    Y2K fixes seem to have accelerated in the months since the Inside Risks column last September. For example, most U.S. Government agencies and departments claim they have advanced significantly in the past year, with some notable exceptions; see http://www.house.gov/reform/gmit and late-breaking worries (such as the Veterans Administration). However, some agencies have weakened their definitions of which systems are critical, and government auditors warn that the success rates are based on self-reported data.

    The U.S. Government has recently been exuding a reverse-spin air of confidence, perhaps in an attempt to stave off panic. However, many states, local governments, and other countries are lagging. International reliance on unprepared nations is a serious cause for concern. Some vendor software is yet to be upgraded. Although many systems may appear to work in isolation, they depend on computer infrastructures (such as routers, telecommunications, and power), which must also be Y2K-proof. The uncertainty that results from the inherent incompleteness of local testing is also a huge factor. Cynics might even suggest that the federal government's stay-calm message is misleading, because there is no uniform definition of compliance, no uniform definition of testing, and little independent validation and verification. And then there are desires for legislating absolution from Y2K liability.

    There is a real risk of popular overreaction. One of the strangest risks is the possibility of widespread panic inspired by people who fear the worst, even if the technology works perfectly. Many people are already stockpiling cash, food supplies, fuel, even guns. Bulk food companies and firearm manufacturers report record sales. Some Government officials fear that accelerated purchases in 1999 and reduced demand in early 2000 could spark a classic inventory recession.

    There is also a potential risk of government overreaction. As far back as June 1998, Robert Bennett, the Utah Republican who chairs the U.S. Senate's Y2K committee, asked what plans the Pentagon has ``in the event of a Y2K-induced breakdown of community services that might call for martial law." Y2K fears prompted city officials in Norfolk, Nebraska, to divert funds from a new mug-shot system to night-vision scopes, flashlights for assault rifles, gas masks, and riot gear. The Federal Emergency Management Agency and the Canadian government will have joint military-civilian forces on alert by late December. For the first time since the end of the Cold War, a Cabinet task force is devising emergency disaster responses, and thus some concerns about potentially draconian Government measures arise. Senators Frank Church and Charles Mathias wisely pointed out in a 1973 report that emergency powers ``remain a potential source of virtually unlimited power for a President should he choose to activate them.''

    There is also a risk of underreaction and underpreparation. Sensibly anticipating something like a bad earthquake or massive hurricane seems prudent. Some people have lived without electricity for prolonged periods of time, for example, for six weeks in Quebec two winters ago. Water also is a precious resource, as a million Quebecois who were nearly evacuated learned. However, fundamental differences exist between Y2K preparedness and hurricane preparedness. The Y2K transition will occur worldwide (and even in space). Hurricanes and tornados are localized, and experience over many years has given us a reasonably accurate picture of the extent of what typically happens. But we have little past experience with Y2K-like transitions.

    It is not uncommon for officials to assure the public that things are under control. People look to leaders for reassurance, and this is a natural response. Under normal circumstances, such statements are no more disturbing than any other law or regulation. However, calling out troops and declaring a national emergency are plans that deserve additional scrutiny and public debate. In a worst-case scenario of looting and civil unrest, the involvement of the military in urban areas could extend to martial law, the suspension of due-process rights, and seizures of industrial or personal property. U.S. Defense Department regulations let the military restore ``public order when sudden and unexpected civil disturbances, disaster, or calamities seriously endanger life and property and disrupt normal governmental functions."

    It might be more reassuring if discussions were happening in public -- but some critical meetings happen behind closed doors. Increasingly, legislators are discussing details about Y2K only in classified sessions, and a new law that had overwhelming bipartisan support in Congress bars the public from attending meetings of the White House's Y2K council. A partial antidote for uncertainty is the usual one: increased openness and objective scrutiny. U.S. Supreme Court Justice Louis Brandeis said it well: ``Sunlight is the best disinfectant."

    Declan McCullagh is the Washington bureau chief for Wired News. He writes frequently about Y2K. PGN is PGN.

    =======================================================

    Inside Risks 107, CACM 42, 5, May 1999

    Ten Myths about Y2K Inspections

    David Lorge Parnas

    As I write this, the alarmist reports about Y2K are being replaced with more comforting statements. Repeatedly, I hear, ``We have met the enemy and fixed the bugs." I would find such statements comforting if I had not heard them before when they were untrue. How often have you seen a product, presumably well-tested, sent to users full of errors? By some estimates, 70% of first fixes are not correct. Why should these fixes, made to old code by programmers who are not familiar with the systems, have a better success rate?

    The Y2K mistake would never have been made if programmers had been properly prepared for their profession. There were many ways to avoid the problem without using more memory. Some of these were taught 30 years ago and are included in software design textbooks. The programmers who wrote this code do not have my confidence, but we are now putting a lot of faith in many of the same programmers. Have they been re-educated? Are they now properly prepared to fix the bugs or to know if they have fixed them? In discussing this problem with a variety of programmers and engineers, I have heard a few statements that strike me as unprofessional ``urban folklore". These statements are false, but I have heard each of them used to declare victory over a Y2K problem.

    Myth 1: ``Y2K is a software problem. If the hardware is not programmable, there is no problem."

    Obviously, hardware that stores dates can have the same problems.

    Myth 2: If the system does not have a real-time clock, there is no Y2K problem.

    Systems that simply relay a date from one system with a clock to other systems can have problems.

    Myth 3: If the system does not have a battery to maintain date/time during a power outage, there can be no Y2K problem.

    Date information may enter the system from other sources and cause problems.

    Myth 4: If the software does not process dates, there can be no Y2K problem.

    The software may depend for data on software that does process dates, such as the operating system or software in another computer.

    Myth 5: Software that does not need to process dates is ``immune" to Y2K problems.

    Software obtained by ``software re-use" may process dates even though it need not do so.

    Myth 6: Systems can be tested one-at-a-time by specialized teams. If each system is fixed, the combined systems will work correctly.

    It is possible to fix two communicating systems for Y2K so that each works in isolation but the two are not compatible. Many of the fixes today simply move the 100-year window. Not only will the problem reappear when people are even less familiar with the code, but two systems whose windows have been moved differently may not be compatible when they communicate. Where two such systems communicate, each may pass tests with flying colors, but ...
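
    A minimal sketch (mine, not Parnas's) of the windowing incompatibility: two systems that each ``fixed'' two-digit years by choosing different pivot values will each pass their own tests, yet disagree about the very same data when they communicate.

      def expand_year(yy, pivot):
          # A typical windowing fix: two-digit years below the pivot are
          # read as 20xx, the rest as 19xx.
          return 2000 + yy if yy < pivot else 1900 + yy

      # System A chose a pivot of 30; system B chose 50.
      for yy in (29, 45, 67):
          a = expand_year(yy, pivot=30)
          b = expand_year(yy, pivot=50)
          print(yy, a, b, "agree" if a == b else "DISAGREE")
      # 29 -> 2029/2029 agree; 45 -> 1945/2045 DISAGREE; 67 -> 1967/1967 agree.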

    Myth 7: If no date dependent data flows in or out of a system while it is running, there is no problem.

    Date information may enter the system on EPROMs, diskettes, etc., during a build.

    Myth 8: Date stamps in files don't matter.

    Some of the software in the system may process the date stamps, e.g. to make sure that the latest version of a module is being used, when doing backups, etc.

    Myth 9: Planned testing, using ``critical dates" is adequate.

    As Harlan Mills used to say, ``Planned testing is a source of anecdotes, not data". Programmers who overlook a situation or event may also fail to test it.

    Myth 10: You can rely on keyword scan lists.

    Companies are assembling long lists of words that may be used as identifiers for date-dependent data. They seem to be built on the assumption that programmers are monolingual English speakers who never misspell a word.
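
    A tiny sketch (mine, not Parnas's) of why keyword scanning under-reports: a scanner keyed to English date words misses identifiers written in other languages or simply misspelled. The scan list and the legacy snippet below are invented for illustration.

      import re

      DATE_WORDS = ("date", "year", "yy", "month", "day")   # a typical scan list

      def flag_date_identifiers(source):
          # Flag identifiers containing any listed keyword.
          idents = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source))
          return sorted(i for i in idents if any(w in i.lower() for w in DATE_WORDS))

      legacy = "annee_fact = 98 ; jahr2 = 99 ; exp_daet = 0 ; bill_year = 97"
      print(flag_date_identifiers(legacy))
      # Only ['bill_year'] is flagged; 'annee_fact' (French), 'jahr2' (German),
      # and the misspelled 'exp_daet' slip through unexamined.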

    As long as I hear such statements from those who are claiming victory over Y2K, I remain concerned. I was a sceptic when the gurus were predicting disaster, and I remain a sceptic now that they are claiming success.

    David L. Parnas, P.Eng., holds the NSERC/Bell Industrial Research Chair in Software Engineering, and is Director of the Software Engineering Programme in the Department of Computing and Software at McMaster University, Hamilton, Ontario, Canada - L8S 4L7.

    =======================================================

    Inside Risks 106, CACM 42, 4, Apr 1999

    A Matter of Bandwidth

    Lauren Weinstein

    The previously incomprehensible increases in communication capacity now appearing almost daily may be enabling a quantum leap in one of the ultimately most promising, yet underfunded, areas of scientific research--teleportation. But to an extent even greater than with many other facets of technology, funding shortfalls in this area can carry with them serious risks to life, limb, and various other useful body parts.

    Teleportation, also known as matter transmission (MT), has a long history of experimentation, largely by independent researchers (use of pejorative terms such as ``mad scientists'' in reference to these brilliant early innovators is usually both unwarranted and unfair). Their pioneering work established the theoretical underpinnings for matter transmission, and also quickly illustrated the formidable hurdles and risks associated with the practical implementation of teleportation systems.

    Early studies suggested that physical matter could be teleported between disparate spatial locations through mechanisms such as enhanced quantum probability displacement, matter-energy scrambling, or artificial wormholes. Unfortunately, these techniques proved difficult to control precisely and had unintended side-effects (see Distant Galactic Detonations from Unbalanced Space-Time MT Injection Nodes, Exeter and Meacham, 1954).

    During this period, a major teleportation system risk factor relating to portal environmental controls was first clearly delineated, in the now classic work by the late Canadian MT researcher André Delambre (Pest Control of Airborne Insects in Avoidance of MT Matrix Reassembly Errors, 1958), later popularized as the film ``The Fly and I'' (1975).

    Problems such as these led to the development of the MT technology still currently considered to be the most promising, officially referred to as ``Matter Displacement via Dedicated Transmission, Replication, and Dissolution,'' but more commonly known as ``Copy, Send, and Burn.'' In this technique, an exact scan of the transmission object (ranging from an inorganic item to a human subject) records all aspects of that object to the subatomic level, including all particle positions and charges. The amount of data generated by this process is vast, so data compression techniques are often applied at this stage (however, ``lossy'' compression algorithms are to be avoided in MT applications, particularly when teleporting organic materials).

    Next, the data is transmitted to the distant target point for reassembly, where an exact duplicate of the original object is recreated from locally available carbon-based or other molecular materials (barbecue charcoal briquettes have often been used as an MT reconstruction source matrix with reasonably good results).

    After verifying successful reconstruction at the target location, the final step is to disintegrate the original object, leaving only the newly assembled duplicate, which is completely indistinguishable from the original in all respects. It is strongly recommended that the verification step not be shortcut in any manner. Attempts to use various cyclic-redundancy checks, Reed-Solomon coding, and other alternatives to (admittedly time-consuming) bit-for-bit verification of the reassembled objects have yielded some unfortunate situations, several of which have become all too familiar through tabloid articles. Some early MT researchers had advocated omission of the final ``dissolution'' step in the teleportation process, citing various metaphysical concerns. However, the importance of avoiding the long-term continuance of both the source and target objects was clearly underscored in the infamous ``Thousand Clowns'' incident at the Bent Fork National Laboratory in 1979. For similar reasons, use of multicast protocols for teleportation is contraindicated except in highly specialized (and mostly classified) environments.

    The enormous amounts of data involved with MT have always made the availability and cost of transmission bandwidth a severe limiting factor. But super-capacity single and multimode fiber systems, the presence of higher speed routers, and other developments, have rendered these limitations nearly obsolete.

    There are still serious concerns, of course. It is now assumed that Internet-based TCP/IP protocols will be used for most MT applications, the protests of the X.400 Teleportation Study Committee notwithstanding. Protocol design is critical. Packet fragmentation can seriously degrade MT performance parameters, and UDP protocols are not recommended except where robust error correction and retransmission processes are in place. Incidents such as running out of disk spool space or poor backup procedures are intolerable in production teleportation networks. The impact of web ``mirror sites'' on MT operational characteristics is still a subject of heated debate.

    We've come a long way since the early MT days when 300-bps 103-type modems would have required centuries to transmit a cotton swab between two locations. With the communications advances now at our disposal, it appears likely that, so long as we take due consideration of the significant risks involved, the promise of practical teleportation may soon be only a phone call away.

    Lauren Weinstein (lauren@vortex.com) of Vortex Technology (http://www.vortex.com) is the Moderator of the PRIVACY Forum. He avoids being a teleportation test subject.

    =======================================================

    Inside Risks 105, CACM 42, 3, Mar 1999

    Bit-Rot Roulette

    Lauren Weinstein

    It's obvious that our modern society is becoming immensely dependent on stored digital information, a trend that will only increase dramatically. Ever more aspects of our culture that have routinely been preserved in one or another analog form are making transitions into the digital arena. Consumers who have little or no technical expertise are now using digital systems as replacements for all manner of traditionally analog storage. Film-based snapshots are replaced by digital image files. Financial records move from the file cabinet to the PC file system.

    But whereas we now have long experience with the storage characteristics, lifetimes, and failure modes of traditional media such as newsprint, analog magnetic tape, and film stock, such is not the case with the dizzying array of new digital storage technologies that seem to burst upon the scene at an ever increasing pace. How long will the information we entrust to these systems really be safe and retrievable in a practical manner? Do the consumer users of these systems understand their real-world limitations and requirements?

    From magnetic disks to CDs, from DVD-ROMs to high density digital tape, we're faced with the use of media whose long-term reliability can be estimated only through the use of accelerated testing methodologies, themselves often of questionable reliability. And before we've even had a chance to really understand one of these new systems, it's been rendered obsolete by the next generation with even higher densities and speeds.

    Even if we assume the physical media themselves to be reasonably stable over time, the availability of necessary hardware and software to retrieve information from media that are no longer considered ``current'' can be very difficult to assure. Have you tried to get a file from an 8-inch CP/M floppy recently? There are already CD-ROMs that are very difficult to read because the necessary operating system support is obsolete and largely unavailable.

    Of course technology marches onward, and the capability of the new systems to store ever-increasing amounts of data in less and less space is truly remarkable. A big advantage of digital systems is that it's possible, at least theoretically, to copy materials to newer formats as many times as necessary, without change or loss of data -- a sort of digital immortality.
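
    One small, hedged illustration of what that copying discipline looks like in practice (my sketch, not part of the column): computing a cryptographic digest before and after each migration confirms that the new medium holds exactly the same bits as the old one.

      import hashlib

      def file_digest(path, chunk=1 << 20):
          # Stream the file so arbitrarily large archives can be checked.
          h = hashlib.sha256()
          with open(path, "rb") as f:
              while block := f.read(chunk):
                  h.update(block)
          return h.hexdigest()

      def verify_copy(original_path, migrated_path):
          # True only if the migrated copy is bit-for-bit identical.
          return file_digest(original_path) == file_digest(migrated_path)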

    But such a scenario works only if the users of the systems have the technical capability to make such copies, and an understanding of the need to do so on an ongoing basis. While it can be argued that the ultimate responsibility for keeping tabs on data integrity and retrievability rests on the shoulders of the user, there has been vastly insufficient effort by the computer industry to educate consumers regarding the realities of these technologies.

    Another issue is that when a digital medium fails, it frequently does so catastrophically. The odds of retrieving usable audio from a 40-year-old 1/4-inch analog magnetic tape are sometimes far higher than for a DAT (Digital Audio Tape) only a few years old stored under suboptimal conditions. Digital systems have immense capacities, but their tight tolerances present new vulnerabilities as well, which need to be understood by their often mostly non-technical users.

    Many consumers who are now storing their important data in digital form are completely oblivious to the risks. Many don't even do any routine backups, and ever increasing disk capacities have tended to exacerbate this trend. The belief that ``if it's digital, it's reliable'' is taken as an article of faith -- an attitude reinforced by advertising mantras.

    We need to appreciate the viewpoint of the increasing number of persons who treat PCs as if they were toasters. The design of OS and application software systems doesn't necessarily help matters. Even moving files from an old PC to a new one can be a mess for the average consumer under the popular OS environments. Many manufacturers quickly cease fixing bugs in hardware drivers and the like after only a few years. It's almost as if they expect consumers to simply throw out everything and start from scratch every time they upgrade. The technical support solution of ``reinstall everything from the original installation disk'' is another indication of the ``disposable'' attitude present in some quarters of the industry.

    If we expect consumers to have faith in digital products, there must be a concerted effort to understand consumer needs and capabilities. Hardware and software systems must be designed with due consideration to backwards compatibility, reliability, and long-term usability by the public at large. Marketing hype must not be a substitute for honest explanations of the characteristics of these systems and their proper use. Failure in this regard puts at risk the good will of the consumers who hold the ultimate power to control the directions that digital technology will be taking into the future.

    Lauren Weinstein (lauren@vortex.com) of Vortex Technology (http://www.vortex.com) is the Moderator of the PRIVACY Forum.

    =======================================================

    Inside Risks 104, CACM 42, 2, Feb 1999

    Robust Open-Source Software

    Peter G. Neumann

    Closed-source proprietary software, which is seemingly the lifeblood of computer system entrepreneurs, tends to have associated risks:

    * Unavailability of source code reduces on-site adaptability and repairability.

    * Inscrutability of code prohibits open peer analysis (which otherwise might improve reliability and security), and masks the reality that state-of-the-art development methods do not produce adequately robust systems.

    * Lack of interoperability and composability often induces inflexible monolithic solutions.

    * Where software bloat exists, it often hinders subsetting.

    A well-known (but certainly not the only) illustration of these risk factors is Windows NT 5.0. It reportedly will have 48 million lines of source code in the kernel alone, plus 7.5 million lines of associated test code. Unfortunately, the code on which security, reliability, and survivability of system applications depend is essentially all 48M lines plus application code. (Recall the divide-by-zero in an NT application that brought the Yorktown Aegis missile cruiser to a halt: RISKS, vol. 19, no. 88.) In critical applications, an enormous amount of untrustworthy code may have to be taken on faith.

    Open-source software offers an opportunity to surmount these risks of proprietary software. ``Open Source'' is registered as a certification mark, subject to the conditions of The Open Source Definition http://www.opensource.org/osd.html, which has various explicit requirements: unrestricted redistribution; distributability of source code; permission for derived works; constraints on integrity; nondiscriminatory practices regarding individuals, groups, and fields of endeavor; transitive licensing of rights; context-free licensing; and noncontamination of associated software. For background, see the opensource.org Website, which cites Gnu GPL, BSD Unix, the X Consortium, MPL, and QPL as conformant examples. Additional useful sources include the Free Software Foundation (http://www.gnu.org). The Netscape browser (an example of open, but proprietary software), Perl, Bind, the Gnu system with Linux, Gnu Emacs, Gnu C, GCC, etc., are further examples of what can be done. Also, Diffie-Hellman is now in the public domain.

    In many critical applications, we desperately need operating systems and applications that are meaningfully robust, where ``robust'' is an intentionally inclusive term embracing meaningful security, reliability, availability, and system survivability, in the face of a wide and realistic range of potential adversities -- which might in some cases include hardware faults, software flaws, malicious and accidental exploitation of systemic vulnerabilities, environmental hazards, unfortunate animal behaviors, etc.

    We need significant improvements on today's software, both open-source and proprietary, in order to overcome myriad risks (see the RISKS archives, http://catless.ncl.ac.uk/Risks/, or my Illustrative Risks document, http://www.csl.sri.com/~neumann/). When commercial systems are not adequately robust, we should consider how sound open-source components might be composed into demonstrably robust systems. This requires an international collaborative process, open-ended, long-term, far-sighted, somewhat altruistic, incremental, and with diverse participants from different disciplines and past experiences. Pervasive adherence to good development practice is also necessary (which suggests better teaching as well). The process also needs some discipline, in order to avoid rampant proliferation of incompatible variants. Fortunately, there are already some very substantive efforts to develop, maintain, and support open-source software systems, with significant momentum. If those efforts can succeed in producing demonstrably robust systems, they will also provide an incentive for better commercial systems.

    We need techniques that augment the robustness of less robust components, public-key authentication, cryptographic integrity seals, good cryptography, trustworthy distribution paths, trustworthy descriptions of the provenance of individual components and who has modified them. We need detailed evaluations of components and the effects of their composition (with interesting opportunities for formal methods). Many problems must be overcome, including defenses against Trojan horses hidden in systems, compilers, evaluation tools, etc. -- especially when perpetrated by insiders. We need providers who give real support; warranties on systems today are mostly very weak. We need serious incentives including funding for robust open-source efforts. Despite all the challenges, the potential benefits of robust open-source software are worthy of considerable collaborative effort.
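
    To make the ``cryptographic integrity seal'' idea concrete, here is a minimal sketch, assuming the third-party Python ``cryptography'' package and a maintainer key whose public half is already trusted and distributed out of band; it illustrates the idea only, and does not describe any existing project's mechanism.

      # Sketch only; assumes the third-party "cryptography" package.
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
      from cryptography.exceptions import InvalidSignature
      import hashlib

      # The component's maintainer signs the digest of a release ...
      maintainer_key = Ed25519PrivateKey.generate()
      release = b"... bytes of the component's source archive ..."
      seal = maintainer_key.sign(hashlib.sha256(release).digest())

      # ... and anyone holding the published public key can check the seal
      # before building or installing the component.
      public_key = maintainer_key.public_key()
      try:
          public_key.verify(seal, hashlib.sha256(release).digest())
          print("integrity seal verified")
      except InvalidSignature:
          print("component altered or not signed by the expected key")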

    ========================================================

    Inside Risks 103, Comm. ACM 42, 1, Jan 1999

    Our Evolving Public Telephone Networks

    Fred B. Schneider and Steven M. Bellovin

    The public telephone network (PTN) in the U.S. is changing---partly in response to changes in technology and partly due to deregulation. Some changes are for the better: lower prices with more choices and services for consumers. But there are other consequences and, in some ways, PTN trustworthiness is eroding. Moreover, this erosion can have far-reaching consequences. Critical infrastructures and other networked information systems rely today on the PTN and will do so for the foreseeable future.

    Prior to the 1970's, most of the U.S. telephone network was run by one company, AT&T. AT&T built and operated a network with considerable reserve capacity and geographically diverse, redundant routings, often at the explicit request of the federal government. Many telephone companies compete in today's market, so cost pressures have become more pronounced. Reserve capacity and rarely-needed emergency systems are now sacrificed on the altar of cost. And new dependencies---hence, new vulnerabilities---are introduced because some services are being imported from other producers.

    Desire to attract and retain market share has led telephone companies to introduce new features and services. Some new functionality (such as voice menus within the PTN) relies on call-translation databases and programmable adjunct processors, which introduce new points of access and, therefore, new points of vulnerability. Other new functionality is intrinsically vulnerable. CallerID, for example, is increasingly used by PTN customers, even though the underlying telephone network is unable to provide such information with a high degree of assurance. Finally, new functionality leads to more-complex systems, which are liable to behave in unexpected and undesirable ways.

    You might expect that having many phone companies would increase the capacity and diversity of the PTN. It does, but not as much as one would hope. To lower their own capital costs, telephone companies lease circuits from each other. Now, a single errant backhoe can knock out service from several different companies. And there is no increase in diversity for the consumer who buys service from many providers. Furthermore, the explicit purchase of diverse routes is more difficult to orchestrate when different companies must cooperate.

    In addition, the need for the many phone companies to interoperate has itself increased PTN complexity. For example, competition for local phone service has necessitated creating databases (updated by many different telephone companies) that must be consulted in processing each call, to determine which local phone company serves that destination.

    The increased number of telephone companies along with an increased multiplexing of physical resources has other repercussions. The cross connects and multiplexors used to route calls depend on software running in operations support systems (OSSs). But information about OSSs is becoming less proprietary, since today virtually anybody can form a telephone company. The vulnerabilities of OSSs are thus accessible to ever larger numbers of attackers. Similarly, the SS7 network used for communication between central office switches was designed for a small, closed community of telephone companies; deregulation thus increases the opportunities for insider attacks (because anyone can become an insider by becoming a telephone company). Security by obscurity is not the solution: network components must be redesigned to provide more security in this new environment.

    To limit outages, telephone companies have turned to newer technologies. Synchronous Optical Network (SONET) rings, for example, allow calls to continue when a fiber is severed. But despite the increased robustness provided by SONET rings, using high-capacity fiber optic cables leads to greater concentrations of bandwidth over fewer paths, for economic reasons. Failure (or sabotage) of a single link is thus likely to disrupt service for many customers---particularly worrisome, because the single biggest cause of telephone outages is cable cuts.

    Today's telephone switches---crucial components of the PTN---are quite reliable. Indeed, a recent National Security Telecommunications Advisory Committee study found that procedural errors, hardware faults, and software bugs were roughly equal in magnitude as causes of switch outages. Reducing software failure to the level of hardware failures is an impressive achievement. But switch vendors are coming under considerable competitive pressure, and they, too, are striving to reduce costs and develop features more rapidly, which could make matters worse.

    Fred B. Schneider (Cornell University) and Steven M. Bellovin (AT&T Labs Research) served on the NRC Computer Science and Telecommunications Board committee that authored Trust in Cyberspace. Chapter 2 of that report (see www2.nas.edu/cstbweb/index.html) discusses the eroding trustworthiness of the PTN. See Comm. ACM 41, 11, 144, November 1998 for a summary of the report.

    ========================================================

    Inside Risks 102, CACM 41, 12, December 1998

    The Risks of Hubris

    Peter B. Ladkin

    Hubris is risky: a tautologous claim. But how to recognize it? Phaethon, the human child of Phoebus the sun god, fed up with being ridiculed, visited his father to prove his progeniture. Happy to see his son, Phoebus granted him one request. Phaethon chose to drive the sun-chariot. In Ted Hughes' vivid rendering of Ovid, Phoebus, aghast, warns him

    ... Be persuaded
    The danger of what you ask is infinite --
    To yourself, to the whole creation.
    ...
    You are my son, but mortal. No mortal
    Could hope to manage those reins.
    Not even the gods are allowed to touch them.

    He admonished Phaethon to ... avoid careening
    Over the whole five zones of heaven.

    The chariot set off, Phaethon lost control, scorched the earth and ruined the whole day.

    Was that vehicle safe? The sun continues to rise each day, so I guess in Phoebus's hands it is, and in Phaethon's it isn't. The two contrary answers give us a clue that the question was misplaced: safety cannot be a property of the vehicle alone. To proclaim the system `safe', we must include the driver, and the pathway travelled. Consider the Space Shuttle. Diane Vaughan pointed out in The Challenger Launch Decision that it flies with the aid of an extraordinary organization devised to reiterate the safety and readiness case in detail for each mission. Without this organization, few doubt there would be more failures. Safety involves human affairs as well as hardware and software.

    If NASA would be Phoebus, who would be Phaethon? Consider some opportunities.

    A car company boasts that their new product has more computational power than was needed to take Apollo to the moon. (Programmers of a different generation would be embarrassed by that admission.) We may infer that high performance, physical or digital, sells cars. Should crashworthiness, physical and digital?

    Safe flight is impossible in clouds or at night without reliable information on attitude, altitude, speed, and position. Commercial aircraft nowadays have electronic displays, yet these systems are not considered `safety-critical'. Should we have expected the recent reports of loss of one or both displays, including at least two accidents? This failure mode did not occur with non-electronic displays.

    In 1993, Airbus noted that the amount of airborne software in new aircraft doubled every two years (2MLOC for the A310, 1983; 4M for the A320, 1988; 10M for the A330/340, 1992). Has the ability to construct adequately safe software increased by similar exponential leaps? One method, extrapolation from the reliability of previous versions, does not apply: calculations show that testing or experience cannot increase one's confidence to the high level required. If not by this method, then how?
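
    The back-of-the-envelope version of that calculation (my sketch, using the standard exponential failure model rather than anything from the column): to claim a failure rate below r with confidence c, one needs roughly -ln(1 - c)/r hours of failure-free operation, which for the rates demanded of flight-critical software is unattainable by testing alone.

      import math

      def required_test_hours(rate_per_hour, confidence):
          # Failure-free hours needed, under an exponential failure model,
          # to claim the failure rate is below rate_per_hour with the
          # given confidence.
          return -math.log(1.0 - confidence) / rate_per_hour

      hours = required_test_hours(1e-9, 0.99)
      print(f"{hours:.2e} hours ~ {hours / 8760:.0f} years of failure-free testing")
      # Roughly 4.6e9 hours -- on the order of half a million calendar years.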

    If there's a deadly sin of safety-critical computing, Hubris must be one. But suppose we get away with it. What then? In Design Paradigms, Henry Petroski reports a study suggesting that the first failure of a new bridge type seems to occur some 30 years after its successful introduction. He offers thereby the second sin, Complacency. It is hard to resist suggesting a first axiom of safety-critical sinning --

    Hubris ∧ ¬Failure ↝ Complacency

    where ↝ is the temporal ``leadsto'' operator (traditionally depicted as a squiggly arrow). (Compare Vaughan's starker concept, ``normalization of deviance.'')

    Why might engineers used to modern logic look at such classical themes? Consider what happened to Phaethon. Jove, the lawmaker, acted:

    With a splitting crack of thunder he lifted a bolt,
    Poised it by his ear,
    Then drove the barbed flash point-blank into Phaethon.
    The explosion
    Snuffed the ball of flame
    As it blew the chariot to fragments. Phaethon
    Went spinning out of his life.

    Then as now, although more the thousand cuts than the thunderbolt. Pursuant to an accident, Boeing is involved in legal proceedings concerning, amongst other things, error messages displayed by its B757 on-board monitoring systems; Airbus is similarly involved in Japan concerning a specific design feature of its A300 autopilot. Whatever the merits of so proceeding, detailed technical design is coming under increasing legal scrutiny.

    But what should we have expected? Recall: safety involves human affairs, of which the law is an instrument. This much hasn't changed since Ovid. To imagine otherwise was, perhaps, pure Hubris.

    Peter Ladkin (ladkin@rvs.uni-bielefeld.de) is a professor at the University of Bielefeld, Germany, and a founder of Causalis Limited. Ted Hughes' Tales from Ovid is published by Faber and Faber.

    =========================================================

    Inside Risks 101, CACM 41, 11, November 1998

    Towards Trustworthy Networked Information Systems

    Fred B. Schneider

    When today's networked information systems (NISs) perform badly or don't work at all, lives, liberty, and property can be put at risk [1]. Interrupting service can threaten lives and property; destroying information or changing it improperly can disrupt the work of governments and corporations; disclosing secrets can embarrass people or harm organizations.

    For us---as individuals or a nation---to become dependent on NISs, we will want them to be trustworthy. That is, we will want them to be designed and implemented so that not only do they work but also we have a basis to believe that they will work, despite environmental disruption, human user and operator errors, and attacks by hostile parties. Design and implementation errors must be avoided, eliminated, or the system must somehow compensate for them.

    Today's NISs are not very trustworthy. A recent National Research Council CSTB study [2] investigated why and what can be done about it, observing:

    * Little is known about the primary causes of NIS outages today or about how that might change in the future. Moreover, few people are likely to understand an entire NIS much less have an opportunity to study several, and consequently there is remarkably poor understanding of what engineering practices actually contribute to NIS trustworthiness.

    * Available knowledge and technologies for improving trustworthiness are limited and not widely deployed. Creating a broader range of choices and more robust tools for building trustworthy NISs is essential.

    The study offers a detailed research agenda with hopes of advancing the current discussions about critical infrastructure protection from matters of policy, procedure, and consequences of vulnerabilities towards questions about the science and technology needed for implementing more-trustworthy NISs.

    Why is it so difficult to build a trustworthy NIS? Beyond well known (to RISKS readers) difficulties associated with building any large computing system, there are problems specifically associated with satisfying trustworthiness requirements. First, the transformation of informal characterizations of system-level trustworthiness requirements into precise requirements that can be imposed on system components is beyond the current state of the art. Second, employing ``separation of concerns'' and using only trustworthy components are not sufficient for building a trustworthy NIS---interconnections and interactions of components play a significant role in NIS trustworthiness.

    One might be tempted to employ ``separation of concerns'' and hope to treat each of the aspects of trustworthiness (e.g., security, reliability, ease of use) in isolation. But the aspects interact, and care must be taken to ensure that one is not satisfied at the expense of another. Replication of components, for example, can enhance reliability but may complicate the operation of the system (ease of use) and may increase exposure to attack (security) due to the larger number of sites and the vulnerabilities implicit in the protocols to coordinate them. Thus, research aimed at enhancing specific individual aspects of trustworthiness courts irrelevance. And research that is bound by existing subfield demarcations can actually contribute more to the trustworthiness problem than to its solution.

    Economics dictates the use of commercial off-the-shelf (COTS) components wherever possible in building an NIS, which means that system developers have neither control nor detailed information about many of their system's components. Economics also increasingly dictates the use of system components whose functionality can be changed remotely while the system is running. These trends create needs for new science and technology. For example, the substantial COTS makeup of an NIS, the use of extensible components, the expectation of growth by accretion, and the likely absence of centralized control, trust, or authority, demand a new look at security: risk mitigation rather than risk avoidance, add-on technologies and defense in depth, and relocation of vulnerabilities rather than their elimination.

    Today's systems could surely be improved by using what is already known. But, according to the CSTB study, doing only that will not be enough. We lack the necessary science and technology base for building NISs that are sufficiently trustworthy for controlling critical infrastructures. Therefore, the message of the CSTB study is a research agenda and technical justifications for studying those topics.

    1. P.G. Neumann, Protecting the Infrastructures, Comm. ACM 41, 1 (January 1998), 128 (Inside Risks), gives a summary of the President's Commission on Critical Infrastructure Protection (PCCIP) report, which discusses the dependence of communication, finance, energy distribution, and transportation on NISs.

    2. Trust in Cyberspace, the final report for the Information Systems Trustworthiness study by the Computer Science and Telecommunications Board (CSTB) of the National Research Council, can be accessed at http://www2.nas.edu/cstbweb/index.html.

    Cornell University Professor Fred B. Schneider chaired the CSTB study discussed in this column.

    =====================================================

    Inside Risks 100, CACM 41, 10, October 1998

    Risks of E-Education

    Peter G. Neumann

    Some universities and other institutions are offering or contemplating courses to be taken remotely via Internet, including a few with degree programs. There are many potential benefits: teachers can reuse collaboratively prepared course materials; students can schedule their studies at their own convenience; employees can participate in selected subunits for refreshers; and society might benefit from an overall increase in literacy -- and perhaps even computer literacy. On-line education inherits many of the advantages and disadvantages of textbooks and conventional teaching, but also introduces some of its own.

    People involved in course preparation quickly discover that creating high-quality teaching materials is labor-intensive and very challenging. To be successful, on-line instruction requires even more organization and forethought in creating courses than conventional teaching does, because there may be only limited interactions with students, and it is difficult to anticipate all possible options. Thoughtful planning and carefully debugged instructions are essential to make the experience more fulfilling for the students. Furthermore, for many kinds of courses, on-line materials must be updated regularly to remain timely.

    There are major concerns regarding who owns the materials (some universities claim proprietary rights to all multimedia courseware), with high likelihood that materials will be purloined or emasculated. Some altruism is desirable in exactly the same sense that open-source software has become such an important driving force. Besides, peer review and ongoing collaborations among instructors could lead to continued improvement of public-domain course materials.

    Administrators might seek cost-saving measures in the common quest for easy answers, less-qualified instructors, mammoth class sizes, and teaching materials prepared elsewhere.

    Loss of interactions among students and instructors is a serious potential risk, especially if the instructor does not realize that students are not grasping what is being taught. This can be partially countered by including some live lectures or videoteleconferenced lectures, and requiring instructors and teaching assistants to be accessible on a regular basis, at least via e-mail. Multicast course communications and judicious use of Web sites may be appropriate for dealing with an entire class. Inter-student contacts can be aided by chat rooms, with instructors hopefully trying to keep the discussions on target. Also, students can be required to work in pairs or teams on projects whose success is more or less self-evident. However, reliability and security weaknesses in the infrastructure suggest that students will find lots of excuses -- ``The Internet ate my e-mail'' being the new variant of the old ``My dog ate my homework'' routine.

    E-education may be better for older or more disciplined students, and for students who expect more than just being entertained. It is useful for stressing fundamentals as well as helping students gain real skills. But only certain types of courses are suitable for on-line offerings -- unfortunately, particularly those courses that emphasize memorization and regurgitation, or that can be easily graded mechanically by evaluation software. Such courses are highly susceptible to cheating, which can be expected to occur rampantly whenever grades are the primary goal and are used as a primary determinant for jobs and promotions. Cheating tends to penalize only the honest students. It also seriously complicates the challenge of meaningful professional certification based primarily on academic records.

    Society may find that electronic teaching loses many of the deeper advantages of traditional universities -- where smaller classrooms are generally more effective, and where considerable learning typically takes place outside of classrooms. But e-education may also force radical transformations on conventional classrooms. If we are to make the most out of the challenges, the advice of Brynjolfsson and Hitt (Beyond the Productivity Paradox, CACM 41, 8, 11-12, August 1998) would suggest that new approaches to education will be required, with a ``painful and time consuming period of reengineering, restructuring and organization redesign...''

    There is still a lack of experience and critical evaluation of the benefits and risks of such techniques. For example, does electronic education scale well to large numbers of students in other than rote-learning settings? Can a strong support staff compensate for many of the potential risks? On the whole, there are some significant potential benefits, for certain types of courses. I hope that some of the universities and other institutions already pursuing remote electronic education will evaluate their progress on the basis of actual student experiences (rather than just the perceived benefits to the instructors), and share the results openly. Until then, I vastly prefer in-person teaching coupled with students who are self-motivated.

    --------
    Members of the ACM Committee on Computers and Public Policy and the Computing Research Association Snowbird workshop provided valuable input to this column. (As we go to press, I just saw a relevant article by R.B. Ginsberg and K.R. Foster, ``The Wired Classroom,'' IEEE Spectrum, 34, 8, 44--51, August 1998.)

    =======================================================

    Inside Risks 99, CACM 41, 9, September 1998

    Y2K Update

    Peter G. Neumann

    Somewhere in the wide spectrum from doomsday hype to total disdain lie the realities of the Year-2000 problem. Some computer systems and infrastructures will be OK, but others could have major impact on our lives. We won't know until it happens. At any rate, here is a summary of where we stand with 16 months left.

    Many departments and agencies of the U.S. Government are lagging badly in their efforts to fix their critical computers. The Departments of Transportation, Defense, State, Energy, and Health and Human Services are particularly conspicuous at the bottom of Congressman Stephen Horn's report card. The Social Security Administration seems to be doing better -- although its checks are issued by the Treasury Department, whose compliance efforts Horn labelled ``dismal.''

    The critical national infrastructures discussed in our January and June 1998 columns are increasingly dependent on information systems and the Internet. Public utilities are of concern, particularly among smaller companies. Aviation is potentially at risk, with its archaic air-traffic control systems. Railway transportation is also at risk. There are predictions that the U.S. railroad system will fail; nationwide control is now highly centralized, and manual backup systems for communications, switching and power have all been discarded. Financial systems are reportedly in better shape -- perhaps because the risks are more tangible.

    Many smaller corporations are slow in responding, hoping that someone else will take care of the problem. Developers of many application software packages and indeed some operating systems are also slow. Replacing old legacy systems with new systems is no guarantee, as some newer systems are also noncompliant. In addition, although some systems may survive 1 Jan 2000, they may fail on 29 Feb 2000 or 1 Mar 2000 or 1 Jan 2001, or some other date. Also, even if a system tests out perfectly when dates are advanced to 1 Jan 2000, there are risks that it will not work in conjunction with other systems when that date actually arrives. Even more insidious, some systems that tested successfully with Y2K-crossing dates subsequently collapsed when the dates were set back to their correct values, because of the backward discontinuity!
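
    The 29 Feb 2000 trap is worth a concrete illustration. The year 2000 is a leap year (it is divisible by 400), but a widely used shortcut -- treating every century year as a non-leap year -- gets it wrong. Here is a minimal sketch of the two rules (my illustration, in Python, not code from any system discussed here):

        def is_leap(year: int) -> bool:
            """Full Gregorian rule: every 4th year, except centuries, except every 400th year."""
            return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

        def is_leap_naive(year: int) -> bool:
            """Common buggy shortcut that treats all century years as non-leap."""
            return year % 4 == 0 and year % 100 != 0

        assert is_leap(2000) and not is_leap_naive(2000)        # the shortcut misses 29 Feb 2000
        assert not is_leap(1900) and not is_leap_naive(1900)    # both rules agree on 1900

    A system built on the shortcut can sail through 1 Jan 2000 and still reject 29 Feb 2000, and then be off by a day in day-of-year calculations thereafter.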

    Widely heard estimates put the cost of analysis, prevention, and repair at more than one trillion dollars. Other estimates suggest that the legal costs could reach the same rather astounding level -- perhaps merely reflecting the extent to which we have become a litigious society. There is a risk that some of the fly-by-night Y2K companies will pack up their tents and vanish immediately after New Year's Eve 1999, to avoid lawsuits. There are also some efforts to put caps on liability, in some cases as an incentive to share information.

    Although a few hucksters are hawking quick fixes, there are in general no easy answers. There are also very serious risks to national, corporate, and personal well-being associated with letting other people fix your software -- with rampant opportunities for Trojan horses, sloppy fixes, and theft of proprietary code. Considerable Y2K remediation work is being done abroad.

    Ultimately, Y2K is an international problem with a particularly nonnegotiable deadline and ever increasing interdependence on unpredictable entities. Reports from many other countries are not encouraging. Overall, any nation or organization that is not aggressively pursuing its Y2K preparedness is potentially at risk. Also, where pirated software abounds (as in China and Russia), official fixes may not be accessible.

    One of the strangest risks of all is that even if all of the anticipatory preventive measures were to work perfectly beyond everyone's expectations, engendering no adverse Y2K effects, the media hype and general paranoia could nevertheless result in massive panic and hoarding, including banks running out of cash reserves.

    The Y2K problem is the result of bad software engineering practice and a serious lack of foresight. Y2K has been largely ignored until recently, despite having been recognized long ago: the 1965 Multics system design used a 71-bit microsecond calendar-clock. (Java does even better, running until the year 292271023.) Innovative solutions often stay out of the mainstream unless they are performance related. For example, Multics contributed some major advances that would be timely today in other systems, although Ken Thompson carried some of those concepts into Unix.
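
    For perspective on those clock ranges, a back-of-the-envelope calculation shows why such designs last essentially forever. The sketch below assumes (the column does not say so) that Java keeps time as a signed 64-bit count of milliseconds since 1 January 1970:

        ms_range = 2**63                                    # largest representable offset, in milliseconds
        years = ms_range / (1000 * 60 * 60 * 24 * 365.25)   # convert milliseconds to years
        print(f"roughly {years / 1e6:.0f} million years beyond 1970")   # ~292 million years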

    Effects of noncompliant systems have the potential of propagating to other systems, as we have seen here before. Local testing is not adequate, and pervasive testing is often impossible. There is little room for complacency in the remaining months. Oddly, the Y2K problem is relatively simple compared to the ubiquitous security and software engineering problems -- which seem less pressing because there is no fixed doomsdate. Perhaps when 2000 has passed, we will be able to focus on the deeper problems.

    Peter Neumann (http://www.csl.sri.com/neumann/) chairs the ACM Committee on Computers and Public Policy and moderates the on-line Risks Forum.

    =====================================================

    Inside Risks 98, CACM 41, 8, August 1998

    Computer Science and Software Engineering:
    Filing for Divorce?

    Peter J. Denning

    Recent proposals to license software engineers have strained the already uneasy relationship between computer scientists and software engineers. Computer scientists tend to believe that certification is unnecessary and that licensing would be harmful because it would lock in minimal standards in a changing field of rising standards. Software engineers tend to believe that certification is valuable and licensing is inevitable; they want significant changes in the curriculum for professional software engineers. Frustrated, a growing number of software engineers want to split off from computer science and form their own academic departments and degree programs. Noting other dualities such as chemical engineering and chemistry, they ask, why not software engineering and computer science? [1] Must software engineers divorce computer scientists to achieve this?

    No such rift existed in the 1940s and 1950s, when electrical engineers and mathematicians worked cheek by jowl to build the first computers. In those days, most of the mathematicians were concerned with correct execution of algorithms in application domains. A few were concerned with models to define precisely the design principles and to forecast system behavior.

    By the 1960s, computer engineers and programmers were ready for marriage, which they consummated and called computer science. But it was not an easy union. Computer scientists, who sought respect from traditional scientists and engineers for their discipline, loathed a lack of rigor in application programming and feared a software crisis. Professional programmers found little in computer science to help them make practical software dependable and easy to use. Software engineers emerged as the peacemakers, responding to the needs of professional programming by adapting computer science principles and engineering design practice to the construction of software systems.

    But the software engineers and computer scientists did not separate or divorce. They needed each other. Technologies and applications were changing too fast. Unless they communicated and worked together, they could make no progress at all. Their willingness to experiment helped them bridge a communication gap: Software engineers validated new programming theories and computer scientists validated new design principles.

    Ah, but that was a long time ago. Hasn't the field matured enough to permit the two sides to follow separate paths successfully? I think not: the pace of technological change has accelerated. Even in the traditional technologies such as CPU, memory, networks, graphics, multimedia, and speech, capacity seems to double approximately every 18 months while costs decline. Each doubling opens new markets and applications. New fields form at interdisciplinary boundaries -- examples:

    * New computing paradigms with biology and physics including DNA, analog silicon, nanodevices, organic devices, and quantum devices.

    * Internet computations mobilizing hundreds of thousands of computers.

    * Neuroscience, cognitive science, psychology, and brain models.

    * Large scale computational models for cosmic structure, ocean movements, global climate, long-range weather, materials properties, flying aircraft, structural analysis, and economics.

    * New theories of physical phenomena by ``mining'' patterns from very large (multiple) datasets.

    It is even more important today than in the past to keep open the lines of communication among computer scientists, software engineers, and applications practitioners. Even if they do not like each other, they can work together from a common interest in innovation, progress, and solution of major problems. The practices of experimentation are crucial in the communication process. A recent study suggests that such practices could be significantly improved: Zelkowitz and Wallace found that fewer than 20% of 600 papers advocating new software technologies offered any kind of credible experimental evidence in support of their claims [2]. (See also [3].)

    Separation between theory and engineering has succeeded in other disciplines because they have matured to the point where they communicate well among their science, engineering, and applications branches. A similar separation would be a disaster for computer science. Spinning off software engineers would cause communication between engineers, theorists, and application specialists to stop. Communication, not divorce, is the answer.

    1. Parnas, D.L., Software Engineering: An Unconsummated Marriage. Communications of the ACM 40, 9 (September 1997), 128 (Inside Risks).
    2. Zelkowitz, M., and D. Wallace. Experimental models for validating technology. IEEE Computer, May 1998.
    3. Tichy, W. Should Computer Scientists Experiment More? IEEE Computer, May 1998.

    Peter Denning teaches computer science and helps engineers become better designers. He is a former President of ACM and recently chaired the Publications Board while it developed the ACM digital library. (Computer Science Department, 485, George Mason University, Fairfax, VA 22030; 703-993-1525; pjd@gmu.edu.)

    ============================================================

    Inside Risks 97, CACM 41, 7, July 1998

    Laptops in Congress?

    Peter G. Neumann

    Certain U.S. Senators have strongly resisted efforts to allow laptops on the Senate floor. Whereas a few Senators and Representatives have a good understanding of computer-communication technology, many others do not. This month, we examine some of the benefits and risks that might result from the presence of laptops on the Senate and House floors and in hearing rooms. In the following enumeration, ``+'' denotes potential advantages, ``--'' denotes possible disadvantages or risks, and ``='' denotes situations whose relative merits depend on various factors. Benefits and risks are both more or less cumulative as we progress from stand-alone to networked laptops.

    Isolated individual laptops:
    + Note-taking that can be recorded and later turned into memos or even legislation
    + Immediate access to proposed and past legislation
    -- Risks of overdependence on laptops (part of a generic risk of technology)

    Locally networked laptops:
    + Immediate nonintrusive prompting by staffers
    + Immediate access to proposed wording changes
    + Ability for e-mail with traveling colleagues
    + Remote countdowns to impending votes
    + Ability to vote remotely from a hearing room (a real convenience, but apparently anathema to Senators)
    + Greater experience with the benefits and risks of on-line technology; risk awareness might inspire Congress to realize the importance of good nonsubvertible computer-communication security and cryptography (soundly implemented, without key-escrow trapdoors).
    = Discovering that Windows (95 or 98) isn't all that's advertised (at the risk of legislation on the structure of operating systems and networks!)
    -- Penetrations by reporters, lobbyists, and others, obtaining private information (as in the recent House cell-phone recording and Secret-Service pager interceptions), altering data, etc.

    Internet access:
    + Ability to communicate by e-mail with colleagues who are traveling
    + Ability to browse the World Wide Web for timely information (although there are risks of being confused by misinformation)
    + Rapid information dissemination
    + Ability to vote remotely even when travelling (requiring an increase in nonsubvertible computer-communication security), and thereby not being chastised at election time for a poor voting record. (Can you imagine there being no excuse for not voting other than the desire to avoid being on the record?)
    = Possibility of receiving unsolicited e-mail spams and denial-of-service attacks -- or improving security!
    -- Possibility of being influenced by lobbyists (not really a laptop-specific risk)!

    Although there are other potential benefits and risks, this summary considers some of the primary issues. Despite any technopessimism that Inside Risks readers may have developed over the past 8 years, I believe that many of the risks can be avoided -- including those requiring the avoidance of human frailty on the part of Senators and Representatives themselves. (Most of these observations seem to apply to other democracies as well.)

    In conclusion, the benefits of laptops may in the long run outweigh the risks and other disadvantages. A deeper Congressional awareness of the technological and social risks of our technology would in itself be enormously beneficial to the nation as a whole. Better awareness that our infrastructures are not adequately secure, reliable, and survivable (Inside Risks, May 1992) might also result in greater emphasis on increasing their robustness, a need that has been recognized by the President's Commission on Critical Infrastructure Protection (Inside Risks, January 1998). Most universities (often with government encouragement) now require computer literacy of all students, which is vital to a technically mobile workforce; perhaps this should also be expected of Congresspersons!

    Whereas cellular phones and pagers are in wide use (as are PC-controlled teleprompters), laptops may be less distracting -- because they do not necessarily cry out for instant responses. If nothing else, they could help Congressional staffers. In that the Senate is very tradition-bound, it may be a long time until we see laptops on the Senate floor. However, an incremental strategy might be appropriate: begin with laptops only for those who wish to keep notes and access files, then expand access to private local nets of individual Congresspersons and their staffers, then migrate to Senate and House intranets, and then perhaps to a closed Congressnet.

    [See http://catless.ncl.ac.uk/Risks/ for the archives of the ACM Risks Forum (risks-request@CSL.sri.com). Also, see Neumann's Senate and House testimonies (http://www.csl.sri.com/neumann/).]

    ============================================================

    Inside Risks 96, CACM 41, 6, June 1998

    Infrastructure Risk Reduction

    Harold W. Lawson

    The FBI and US Attorney General Janet Reno recently announced plans to establish a National Infrastructure Protection Center. Critical processing and communication structures are to be protected against hackers and criminals. Given today's IT infrastructure complexity, I am sure that attaining reasonable effectiveness will require an enormous personnel and equipment investment.

    In addition to dealing with malicious acts, infrastructure reliability and availability are vital. Alan Greenspan's recent disclosure of a New York bank computer failure a few years ago makes me shudder. The Federal Reserve bailed the bank out with a loan of $20 billion. Greenspan admitted that if the loan could not have been supplied, or if other banks simultaneously had the same problem, the entire banking system could have become unstable. Such ``information outages'' are probably common but, as in this case, are covered up to avoid public panic.

    How far away are we from major catastrophes? Will it be Y2K? What would happen if, due to outages, international companies start defaulting on debt payments resulting in business failures? Is there any protection from the resulting mushrooming effect? International information outages could make power and telecommunication outages seem like small inconveniences.

    These and many other related risk questions lead me to conclude that a concerted effort, aimed at infrastructure risk, must rise to the very top of national and international political and commercial agendas.

    While there are many sources of risk, there is an undeniable relationship between risk and complexity. Thus, a major part of risk mitigation must be aimed at reducing complexity.

    Today's computer-communication-based system structures are laden with significant unnecessary complexity. This unnecessary complexity is partially, but significantly, due to the mapping of application functions through various levels of languages and middlewares onto poor or inappropriate platforms of system software and hardware.

    Mainstream microprocessors require large quantities of complex software to be useful. This is not a new situation. Even the earlier mainstream IBM 360/370 series suffered in this regard. There is a significant semantic gap between useful higher levels of problem solving (via programming languages) and the instruction repertoires of these machines.

    That current RISC and CISC processors are poor hosts for higher-level languages perpetuates the motivation to widely deploy lower levels of programming, including C and C++. This adds unnecessary complexity, the cost and risk of which are borne many times over around the world in developing and especially in maintaining the growing mountain of software.

    In my opinion, complexity and risk reduction must focus on restructuring of the hardware and system software infrastructure components. Restructuring must address programming languages, suitable system software functions, and, most importantly, well defined (verifiable) execution machines for the languages and functions. Further, robust security mechanisms must be integrated into the infrastructure backbone. The restructuring must result in publicly available "standards" that are strictly enforced via independent certification agencies.

    The vital IT infrastructure cannot continue to be based upon caveat emptor (buyer-beware) products. Enforced standards do not eliminate the competitive nature of supplying infrastructure components; nor do they hinder creativity in introducing a virtually unlimited number of value-added products. Standards increase the market potential for good products. For safety-critical computer-based systems in areas such as nuclear energy, aviation, and medical instruments, certification against standards is applied. However, even for these critical embedded systems, there is a pressing need for tougher standards as well as complexity reduction via appropriate architectural structuring of hardware and system software.

    Given that today's suppliers of critical infrastructure components disclaim all product responsibility, an insurance-related enforcement solution may be appropriate, analogous to the Underwriters Laboratories certification of electrical products. Before infrastructure products are put on the market, they must be certified against standards in order to limit (but not eliminate) supplier product responsibility and instill public confidence.

    It is time to stop quibbling over trivial issues such as Internet browsers. When catastrophes occur, browsers will seem like small potatoes. Today, we can do fantastic things with electronic circuitry. We must tame this potential and do the right things aimed at reducing infrastructure risk. It is time to take the bull by the horns and find a political and commercial path leading to infrastructure restructuring and enforceable standards!!!

    Harold W. (Bud) Lawson (bud@lawson.se) is an independent consultant in Stockholm. A Fellow of ACM and IEEE, he has contributed to several pioneering hardware, software, and application related endeavors.

    ======================================================================

    Inside Risks 95, CACM 41, 5, May 1998

    In Search of Academic Integrity

    Rebecca Mercuri

    As the use of computers in scholastic disciplines has grown and matured, so have many related issues involving academic integrity. Although the now rather commonplace risks of security breaches (such as falsification of student records, and access to examination or assignment files) are real and still occur, this type of violation has become a small part of an insidious spectrum of creative computer-based student offenses. Academic institutions have responded to this threat by developing integrity policies that typically use punitive methods to discourage cheating, plagiarism, and other forms of misconduct.

    For example, in December 1997, the Testing Center staff at Mercer County Community College (NJ) discovered that eight calculus students had been issued variants of ``a multi-version multiple-choice test, but submitted responses that were appropriate for a completely different set of questions. By falsely coding the test version, they triggered computer scoring of their responses as if they had been given a version of the test which they in fact had never been given.'' [1] The students were suspended, and the Center restructured its system to thwart this sort of deception. This particular incident is noteworthy because it demonstrates the technological savvy used to circumvent the grading in a course whose tuition was a mere $300, and whose content was essential for further studies. Tangentially, it also provides an illustration of the vulnerability of multi-version mark-sense tallying, a system used in an ever-increasing number of municipalities for voting, a much higher-stakes application. [2]
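
    The mechanics of that deception are easy to see in code. The hypothetical sketch below (all names, keys, and records are invented for illustration) shows a grading routine that trusts the version code bubbled in by the student, and one way a restructured system might cross-check it:

        ANSWER_KEYS = {"A": ["b", "c", "a", "d"], "B": ["d", "a", "c", "b"]}  # per-version answer keys
        ISSUED_VERSION = {"student42": "A"}   # version actually handed out, per Testing Center records

        def score_trusting(coded_version, answers):
            # Vulnerable: scores against whatever version the student claims to have taken.
            key = ANSWER_KEYS[coded_version]
            return sum(a == k for a, k in zip(answers, key))

        def score_checked(student, coded_version, answers):
            # Remediation: reject or flag any sheet whose coded version does not match
            # the version the student was actually issued.
            if coded_version != ISSUED_VERSION[student]:
                raise ValueError("coded test version does not match issued version")
            return score_trusting(coded_version, answers)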

    The proliferation of affordable computer systems is both a boon and a headache for educators. The great wealth of information available via Internet and World Wide Web is a tremendous asset in course preparation and presentation, but its downside is that teachers need to stay one screen dump ahead of their students in order to issue projects requiring original solutions. Faculty members bemoan the accessibility of term-paper banks, where any of thousands of boilerplate essays can be downloaded for a small fee. For the more affluent (or desperate) student, there are ``writers'' who will provide a custom work that conforms to the most stringent of professorial requirements. Although assignment fraud has always existed, it is now easier and more tempting. In-class writing projects can establish some level of control, but with networked lab rooms, individual contributions become difficult to monitor -- as soon as someone solves a problem, it quickly propagates to the rest of the class. The discreetly passed slip of paper under the desk is now a broadcast e-mail message or part of a password-concealed web site!

    Creative solutions lead to relevance in learning. As a computer-science educator, I have begun to phase out the ``write a heap sort'' and other traditional coding assignments, because so many instances of their solutions exist. Using the Web, these projects have been transformed into ``download various heap sort programs and analyze their code'' which encourages individual exploration of reusable libraries. Perhaps it will not be so long before the ACM Programming Contest contains a component where contestants ``start their search engines'' to ferret out adaptable modules instead of just hacking programs from scratch!

    The motivation of assignments and exams should be the reinforcement of comprehension of the course material and assessment of student progress. Yet, the best way to know what the students know is to know the students, a task made more complicated as classes grow in size and expand to remote learning sites. Ben Shneiderman's Relate-Create-Donate philosophy urges a move to collaborative and ambitious team projects, solving service-oriented problems, with results subsequently publicized on the Web, in order to enhance enthusiasm and understanding. [3] Examination and homework collusion is actually a form of sharing -- albeit with erroneous goals. Perhaps it is now time to promote sharing, at least in some components of our coursework, by finding new ways to encourage group efforts, and monitoring such activities to ensure that learning is achieved by all of the students. This is a challenging task, but one whose implementation would be well rewarded.

    1. From a publication issued by the Vice President of Academic and Student Affairs, Mercer County Community College, February 4, 1998.

    2. See earlier articles by Rebecca Mercuri in CACM 35:11 (November 1992) and 36:11 (November 1993).

    3. Ben Shneiderman, Symposium Luncheon Lecture (preprint), SIGCSE '98, Atlanta.

    Rebecca Mercuri (http://www.mcs.drexel.edu/~rmercuri, mercuri@acm.org) is a full-time member of the Mathematics and Computer Science faculty at Drexel University and also appears as a visiting Artist Teacher in the Music Department at Mercer County Community College.

    ======================================================================

    Inside Risks 94, CACM 41, 4, Apr 1998

    On Concurrent Programming

    Fred B. Schneider

    Concurrent programs are notoriously difficult to get right. This is as true today as it was 30 years ago. But 30 years ago, concurrent programs would be found only in the bowels of operating systems, and these were built by specialists. The risks were carefully controlled. Today, concurrent programs are everywhere and are being built by relatively inexperienced programmers:

    * All sorts of application programmers write concurrent programs. A PC freezing when you pull down a menu or click on an icon is likely to be caused by a concurrent-programming bug in the application.

    * Knowledge of operating system routines is no longer required to write a concurrent program. Java threads enable programmers to write concurrent programs, whether for spiffy animation on web pages or for applications that manage multiple activities.

    This column discusses simple rules that can go a long way toward eliminating bugs and reducing risks associated with concurrent programs.

    A concurrent program consists of a collection of sequential processes whose execution is interleaved; the interleaving is the result of choices made by a scheduler and is not under programmer control. Lots of execution interleavings are possible, which makes exhaustive testing of all but trivial concurrent programs infeasible.

    To make matters worse, functional specifications for concurrent programs often concern intermediate steps of the computation. For example, consider a word-processing program with two processes: one that formats pages and passes them through a queue to the second process, which controls a printer. The functional specification might stipulate that the page-formatter process never deposit a page image into a queue slot that is full and that the printer-control process never retrieve the contents of an empty or partially filled queue slot.

    If contemplating the individual execution interleavings of a concurrent program is infeasible, then we must seek methods that allow all executions to be analyzed together. We do have on hand a succinct description of the entire set of executions: the program text itself. Thus, analysis methods that work directly on the program text (rather than on the executions it encodes) have the potential to circumvent problems that limit the effectiveness of testing.

    For example, here is a rule for showing that some ``bad thing" doesn't happen during execution:

    Identify a relation between the program variables that is true initially and is left true by each action of the program. Show that this relation implies the ``bad thing'' is impossible.

    Thus, to show that the printer-control process in the above example never reads the contents of a partially-filled queue slot (a ``bad thing"), we might see that the shared queue is implemented in terms of two variables:

    * NextFull points to the queue slot that has been full the longest and is the one that the printer-control process will next read.

    * FirstEmpty points to the queue slot that has been empty the longest and is the one where the page-formatter process will next deposit a page image.

    We would then establish that NextFull ~= FirstEmpty is true initially and that no action of either process falsifies it. And, from the variable definitions, we would note that NextFull ~= FirstEmpty implies that the printer-control process reads the contents of a different queue slot than the page-formatter process writes, so the ``bad thing" cannot occur.
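
    The same reasoning can be made concrete in code. The sketch below is my illustration, not the column's program; instead of the NextFull ~= FirstEmpty relation in the text, it checks a closely related invariant based on a count of full slots, asserted after every action of either process:

        import threading

        N = 4  # number of queue slots

        class PageQueue:
            def __init__(self):
                self.slots = [None] * N
                self.next_full = 0     # slot the printer-control process will read next
                self.first_empty = 0   # slot the page-formatter process will write next
                self.count = 0         # number of currently full slots
                self.cond = threading.Condition()
                self._invariant()

            def _invariant(self):
                # Relation that is true initially and preserved by every action.
                assert 0 <= self.count <= N
                assert (self.first_empty - self.next_full) % N == self.count % N

            def put_page(self, page):              # page-formatter process
                with self.cond:
                    while self.count == N:         # never deposit into a full queue
                        self.cond.wait()
                    self.slots[self.first_empty] = page
                    self.first_empty = (self.first_empty + 1) % N
                    self.count += 1
                    self._invariant()
                    self.cond.notify_all()

            def get_page(self):                    # printer-control process
                with self.cond:
                    while self.count == 0:         # never read an empty or partially filled slot
                        self.cond.wait()
                    page = self.slots[self.next_full]
                    self.next_full = (self.next_full + 1) % N
                    self.count -= 1
                    self._invariant()
                    self.cond.notify_all()
                    return page

    Because the guards and counter updates all occur under one lock, every interleaving preserves the asserted relation -- exactly the style of argument the rule calls for, with no enumeration of interleavings.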

    It turns out that all functional specifications for concurrent programs can be partitioned into ``bad things'' that must not happen and ``good things'' that must happen. Thus, a rule for such ``good things'' will complete the picture. To show that some ``good thing'' does happen during execution:

    Identify an expression involving the program variables that when equal to some minimal value implies that the ``good thing'' has happened. Show that this expression
    (a) is decreased by some program actions that must eventually run, and
    (b) is not increased by any other program action.

    Note that our rules for ``bad things'' and ``good things'' do not require checking individual process interleavings. They require only effort proportional to the size of the program being analyzed. Even the size of a large program need not be an impediment---large concurrent programs are often just small algorithms in disguise. Such small concurrent algorithms can be programmed and analyzed; we build a model and analyze it to gain insight about the full-scale artifact.

    Writing concurrent programs is indeed difficult, but there exist mental tools that can help the practicing programmer. The trick is to abandon the habit of thinking about individual execution interleavings.

    Cornell University Professor Fred B. Schneider's textbook On Concurrent Programming (Springer-Verlag, 1997) discusses how assertional reasoning can be used in the analysis and development of concurrent programs.

    ======================================================================

    Inside Risks 93, CACM 41, 3, Mar 1998

    Are Computers Addictive?

    Peter G. Neumann

    Last month we considered some of the risks associated with Internet gambling. While noting that gambling is addictive, and that the Internet can compound the problems therewith, we left implicit the notion that computer use itself might have addictive characteristics. This month we consider that notion further, although we consider primarily compulsive behavior due to psychological and environmental causes rather than pharmacological and physiological addiction.

    To be addicted to something typically means that you have habitually, obsessively, and perhaps unconsciously surrendered yourself to it. In addition to Internet gambling, activities that can lend themselves to addictive or compulsive behavior include playing computer games, being a junkie of unmoderated newsgroups and chat rooms, surfing the Web, browsing for cool tools, cracking system security for amusement, and perhaps even programming itself -- which seems to inspire compulsive behavior in certain individuals.

    We are immediately confronted with the question as to how computers make these problems any different from our normal (uncomputerized) lives. The customary answer seems to be that computers intensify and depersonalize whatever activity is being done, and enable it to be done remotely, more expeditiously, less expensively, and perhaps without identification, accountability, or answerability. Is there more to it than that?

    The effects of compulsive computer-related behavior can involve many risks to individuals and to society, including diminution of social and intellectual skills, loss of motivation for more constructive activities, loss of jobs and livelihood, and so on. A reasonable sense of world reality can be lost through immersion in virtual reality. Similarly, a sense of time reality can be lost through computer access that is totally encompassing and uninterrupted by external events.

    Computerized games have become a very significant part of the lives of many youths today. Although personal-computer games have long been popular, multiuser dungeons (MUDs) and other competitive or collaborative games have emerged. These are perhaps even more addictive than solitary games. Despite their seemingly increased interactions with other people, they may serve as a substitute for meaningful interpersonal communication. This detachment can be amplified further when the other persons involved are anonymous or pseudonymous, and become abstractions rather than real people.

    Chat rooms, newsgroups, and e-mail also educe compulsive behavior. In addition, they can be sources of rampant misinformation, disseminated around the Net with remarkable ease and economy. Compulsive novices seem particularly vulnerable to believing what they read and spreading it further; consequently, they may be likely targets for frauds and scams.

    ``Hackers'' have stereotypically been associated with compulsive behavior -- such as chronically bad eating habits, generally antisocial manners, and in some cases habitual system penetrations. Even the very best programmers may have a tendency toward total absorption in writing and debugging code. There's something about the challenge of pitting yourself against the computer system that is very compelling.

    Among the extreme cases of pathological Internet use studied by Kimberly Young (Internet Addiction: The Emergence of a New Clinical Disorder, American Psychological Association, Chicago, August 14, 1997), most involved people without permanent jobs and newbies (rather than experienced computer folks), spending an average of 38 hours a week in cyber-addictive behavior. Young estimates that 10% of Web users qualify as addicts, which may be either an astounding factoid or an extreme use of the term ``addict''.

    So, do we need to do anything about it, and if so, what? Treatments for addictions usually involve total abstinence rather than partial withdrawals -- although some people are psychologically able to live under regimens of moderation. Cybertherapy is apparently booming, with many Internet addicts ironically turning to Internet counseling sites. Parental oversight of minors and employer supervision of employees are often appropriate. However, there are risks of overreacting, such as trying to block Internet sites that offer a preponderance of addicting opportunities (once again, there are risks in seeking technological solutions to nontechnological problems), or legal attempts to outlaw such sites altogether. Whether we are off-line or on-line, we all need to have real lives beyond computers. Achieving that rests on our educational system, our childhood environment, and our workplaces -- but ultimately on ourselves and our associations with other people.

    NOTE: Jim Horning suggests you look at Mihaly Csikszentmihalyi's ``Flow: The Psychology of Optimal Experience'' (Harper-Collins, 1991), which treats programming as flow and resonates with many of the ideas here. Thanks to Jon Swartz of the San Francisco Chronicle for the pointer to Young's paper, noted in his article on August 15, 1997.

    ======================================================================

    Inside Risks 92, CACM 41, 2, Feb 1998

    Internet Gambling

    Peter G. Neumann

    Internet gambling is evidently increasing steadily, with many new on-line gambling houses operating from countries having little or no regulation. Attempts to ban or regulate it are likely to inspire more foreign establishments, including sites outside of territorial waters. Revenues from Internet gambling operations are estimated to reach $8 billion by the year 2000, whereas the current total take for all U.S. casinos is $23 billion.

    We consider here primarily specific risks associated with Internet gambling. Generic risks have been raised in earlier Inside Risks columns, such as Webware security (April 1997), cryptography (August 1997), anonymity (December 1996), and poor authentication (discussed in part in April/May 1994). For example, how would you ensure that you are actually connected to the Internet casino of your choice?

    Gambling suffers from well-known risks including disadvantageous odds, uncertainty of payback, skimming by casinos, personal addiction and ruin. Internet gambling brings further problems, including lack of positive identification and authentication of the parties, the remote use of credit cards, and out-of-jurisdiction casinos. Even if there were some assurance that you are connected to your desired on-line casino (for example, using some form of strong cryptographic authentication), how would you know that organization is reputable? If you are not sure, you are taking an extra gamble -- and technology cannot help you. Payoffs could be rigged. There could also be fraudulent collateral activities such as capture and misuse of credit-card numbers, blackmail, money laundering, and masqueraders using other people's identities -- either winning or racking up huge losses, at relatively little risk to themselves. Serious addicts who might otherwise be observed could remain undetected much longer. (On the Internet no one knows you are a gambler -- except maybe for the casino, unless you gamble under many aliases.)

    Anonymity of gamblers is a particularly thorny issue. Tax-collecting agencies that strongly oppose anonymous gambling might lobby to require recoverable cryptographic keys.

    Legislation before the U.S. Congress would prohibit Internet gambling by bringing it under the Interstate Wire Act, with stiff fines and prison terms for both operators and gamblers. It would also allow law enforcement to ``pull the plug'' on illegal Internet sites. It is not clear whether such legislation would hinder off-shore operations -- where casinos would be beyond legal reach, and gamblers might use encryption to mask their activities. Legalization is an alternative; for example, the Australian state of Victoria has decided to strictly regulate and tax on-line gambling, hoping to drive out illegal operations.

    Although Internet gambling can be outlawed, it cannot be stopped. There are too many ways around prohibition, including hopping through a multitude of neutral access sites (for example, service providers), continually changing Internet addresses on the part of the casinos, anonymous remailers and traffic redirectors, encryption and steganography, and so on. On-line gambling could also have harmful legal side-effects, by generating pressure to outlaw good security. However, legally restricting good system security practices and strong cryptography would interfere with efforts to better protect our national infrastructures and with the routine conduct of legitimate Internet commerce. Thus, Internet gambling represents the tip of a giant iceberg. What happens here can have major impacts on the rest of our lives, even for those of us who do not gamble.

    One possibility not included in current legislation would be to make electronic gambling winnings and debts legally uncollectible in the United States. That would make it more difficult for on-line casinos to collect legally from customers. However, with increasingly sophisticated Internet tracking services, it might also inspire some new forms of innovative, unorthodox, life-threatening illegal collection methods on behalf of the e-casinos. It would also exacerbate the existing problem that gamblers are required to report illegal losses if they wish to offset their winnings (legal or otherwise), and would also bring into question the authenticity of computerized receipts of losses.

    Attempts to ban any human activity will never be 100% effective, no matter how self-destructive that behavior may be judged by society. In some cases, the imposition of poorly considered technological ``fixes'' for sociological problems has the potential of doing more harm than good. For example, requiring ISPs to block clandestine illegal subscriber activities is problematic. Besides, the Internet is international. Seemingly easy local answers -- such as outlawing or regulating Internet gambling -- are themselves full of risks.

    The Internet can be addictive, but being hooked into it is different from being hooked on it. In any event, whether or not you want to bet on the Net, don't bet on the Net being adequately secure! Whereas you are already gambling with the weaknesses in our computer-communication infrastructures, Internet gambling could raise the ante considerably. Caveat aleator. (Let the gambler beware!)

    NOTE: Several members of the ACM Committee on Computers and Public Policy contributed to this column.

    ======================================================================

    Inside Risks 91, CACM 41, 1, Jan 1998

    Protecting the Infrastructures

    Peter G. Neumann

    The President's Commission on Critical Infrastructure Protection (PCCIP) has completed its investigation, having addressed eight major critical national infrastructures: telecommunications; generation, transmission, and distribution of electric power; storage and distribution of gas and oil; water supplies; transportation; banking and finance; emergency services; and continuity of government services. The final report (Critical Foundations: Protecting America's Infrastructures. October 1997) is available on the PCCIP Web site (http://www.pccip.gov). Additional working papers are included there as well.

    The PCCIP is to be commended for the breadth and scope of their report, which provides some recommendations for future action that deserve your careful attention. Here is a brief summary of their findings.

    The report identifies pervasive vulnerabilities, a wide spectrum of threats, and increasing dependence on the national infrastructures. It recognizes a serious lack of awareness on the part of the general public. It declares a need for a national focus or advocate for infrastructure protection; although it observes that no one is in charge, it also recognizes that the situation is such that no single individual or entity could be in charge. It also makes a strong case that infrastructure assurance is a shared responsibility among governmental and private organizations.

    The report recommends a broad program of awareness and education, infrastructure protection through industry cooperation and information sharing, reconsideration of laws related to infrastructure protection, a revised program of research and development, and new thinking throughout. It outlines suggestions for several new national organizations: sector coordinators to represent the various national infrastructures; lead agencies within the federal government; a National Infrastructure Assurance Council of CEOs, Cabinet Secretaries, and representatives of state and local governments; an Information Sharing and Analysis Center; an Infrastructure Support Office; and an Office of National Infrastructure Assurance. The report's fundamental conclusion is this: ``Waiting for disaster is a dangerous strategy. Now is the time to act to protect our future.''

    Several conclusions will be of particular interest to readers of the Risks Forum and the Inside Risks column. First, the report recognizes that very serious vulnerabilities and threats exist today in each of the national infrastructures. Second, it recognizes that these national infrastructures are closely interdependent. Third, it observes that all of the national infrastructures depend to some extent on underlying computer-communication information infrastructures, such as computing resources, databases, private networks, and the Internet. These realizations should come as no surprise to us. However, it is noteworthy that a high-level White House commission has made them quite explicit, and also very significant that the PCCIP has recommended some courses of action that have the potential of identifying some of the most significant risks -- and perhaps actually helping to reduce those risks and others that will emerge in the future.

    * The Commission's report is almost silent on the subject of cryptography (see their pages 74--75). It recognizes (in two sentences) that strong cryptography and sound key management are important; it also states (in one sentence) that key management should include key recovery for business access to data and court-authorized law-enforcement access, but fails to acknowledge any of the potential risks associated with key management and key recovery (see this column, August 1996, January 1997, and August 1997) or any of the controversy associated with pending legislation in Congress. In essence, the report confuses the distinctions between key management and key recovery.

    * The risks arising from chronic system-development woes such as the Year-2000 problem and rampant fiascoes associated with large systems (e.g., Inside Risks, December 1997) are almost completely ignored, although an analysis of the Y2K problem is apparently forthcoming.

    * The chapter on research and development is startlingly skimpy, although a four-fold increase in funding levels by 2004 is recommended. Again, further documentation is expected to emerge.

    On the whole, this report is an impressive achievement. The Commission has clearly recognized that protecting the national infrastructures must be a widely shared responsibility, and also that it is a matter of national security -- not just in the narrow sense of the U.S. military and national intelligence services, but in the broader sense of the well-being and perhaps the survival of the nation and the people of this planet.

    NOTE: Peter G. Neumann moderates the ACM Risks Forum (comp.risks). See http://www.csl.sri.com/neumann/, which includes (along with earlier Senate testimonies) his November 6, 1997, written testimony, Computer-Related Risks and the National Infrastructures, for the U.S. House Science Committee Subcommittee on Technology. The written and oral testimonies for that hearing -- including that of PCCIP Chairman Tom Marsh -- are published by the U.S. Government Printing Office. See www.pccip.gov.

    ======================================================================

    Inside Risks 87, CACM 40, 9, September 1997

    Software Engineering: An Unconsummated Marriage

    David Lorge Parnas, P.Eng.

    When discussing the risks of using computers, we rarely mention the most basic problem: most programmers are not well educated for the work they do. Many have never learned the basic principles of software design and validation. Detailed knowledge of arcane system interfaces and languages is no substitute for knowing how to apply fundamental design principles.

    The "year 2000 problem" illustrates my point. Since the late 60's, we have known how to design programs so that it is easy to change the amount of storage used for dates. Nonetheless, thousands of programmers wrote millions of lines of code that violated well-accepted design principles. The simplest explanation: those who designed and approved that software were incompetent!

    We once had similar problems with bridges and steam engines. Many who presented themselves as qualified to design, and direct the construction of, those products did not have the requisite knowledge and discipline. The response in many jurisdictions was legislation establishing Engineering as a self-regulating profession. Under those laws, before anyone is allowed to practice Engineering, they must be licensed by a specified "Professional Engineering Association". These associations identify a core body of knowledge for each Engineering speciality. Accreditation committees visit universities frequently to make sure that programs designated "Engineering" teach the required material. The records of applicants for a license are examined to make sure that they have passed the necessary courses. After acquiring supervised experience, applicants must pass additional examinations on the legal and ethical obligations of Engineers. Then, they can write "P.Eng." after their name. Others who practice engineering can be prosecuted. This applies to all specialities within Engineering (Mechanical, Electrical, etc.) although formal registration is most common with Civil Engineers and not required for all jobs.

    When NATO organised two famous conferences on "Software Engineering" three decades ago, most engineers ignored them. Electrical Engineers, interested in building computers, regarded programming as something to be done by others - either scientists who wanted the numerical results or mathematicians interested in numerical methods. Engineers viewed programming as a trivial task, akin to using a calculator. To this day, many refer to programming as a "skill", and deny that there are engineering principles that must be applied when building software.

    The organizers of the NATO conferences saw things differently. Knowing that the engineering profession has always been very protective of its legal right to control the use of the title "Engineer", they hoped the conference title would provoke interest. They had recognized that:

    * Programming was neither Science nor Mathematics. Programmers were not adding to our body of knowledge; they were building products.

    * Using Science and Mathematics to build products for others is what Engineers do.

    * Software was becoming a major source of problems for those who owned and used it. The problems were exactly those to be expected when products are built by people who were educated for other professions and feel that building things is not their "real job".

    * Unfortunately, communication between Engineers and those who study software hasn't been effective. The majority of Engineers understand very little about the science of programming or the mathematics that one uses to analyze a program, and most Computer Scientists don't understand what it means to be an Engineer.

    * Today, with bridges, engines, aircraft, power plants, etc. being designed and/or controlled by software, the same problems that motivated the Engineering legislation are rampant in the software field.

    * Over the years, Engineering has split into a number of specialities, each centered on a distinct area of engineering science. Engineering Societies must now recognize a new branch of Engineering, Software Engineering, and identify its core body of knowledge. Just as Chemical Engineering is a marriage of Chemistry with classical engineering areas such as thermodynamics, mechanics, and fluid dynamics, Software Engineering should wed a subset of Computer Science with the concepts and discipline taught to other Engineers.

    * "Software Engineering" is often treated as a branch of Computer Science. This is akin to regarding Chemical Engineering as a branch of Chemistry. We need both Chemists and Chemical Engineers but they are very different. Chemists are scientists; Chemical Engineers are Engineers. Software Engineering and Computer Science have the same relationship.

    * The marriage will be successful only if the Engineering Societies and Computer Scientists come to understand that neither can create a Software Engineering profession without the other. Engineers must accept that they don't know enough Computer Science. Computer Scientists will have to recognize that being an Engineer is different from being a Scientist, and that Software Engineers require an education that is very different from their own.

    David Lorge Parnas studies and teaches Software Design in the Faculty of Engineering of McMaster University, Hamilton, Ontario, Canada.

    ======================================================================

    Inside Risks 90, CACM 40, 12, Dec 1997

    More System Development Woes

    Peter G. Neumann

    Our column of October 1993 (System Development Woes, CACM 36, 10) considered some system development efforts that were cancelled, seriously late, overrun, or otherwise unacceptable. In the light of recent fiascos reported in the Risks Forum, it seems timely to examine some newer abandonments and failed upgrades.

    * IRS modernization. In early 1997, after many years, $4 billion spent, extensive criticism from the General Accounting Office and the National Research Council, and reevaluation by the National Commission on Restructuring (``reinventing'') the IRS, the IRS abandoned its Tax Systems Modernization effort. A system for converting paper returns to electronic form was also cancelled, along with the Cyberfile system -- which would have enabled direct electronic taxpayer filing of returns. A GAO report blamed mismanagement and shoddy contracting practices, and identified security problems for taxpayers and for the IRS.

    * Other government systems. The FBI abandoned development of a $500-million new fingerprint-on-demand computer system and crime information database. The State of California spent $1 billion on a nonfunctional welfare database system; it spent more than $44 million on a new motor vehicles database system that was never built; the Assembly Information Technology Committee was considering scrapping California's federally mandated Statewide Automated Child Support System (SACSS), which had already overrun its $100 million budget by more than 200%.

    * The Confirm system. The Intrico consortium's Confirm reservation system development was abandoned -- after five years, many lawsuits, and millions of dollars in overruns. Kweku Ewusi-Mensah analyzed the cancellation (Critical Issues in Abandoned Information Systems Development Projects, Comm.ACM 40, 9, September 1997, pp. 74--80) and gives some important guidelines for system developers who would like to avoid similar problems.

    * Bell Atlantic 411 outage. On November 25, 1996, Bell Atlantic had an outage of several hours in its telephone directory-assistance service, due apparently to an errant operating-system upgrade on a database server. The backup system also failed. The problem -- reportedly the most extensive such failure of computerized directory assistance -- was resolved by backing out the software upgrade.

    * San Francisco 911 system. San Francisco tried for three years to upgrade its 911 system, but computer outages and unanswered calls remain rampant. For example, the dispatch system crashed for over 30 minutes in the midst of a search for an armed suspect (who escaped). It had been installed as a temporary fix to recurring problems, but also suffered from unexplained breakdowns and hundreds of unanswered calls daily.

    * Social Security Administration. The SSA botched a software upgrade in 1978 that resulted in almost 700,000 people being underpaid an estimated $850 million overall, as a result of cutting over from quarterly to annual reporting. Subsequently, the SSA discovered that its computer systems did not properly handle certain non-Anglo-Saxon surnames and married women who change their names. This glitch affected the accumulated wages of $234 billion for 100,000 people, some going back to 1937. The SSA also withdrew its Personal Earnings and Benefit Estimate Statement (PEBES) Website (see Inside Risks, July 1997) for further analysis, because of many privacy complaints.

    * NY Stock Exchange. The New York Stock Exchange opened late on December 18, 1995 because of communications software problems, after a weekend spent upgrading the system software. It was the first time since December 27, 1990, that the exchange had to shut down -- and it affected various other Exchanges as well.

    * Interac. On November 30, 1996, the Canadian Imperial Bank of Commerce Interac service was halted by an attempted software upgrade, affecting about half of all would-be transactions across eastern Canada.

    * Barclays Bank's successful upgrade. In one of the rare success stories in the RISKS archives, Barclays Bank shut down its main customer systems for a weekend to cut over to a new distributed system accommodating 25 million customer accounts. This system seamlessly replaced three incompatible systems. It is rumored that Barclays spent at least 100 million pounds on the upgrade.

    The causes of these difficulties are very diverse, and not easy to characterize. It is clear from these examples that deep conceptual understanding and sensible system- and software-engineering practice are much more important than merely tossing money and people into system developments. Incidentally, we have not even mentioned the Year-2000 problem -- primarily because we must wait until January 2000 to adequately assess the successes and failures of some of the ongoing efforts. But all of the examples here suggest that we need much greater sharing of the bad and good experiences.

    NOTE: Peter G. Neumann moderates the ACM Risks Forum (comp.risks), which provides background on all of these cases and which can be searched at http://catless.ncl.ac.uk/Risks/ .

    ======================================================================

    Inside Risks 84, CACM 40, 6, June 1997

    Spam, Spam, Spam!

    Peter G. Neumann and Lauren Weinstein

    Are you flooded with Internet spams (unsolicited e-mail advertisements) from hustlers, scammers, and purveyors of smut, net sex, get-rich-quick schemes, and massive lists of e-mail addresses? (The term derives from the ubiquitous World-War-II canned-meat product dramatized by Monty Python.) Some of us -- particularly moderators of major mailing lists -- typically receive dozens of spams each day, often with multiple copies. We tend to delete replicated items without reading them, even if the subject line is somewhat intriguing. (Many spammers use deceptive Subject: lines.) Unmoderated lists are particularly vulnerable to being spammed.

    Some spammers offer to remove you from their lists upon request. However, when you reply, you may discover that their From: and Reply-to: addresses are bogus and their provided ``sales'' phone number may be valid only for a few days. Some of them are legitimate, but others may be attempting credit or identity fraud; it can be hard to tell the difference.

    What might you do to stanch the flow? Some folks suggest not posting to newsgroups or mailing lists -- from which spammers often cull addresses -- but this throws out the baby with the bathwater. Other folks suggest using the spammer's trick of a bogus From: address, letting your recipients know how to generate your real address. But this causes grief for everybody (recipients, administrators, and even you if the mail is undeliverable), and is a bad idea.

    Filtering out messages from specific domains may have some success at the IP level (e.g., via firewalls and TCP-wrappers) against centralized spammers who operate their own domains and servers. But filtering based on header lines is generally not effective, because the headers are subject to forgery and alterations. Also, many spammers route their junk through large ISPs, or illicitly through unwitting hosts. Complaining to those site administrators is of little value. Filtering out messages based on offensive keywords is also tricky, because it may reject e-mail that you really want.
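
    As an illustration of the fragility just described, here is a minimal Python sketch of a header- and keyword-based filter; the blocked domains and suspect phrases are invented for the example. Because the From: line is trivially forgeable and keywords are easily misspelled, an obvious evasion passes straight through.

        import email
        from email import policy

        BLOCKED_DOMAINS = {"bulkmail.example", "hustle.example"}   # made-up list
        SUSPECT_PHRASES = {"get rich quick", "free money"}          # made-up list

        def looks_like_spam(raw_message):
            """Naive filter based on forgeable header lines and body keywords."""
            msg = email.message_from_string(raw_message, policy=policy.default)
            sender = str(msg.get("From", "")).lower()
            if any(domain in sender for domain in BLOCKED_DOMAINS):
                return True
            body = msg.get_body(preferencelist=("plain",))
            text = body.get_content().lower() if body else ""
            return any(phrase in text for phrase in SUSPECT_PHRASES)

        # A forged From: line defeats the domain check, and creative spelling
        # defeats the keyword check -- so this message slips through.
        forged = "From: friend@trusted.example\nSubject: hello\n\nGet r1ch qu1ck!!!\n"
        print(looks_like_spam(forged))   # False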

    A service whereby senders must first acquire an authorized certificate to send you e-mail would be impractical and undesirable for many individuals. It would certainly hinder newsgroups that seek worldwide contributions and subscriptions.

    Technical options are of limited value in the real world, tending toward an offensive-defensive escalation of technical trickery. Alternatively, legislation might be contemplated, for example, to require an individual's permission for the release of certain personal information to third parties, and to treat unsolicited e-mail more like unsolicited junk faxes. On the other hand, there is a serious risk of legislative overreaction with draconian laws that might kill the proverbial golden goose.

    E-mail spam differs somewhat from postal mail. You must pay (one way or another) for the storage of e-mail you receive (or else delete it as fast as it comes in!), whereas the sender pays for postal junk mail. The spam sender pays almost nothing to transmit, especially when hacking into an unsuspecting third-party server site (which is increasingly common). Simson Garfinkel (RISKS vol. 18, no. 79) notes that a spammer recently hacked vineyard.net, sending about 66,000 messages.

    There are other spam-like problems, such as recent forged subscriptions to automated list servers in the name of unwitting victims such as the White House and Newt Gingrich. Servers such as majordomo can be used to invoke manual processing of suspicious would-be subscriptions, particularly when the From: address and the given address differ.
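
    A small sketch of that kind of sanity check, assuming a hypothetical list server (this is not majordomo's actual code): any request in which the sender's address differs from the address being subscribed is held for a human to review.

        def needs_manual_review(from_address, requested_address):
            """Hold a subscription request for a human when the sender's address
            and the address being subscribed do not match (a common sign of a
            forged, third-party request)."""
            normalize = lambda addr: addr.strip().lower()
            return normalize(from_address) != normalize(requested_address)

        # A forged request "subscribing" someone else is flagged; self-subscription is not.
        print(needs_manual_review("prankster@example.net", "victim@example.org"))   # True
        print(needs_manual_review("alice@example.org", "Alice@Example.org"))        # False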

    Many such problems exist because the Internet has cooperative decentralized control; but that's also its beauty. It has very limited intrinsic security (although improving), and relies heavily on its constituent systems. In the absence of meaningful authentication and authorization, clever perpetrators are not easy to identify or hold accountable. But swinging too far toward forced authentication impacts privacy and freedom-of-speech issues. What a tangled Web we weave!

    Asking what you can do individually may be the wrong question; the technical burden must ultimately fall on ISPs and software developers, as they continue to pursue approaches such as blocking third-party use of SMTP mail-server ports and requiring authentication for mass mailings. As RISKS and PRIVACY readers know, fully automated mechanisms will always have deficiencies, and security is always a weak-link problem.

    Spamming will ultimately be dealt with through a combination of legislation, ISP administrative changes, further technological developments, and individual efforts. We must find ways to protect ourselves without undermining free enterprise, freedom of speech rights, and common sense, and without encumbering our own normal use -- a difficult task indeed! In the meantime, perhaps the best you can do yourself is to never, ever, respond positively to a spammer's ad!

    Lauren Weinstein (lauren@vortex.com) moderates the PRIVACY Forum (privacy-request@vortex.com; www.vortex.com). Peter Neumann moderates the ACM Risks Forum (risks-request@csl.sri.com; http://catless.ncl.ac.uk/Risks).

    ======================================================================

    Inside Risks 82, CACM 40, 4, April 1997

    Webware Security

    Ed Felten

    Many systems, including Java, ActiveX, JavaScript, and Web plug-ins, allow Web authors to attach an executable program to a Web page, so that anyone visiting the Web page automatically downloads and runs the program. These systems (collectively known as Webware) offer unique security challenges.

    This is not a new problem: people have always passed programs around. What is new is the scale and frequency of downloading, and the fact that it happens automatically without conscious human intervention. In one (admittedly unscientific) recent experiment, a person was found to have downloaded and run hundreds of Webware programs in a week. The same person ran only four applications from his own computer.

    The danger in using Webware lies in the fact that simply visiting a Web page may cause you to unknowingly download and run a program written by someone you don't know or don't trust. That program must be prevented from taking malicious actions such as modifying your files or monitoring your online activities, but it must be allowed to perform its benign and useful functions. Since it is not possible (even in theory) to tell the difference between malicious and benign activity in all cases, we must accept some risk in order to get the benefits of Webware.

    Despite the danger, Webware is popular because it meets a real need. People want to share documents, and they want those documents to be dynamic and interactive. They want to browse -- to wander anywhere on the Net and look at whatever they find.

    Webware Security Models

    There are two approaches to Webware security, the all-or-nothing model and the containment model. The all-or-nothing model is typified by Microsoft's ActiveX and by Netscape plug-ins. These systems rely on the user to make an all-or-nothing decision about whether to run each downloaded program. A program is either downloaded and run without any further security protection, or refused outright.

    This decision can be made by exploiting digital signatures on downloaded programs. The author of a program, and anyone else who vouches that the program is well-behaved, can digitally sign it. When the program is downloaded, the user is shown a list of signers and can then decide whether to run the program.
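
    A minimal sketch of such a signature check, using Ed25519 keys from the Python cryptography package purely for illustration (the actual code-signing formats used by ActiveX and plug-ins differ): each signature over the downloaded bytes is verified, and the user is shown which signers actually vouched for the program.

        # Requires the third-party "cryptography" package; Ed25519 is used purely
        # for illustration -- real code-signing systems use other certificate formats.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        def verified_signers(program_bytes, signatures):
            """Return the names of signers whose signature over the program verifies.
            `signatures` is a list of (signer_name, public_key, signature) triples."""
            vouched = []
            for name, public_key, sig in signatures:
                try:
                    public_key.verify(sig, program_bytes)
                    vouched.append(name)
                except InvalidSignature:
                    pass   # an invalid signature simply does not count as vouching
            return vouched

        # Demo: the author signs the downloaded bytes; the user is shown who vouched
        # for the program and must still make the all-or-nothing run/refuse decision.
        author_key = Ed25519PrivateKey.generate()
        program = b"...downloaded executable bytes..."
        sigs = [("Example Software Co.", author_key.public_key(), author_key.sign(program))]
        print("Signed by:", verified_signers(program, sigs))   # ['Example Software Co.']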

    The containment model is typified by Java from Sun Microsystems. Java allows any program to be downloaded, but tries to run that program within a contained environment in which it cannot do any damage. (For some reason this contained environment is called ``the sandbox,'' though real-world sandboxes are good at containing neither sand nor toddlers.)

    Problems with Both Models

    Both approaches have had problems. The problem with the all-or-nothing model is subtle but impossible to fix: it puts too much burden on the user. Users are constantly bothered with questions, and they must choose between two equally unacceptable alternatives: discard the program sight unseen, or give the program free rein to damage the user's system. Experience shows that people who are bothered too often stop paying attention and simply say "OK" to every question -- not an attitude conducive to security. The all-or-nothing model causes trouble because it doesn't allow users to browse.

    The main problem with the containment model is its complexity. In Java, for example, there is a large security perimeter to defend, and several flaws in both design and implementation have been found, leading to the possibility of serious security breaches. Though all of the known problems have been fixed at this writing, there is no guarantee that more problems won't be found. (For a general discussion of Java security issues, see Gary McGraw and Edward W. Felten, Java Security: Hostile Applets, Holes and Antidotes, John Wiley and Sons, New York, 1997.)

    Another problem with the containment model is that it is often too restrictive. Java, for example, prohibits downloaded programs from accessing files. Though this prevents malicious programs from reading or tampering with the user's private data, it also makes legitimate document-editing programs impossible.

    The restrictiveness problem can be addressed by making the security policy more flexible using digital signatures. When a person runs a program, their browser can verify the signatures and the person can decide whether to grant the program more privileges because of who signed it. In theory, this allows users to make finely calibrated decisions about which programs to trust for which purposes. In practice, this approach is likely to have some of the problems of the all-or-nothing model. Users will be asked too many questions, so they will get tired and stop paying attention.

    Still, the containment model has some advantages. Granting only a few privileges may expose the user to less risk than letting down all security barriers. And containment at least allows the system to log a program's activities.
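
    A sketch of how per-signer privileges and activity logging might be combined, with entirely hypothetical signers and privilege names: the sandbox grants nothing by default, grants only what a verified signer's policy entry allows, and records every decision.

        # Entirely hypothetical per-signer privilege policy for contained programs.
        POLICY = {
            "Example Document Co.": frozenset({"read_user_files", "write_user_files"}),
            "Example Games Inc.":   frozenset({"play_audio"}),
        }

        def privileges_for(signers):
            """Union of the privileges granted to each verified signer; unsigned or
            unknown-signer code gets the empty set (the sandbox default)."""
            granted = set()
            for signer in signers:
                granted |= POLICY.get(signer, frozenset())
            return granted

        def check(signers, privilege, audit_log):
            """Allow the operation only if some signer's policy grants it, and log
            the decision either way so the program's activity can be reviewed."""
            allowed = privilege in privileges_for(signers)
            audit_log.append((tuple(signers), privilege, allowed))
            return allowed

        log = []
        print(check(["Example Document Co."], "read_user_files", log))   # True
        print(check([], "read_user_files", log))                         # False (sandbox default)
        print(log)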

    The Challenge

    Webware security is difficult because of human nature. People want to browse without worrying about security, but browsing Webware is dangerous. Only a person can decide who or what is trustworthy and how to weigh the benefits of a particular decision against the risks, but human attention to security is a precious resource that we must spend carefully.

    Professor Felten is in the Department of Computer Science, Princeton University.

    ======================================================================

    Inside Risks 41, CACM 36, 11, November 1993, p. 122

    Corrupted Polling

    Rebecca Mercuri

    Traditionally, the November off-year elections draw little attention, as only a handful of federal positions are filled. Voter turnouts of 30% or less are common in many municipalities. But these elections are far from insignificant, because local posts won in odd-numbered years frequently provide office-holders with the power to make procurements and appointments.  Through the grass-roots election process, Boards of Elections are staffed at city, county and state levels, and these Board members are currently the key decision-makers in the ongoing conversion from lever and manual voting systems to electronic ballot tabulation in the U.S.A.

    As vast metropolises adopt computer ballot-counting methods (including punch-card, mark-sense and direct-entry systems), the question arises whether a national or local election can be "thrown" via internal or external manipulation of hardware, software and/or data. Proponents of electronic voting systems say sufficient controls are being exercised, such that attempts to subvert an election would be detectable. But speakers at a recent session on security and auditability of electronic vote-tabulation systems [1] pointed out that the Federal Election Commission has provided only voluntary voting system standards that may not be adequate to ensure election integrity. Numerous incidents of electronic voting difficulties have come to the attention of the press, although to date there have been no convictions for vote-fraud by computer.

    One of the more interesting recent cases occurred during the March 23, 1993, city election in St. Petersburg, Florida. Two systems for ballot tabulation were being used on a trial basis. For an industrial precinct in which there were no registered voters, the vote summary showed 1,429 votes for the incumbent mayor (who incidentally won the election by 1,425 votes). Officials explained under oath that this precinct was used to merge regions counted by the two computer systems, but were unable to identify precisely how the 1,429 vote total was produced. Investigation by the Pinellas Circuit Court revealed sufficient procedural anomalies to authorize a costly manual recount, which certified the results. The Florida Business Council continues to look into this matter.

    Equipment-related problems are a source of concern to Election Boards, especially when time-critical operations must be performed. The Columbus Dispatch reported (June 12, 1992) that 40 of the 758 electronic machines used in Franklin County's June primary required service on election day. By contrast, only 13 of the County's 1500 older mechanical lever machines needed repair during the election. Of the defective electronic machines, 7 had voter ballot cartridges that could not be loaded into the tallying computers, so those precincts' results had to be hand-keypunched; power boards in 10 of the machines had blown fuses; and 18 had malfunctions with the paper tape on which the results were printed. Difficulties with the central software for merging the electronic and mechanical tallies created further delays in reporting results. Officials decided to withhold the final payment of $1.7M of their $3.82M contract until greater reliability is assured.

    If Franklin County did not have enough trouble already, two electronic ballot tabulation vendors are presently contesting the contract award. MicroVote Corporation is suing the R.F. Shoup Corp., Franklin County, and others in U.S. District Court for the Southern District of Ohio, Columbus Division, for over $10M in damages, claiming conspiracy and fraud in the bidding process. This matter is, as yet, unresolved.

    In another region of Ohio, in the same primary, the Cleveland Plain Dealer (June 11, 1992) reported that Kenneth J. Fisher, member of the Cuyahoga County Board of Elections, allowed an employee to feed a computer a precinct identification card that was not accompanied by that precinct's ballots, during the vote tabulation process. Apparently, the ballots cast in the Glenville region had been inadvertently misplaced, and at 1 A.M. the board members "were tired and wanted to go home" so the election official authorized the bogus procedure, despite the fact that doing so might have constituted a violation of state law. Subsequent inquiry did not lead to any indictments.

    Technology alone does not eliminate the possibility of corruption and incompetence in elections; it merely changes the platform on which they may occur. The voters and the Election Boards who serve them must be made aware of the risks of adopting electronic vote-tallying systems, and must insist that the checks and balances inherent to our democracy be maintained.

    [1] Papers by Saltman, Mercuri, Neumann and Greenhalgh, Proc. 16th National Computer Security Conf., NIST/NCSC, Baltimore MD, Sep. 20-23, 1993. Inside Risks columns by Neumann (Nov. 1990) and Mercuri (Nov. 1992) give further background.

    Rebecca Mercuri is a research fellow at the University of Pennsylvania, where she is completing her dissertation on Computational Acoustics in the Computer and Information Science Department. She frequently testifies as an expert witness on computer security and voting systems. E-mail : mercuri@acm.org

    ======================================================================

    Inside Risks 29, CACM 35, 11, November 1992

    Voting-Machine Risks

    Rebecca Mercuri

    On July 23, 1992, New York City Mayor Dinkins announced that 7000 Direct Recording Electronic (DRE) voting machines would be purchased from Sequoia Pacific, pending the outcome of public hearings. This runs counter to the advice of the NY Bar Association, independent groups of concerned scientists and citizens (such as Election Watch, CPSR and NYPIRG), and SRI International (a consultant to NYC, and the system evaluators), all of whom have indicated that the equipment is not yet fit for use.

    Background. At first glance, most DREs appear similar to mechanical `lever' voting machines. Because the units lack any visual identification as `computers' (no monitors or keyboards), voters would be unlikely to assume that one or more (in some cases, as many as nine) microprocessors are housed inside. The ballot is printed on paper that is mounted over a panel of buttons and LEDs. A thin piece of flexible plastic covers the ballot face, to protect it from damage or removal. The machine is housed in an impact- and moisture-resistant case, shielded from EMI, and protected by battery back-up in the event of power loss. At the start of the election session, poll workers run through a procedure to make the machine operational, and similarly follow another sequence (which produces a printed result total) to shut the device down at the end of the day. A cartridge containing the record of votes (scrambled for anonymity) is removed and taken to a central site for vote tallying.

    Risks. The astute reader, having been given this description of the system, should already have at least a dozen points of entry in mind for system tampering. Rest assured that all of the obvious ones (and many of the non-obvious ones) have been brought to the attention of the NYC Board of Elections. Furthermore, in SRI's latest published evaluation (June 19, 1991) the Sequoia Pacific AVC Advantage (R) systems failed 15 environmental/engineering requirements and 13 functional requirements including resistance to dropping, temperature, humidity and vibration. Under the heading of reliability, the vendor's reply to the testing status report stated: ``SP doesn't know how to show that [the Electronic Voting Machine and its Programmable Memory Device] meets requirement -- this depends on poll workers' competence.''

    The Pennsylvania Board of Elections examined the system on July 11, 1990, and rejected it for a number of reasons, including the fact that it ``can be placed inadvertently in a mode in which the voter is unable to vote for certain candidates'' and it ``reports straight-party votes in a bizarre and inconsistent manner.'' When this was brought to the attention of NYCBOE, they replied by stating that ``the vendor has admitted to us that release 2.04 of their software used in the Pennsylvania certification process had just been modified and that it was a mistake to have used it even in a certification demonstration.'' Needless to say, the machines have not yet received certification in Pennsylvania.

    Other problems noted with the system include its lack of a guaranteed audit trail (see Inside Risks, CACM 33, 11, November 1990), and the presence of a real-time clock which Pennsylvania examiner Michael Shamos referred to as ``a feature that is of potential use to software intruders.''

    Vaporware. Sequoia Pacific has now had nearly four years from when they were told they would be awarded the contract (following a competitive evaluation of four systems) if they could bring their machines up to the specifications stated in the Requirements for Purchase. At an August 20 open forum, a SP representative stated publicly that no machine presently existed that could meet those standards. Yet the city intends to award SP the $60,000,000 contract anyway, giving them 18 months to satisfy the RFP and deliver a dozen machines for preliminary testing (the remainder to be phased in over a period of six years).

    Conclusions. One might think that the election of our government officials would be a matter that should be covered by the Computer Security Act of 1987, but voting machines, being procured by the states and municipalities (not by the Federal government), do not fall under the auspices of this law, which needs to be broadened. Additionally, no laws in N.Y. state presently preclude convicted felons or foreign nationals from manufacturing, engineering, programming or servicing voting machines.

    This would not be so much of a concern, had computer industry vendors been able to provide fully auditable, tamper-proof, reliable, and secure systems capable of handling anonymous transactions. Such products are needed not only in voting, but in the health field for AIDS test reporting, and in banking for Swiss-style accounts. It is incumbent upon us to devise methodologies for designing verifiable systems that meet these stringent criteria, and to demand that they be implemented where necessary. ``Trust us'' should not be the bottom line for computer scientists.

    Rebecca Mercuri (mercuri@acm.org) is a Research Fellow at the University of Pennsylvania's Moore School of Engineering and a computer consultant with Notable Software. She has served on the board of the Princeton ACM chapter since its inception in 1980. Copyright (C) 1992 by Rebecca Mercuri.

    ======================================================================

    Inside Risks CACM 34, 2, Feb 1991

    Should Computer Professionals Be Certified?

    Peter G. Neumann

    Background. The Risks Forum has covered numerous cases in which software developers were at least partially responsible for disasters involving computer systems. We summarize here a recent discussion (ACM Soft. Eng. Notes 16, 1, Jan 1991) on whether software developers should undergo professional certification, as in engineering disciplines.

    Discussion. John H. Whitehouse made various arguments in favor of certification. There are not enough qualified people. Managers are not technical enough. Many practitioners survive despite poor performance, while many excellent people do not receive adequate credit. ``Hiring is expensive and usually done pretty much in the blind. Firing is risk-laden in our litigious society. ... It is my contention that the vast majority of software defects are the product of people who lack understanding of what they are doing. These defects present a risk to the public, and the public is not prepared to assess the relative skill level of software professionals.'' Fear of failing may cause some people to oppose voluntary certification. ``Furthermore, academics have not joined in the debate, because they are generally immune from the problem.''

    Theodore Ts'o presented an opposing view. He sees no valid way to measure software `competence'. ``There are many different software methodologies, all with their own adherents; trying to figure out which ones of them are `correct' usually results in a religious war.'' He also expressed serious concern that, under a certification system, the software profession might become a guild, protecting mediocrity and excluding really qualified people.

    Martyn Thomas noted that certification does not necessarily help. Also, creating a closed shop is inherently risky, because it enhances the status and incomes of those admitted at the expense of those excluded, and can easily become a conspiracy to protect the position of the members. However, on balance, some certification is desirable, ``for staff who hold key positions of responsibility on projects that have significance for society.'' He added that many countries already have mandatory certification for other engineers. (The UK has also recently established some stringent standards for developing safety-critical software, DEFSTAN 00-55 and 56.)

    Gary Fostel noted the problem of scale: there are significant differences between small systems and large ones. ``Large, complex software systems have problems that are not readily visible in the small-scale applications. In my software development courses, I commonly tell students that the methods that will be required of them are not necessarily the most efficient methods for the class project required of them. For the trivial sort of work I can require of students in a semester, there is really no need for comments ... requirements analysis ... and formal design, and so on for most of the techniques of software engineering. On the other hand, as the size of the problem grows, and the customer becomes distinct from the development, and the development staff becomes fluid, and the effort expands in numerous other dimensions toward bewildering complexity, the methods ... are in fact necessary...''

    Paul Tomblin cited the `Ritual of the Calling of an Engineer' (the Iron Ring), created by Rudyard Kipling before Engineers had any legal status, and quoted a line from its `Obligation': ``For my assured failures and derelictions, I ask pardon beforehand of my betters and my equals in my calling...'' Paul added, ``So we admit that everyone fails at some time, and we aren't going to crucify you if you screw up, providing you did so honestly, and not because you were lazy or unprofessional.''

    Russell Sorber noted the voluntary certification provided by the Institute for Certification of Computer Professionals in Park Ridge IL. Nurses, physicians, pilots, civil engineers (even hair stylists) are licensed; he reiterated the thought that he would like life-critical systems to be built by licensed or certified professionals. John Whitehouse added that the ICCP takes great pains to prevent development of a guild mentality, e.g., with continual review and updating of the certification process.

    There was also some discussion of whether certification would stifle creativity, originality and excellence; in summary, it might, but not necessarily.

    Conclusions. This is an old debate. The views were generally on the side of certification, with various caveats. There is need for a balanced position in which there is some certification of both individuals and institutions involved in the development of high-risk computer systems, but in which the certification process itself is carefully circumscribed. Certification of the systems produced is also important. Teaching and systematic use of modern development techniques are also important pieces of the puzzle, as is the reinforcement of ethical behavior. Martyn Thomas noted that certification is only a mechanism for control, and has to be exercised in the right direction if there is to be an improvement.

    Peter G. Neumann is Chairman of the ACM Committee on Computers and Public Policy, Moderator of the ACM Forum on Risks to the Public in the Use of Computers and Related Systems, and Editor of ACM SIGSOFT's Software Engineering Notes. Contact risks-request@csl.sri.com for on-line receipt of RISKS.

    ======================================================================

    Inside Risks 5, CACM 33, 11, p.170, November 1990

    Risks in Computerized Elections

    Peter G. Neumann

    Background. Errors and alleged fraud in computer-based elections have been recurring Risks Forum themes. The state of the computing art continues to be primitive. Punch-card systems are seriously flawed and easily tampered with, and still in widespread use. Direct recording equipment is also suspect, with no ballots, no guaranteed audit trails, and no real assurances that votes cast are properly recorded and processed. Computerized elections are being run or considered in many countries, including some notorious for past riggings; thus the risks discussed here exist worldwide.

    Erroneous results. Computer-related errors occur with alarming frequency in elections. Last year there were reports of uncounted votes in Toronto and doubly counted votes in Virginia and in Durham, North Carolina. Even the U.S. Congress had difficulties when 435 Representatives tallied 595 votes on a Strategic Defense Initiative measure. An election in Yonkers NY was reversed because of the presence of leftover test data that accumulated into the totals. Alabama and Georgia also reported irregularities. After a series of mishaps, Toronto has abandoned computerized elections altogether. Most of these cases were attributed to ``human error'' and not ``computer error'' (cf. the October 1990 Inside Risks column), and were presumably due to operators and not programmers; however, in the absence of dependable accountability, who can tell?

    Fraud. If wrong results can occur accidentally, they can also happen intentionally. Rigging has been suspected in various elections, but lawsuits have been unsuccessful, particularly in the absence of incisive audit trails. In many other cases, fraud could easily have taken place. For many years in Michigan, manual system overrides were necessary to complete the processing of noncomputerized precincts, according to Lawrence Kestenbaum. The opportunities for rigging elections are manifold, including the installation of trapdoors and Trojan horses, child's play for vendors and knowledgeable election officials. Checks and balances are mostly placebos, and easily subverted. Incidentally, Ken Thompson's oft-cited Turing lecture, Commun. ACM 27, 8, (August 1984) 761-763, reminds us that tampering can occur even without any source-code changes; thus, code examination is not enough.

    Discussion. The U.S. Congress has the constitutional power to set mandatory standards for Federal elections, but has not yet acted. Existing standards for designing, testing, certifying, and operating computerized vote-counting systems are inadequate and voluntary, and provide few hard constraints, almost no accountability, and no independent expert evaluations. Vendors can hide behind a mask of secrecy with regard to their proprietary programs and practice, especially in the absence of controls. Poor software engineering is thus easy to hide. Local election officials are typically not sufficiently computer-literate to fully understand the risks. In many cases, the vendors run the elections.

    Reactions in RISKS. John Board at Duke University expressed surprise that it took over a day for the doubling of votes to be detected in eight Durham precincts. Lorenzo Strigini reported last November on a read-ahead synchronization glitch and an operator pushing for speedier results, which together caused the computer program to declare the wrong winner in a city election in Rome, Italy. Many of us have wondered how often errors or frauds have remained undetected.

    Conclusions. Providing sufficient assurances for computerized election integrity is a very difficult problem. Serious risks will always remain, and some elections will be compromised. The alternative of counting paper ballots by hand is not promising. But we must question more forcefully whether computerized elections are really worth the risks, and if so, how to impose more meaningful constraints.

    Peter G. Neumann is chairman of the ACM Committee on Computers and Public Policy, moderator of the ACM Forum on Risks to the Public in the Use of Computers and Related Systems, and editor of ACM SIGSOFT's Software Engineering Notes (SEN). Contact risks-request@csl.sri.com for on-line receipt of RISKS.

    References. The Virginia, Durham, Rome, Yonkers, and Michigan cases were discussed in ACM Software Engineering Notes 15, 1 (January 1990), 10-13. Additional cases were discussed in earlier issues. For background, see Ronnie Dugger's New Yorker article, 7 November 1988, and a report by Roy G. Saltman, Accuracy, Integrity, and Security in Computerized Vote-Tallying, NIST (NBS) special publication, 1988. Also, see publications by two nongovernmental organizations, Computer Professionals for Social Responsibility (P.O. Box 717, Palo Alto CA 94302) and Election Watch (a project of the Urban Policy Research Institute, 530 Paseo Miramar, Pacific Palisades CA 90272).

    ======================================================================