
Inside Risks Columns since January 2008

If you wish to see earlier Inside Risks columns, most of those through December 2003 are archived separately, linked there along with all subsequent columns. For efficiency of access, columns beginning with January 2004 are grouped in files by year and can be accessed directly by appending # and the appropriate index number to the end of the file URL.
Columns for 2004 through 2007 are in their respective yearly files.

NOTE: Reuse for commercial purposes is subject to CACM and author copyright policy.


  • Toward Total-System Trustworthiness: Considering how to achieve the long-term goal of systemically reducing risks. Peter G. Neumann, June 2022.
  • The Risks of Election Believability (or Lack Thereof), Rebecca T. Mercuri and Peter G. Neumann, June 2021
  • A Holistic View of Future Risks, Peter G. Neumann, October 2020.
  • How to Curtail Oversensing in the Home: Limiting sensitive information leakage via smart-home sensor data. Connor Bolton, Kevin Fu, Josiah Hester, and Jun Han, June 2020.
  • Are You Sure Your Software Will Not Kill Anyone? Using software to control potentially unsafe systems requires the use of new software and system engineering approaches. Nancy Leveson, February 2020.
  • How Might We Increase System Trustworthiness? Summarizing some of the changes that seem increasingly necessary to address known system and network deficiencies and anticipate currently unknown vulnerabilities, Peter G. Neumann, October 2019.
  • Through Computer Architecture, Darkly: Total-system hardware and microarchitectural issues are becoming increasingly critical. A. Theodore Markettos, Robert N.M. Watson, Simon W. Moore, Peter Sewell, and Peter G. Neumann, June 2019. DOI: 10.1145/3277584
  • The Big Picture: A systems-oriented view of trustworthiness, Steven Bellovin and Peter G. Neumann, November 2018. DOI: 10.1145/3277564
  • Risks of Cryptocurrencies, Nicholas Weaver, June 2018
  • Risks of Trusting the Physics of Sensors: Protecting the Internet of Things with embedded security. Kevin Fu and Wenyuan Xu, February 2018
  • The Real Risks of Artificial Intelligence: Incidents from the early days of AI research are instructive in the current AI environment, David Lorge Parnas, October 2017
  • Trustworthiness and Truthfulness Are Essential: Their absence can introduce huge risks, Peter G. Neumann, June 2017
  • The Future of the Internet of Things: The IoT can become ubiquitous worldwide -- if the pursuit of systemic trustworthiness can overcome the potential risks, Ulf Lindqvist and Peter G. Neumann, February 2017
  • Risks of Automation: A Cautionary Total-System Perspective of Our Cyberfuture, Peter G. Neumann, October 2016
  • The Risks of Self-Auditing Systems: Unforeseen problems can result from the absence of impartial independent evaluations. Rebecca T. Mercuri and Peter G. Neumann, June 2016
  • Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications, Harold Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Matthew Green, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Michael A. Specter, Daniel J. Weitzner, October 2015
  • Routing Money, Not Packets: Revisiting Network Neutrality, Vishal Misra, June 2015
  • Far-Sighted Thinking about Deleterious Computer-Related Events: Considerably more anticipation is needed for what might seriously go wrong. Peter G. Neumann, February 2015
  • Risks and Myths of Cloud Computing and Cloud Storage: Considering existing and new types of risks inherent in cloud services. Peter G. Neumann, October 2014
  • EMV: Why Payment Systems Fail: What lessons might we learn from the chip cards used for payments in Europe, now that the U.S. is adopting them too? Ross Anderson and Steven Murdoch, June 2014
  • An Integrated Approach to Safety and Security Based on System Theory: Applying a more powerful new safety methodology to security risks, Nancy Leveson and William Young, February 2014
  • Controlling for Cybersecurity Risks of Medical Device Software: Medical device hacking is a red herring. But the flaws are real. Kevin Fu and James Blum, October 2013
  • Learning from the Past to Face the Risks of Today: Achieving high-quality safety-critical software requires much more than just rigorous development processes, Nancy Leveson, June 2013
  • More Sight on Foresight: Reflecting on elections, natural disasters, and the future, Peter G. Neumann, February 2013
  • The Foresight Saga, Redux: Short-term thinking is the enemy of the long-term future, Peter G. Neumann, October 2012
  • The Cybersecurity Risk: Increased attention to cybersecurity has not resulted in improved cybersecurity, Simson Garfinkel, June 2012
  • Yet Another Technology Cusp: Confusion, Vendor Wars, and Opportunities, Don Norman, February 2012
  • Modernizing the Danish Democratic Process, Carsten Schürmann, October 2011
  • The Risks of Stopping Too Soon, David L. Parnas, June 2011
  • The Growing Harm of Not Teaching Malware, George Ledin, Jr., February 2011, pp. 32--34
  • Risks of Undisciplined Development, David L. Parnas, October 2010, pp. 25--27
  • Privacy by Design: Moving from Art to Practice, Stuart S. Shapiro, June 2010
  • The Need for a National Cybersecurity Research and Development Agenda, Douglas Maughan, February 2010
  • Reflections on Conficker: An insider's view of the analysis and implications of the Conficker conundrum, Phillip Porras, October 2009
  • Reducing Risks of Implantable Medical Devices: A Prescription to Improve Security and Privacy of Pervasive Health Care, Kevin Fu, June 2009
  • U.S. Election After-Math, Peter G. Neumann, February 2009
  • Risks of Neglecting Infrastructure, Jim Horning and Peter G. Neumann, June 2008
  • The Physical World and the Real World, Steven M. Bellovin, May 2008
  • A Current Affair, Lauren Weinstein, April 2008
  • Wireless Sensor Networks and the Risks of Vigilance, Xiaoming Lu and George Ledin Jr, March 2008
  • Software Transparency and Purity, Pascal Meunier, February 2008
  • The Psychology of Risks, Dr. Leonard S. Zegans, January 2008
  ========================================================

    Inside Risks 252, CACM 65, 6, June 2022

    Toward Total-System Trustworthiness

    Peter G. Neumann

    Inside Risks 251, CACM 64, 6, June 2021

    The Risks of Election Believability (or Lack Thereof)

    Rebecca T. Mercuri and Peter G. Neumann

    Inside Risks 249, CACM 63, 6, June 2020

    How to Curtail Oversensing in the Home: Limiting sensitive information leakage via smart-home sensor data.

    Connor Bolton, Kevin Fu, Josiah Hester, and Jun Han, June 2020.



    Inside Risks 248, CACM 63, 2, February 2020

    Are You Sure Your Software Will Not Kill Anyone? Using software to control potentially unsafe systems requires the use of new software and system engineering approaches.

    Nancy Leveson, February 2020.



    Inside Risks 247, CACM 62, 10, October 2019

    How Might We Increase System Trustworthiness? Summarizing some of the changes that seem increasingly necessary to address known system and network deficiencies and anticipate currently unknown vulnerabilities,

    Peter G. Neumann, October 2019.



    Inside Risks 246, CACM 62, 6, June 2019

    Through Computer Architecture, Darkly: Total-system hardware and microarchitectural issues are becoming increasingly critical

    A. Theodore Markettos, Robert N.M. Watson, Simon W. Moore, Peter Sewell, and Peter G. Neumann, June 2019.



    Inside Risks 245, CACM 61, 11, November 2018

    The Big Picture: A systems-oriented view of trustworthiness,

    Steven Bellovin and Peter G. Neumann



    Inside Risks 244, CACM 61, 6, June 2018

    Risks of Cryptocurrencies

    Nicholas Weaver



    Inside Risks 243, CACM 61, 2, February 2018

    Risks of Trusting the Physics of Sensors: Protecting the Internet of Things with embedded security

    Kevin Fu and Wenyuan Xu



    Inside Risks 242, CACM 60, 10, October 2017

    The Real Risks of Artificial Intelligence: Incidents from the early days of AI research are instructive in the current AI environment

    David Lorge Parnas



    Inside Risks 241, CACM 60, 6, June 2017

    Trustworthiness and Truthfulness Are Essential: Their absence can introduce huge risks

    Peter G. Neumann



    Inside Risks 240, CACM 60, 2, February 2017

    The Future of the Internet of Things: The IoT can become ubiquitous worldwide -- if the pursuit of systemic trustworthiness can overcome the potential risks

    Ulf Lindqvist and Peter G. Neumann



    Inside Risks 239, CACM 59, 10, October 2016

    Risks of Automation: A Cautionary Total-System Perspective of Our Cyberfuture

    Peter G. Neumann



    Inside Risks 238, CACM 59, 6, June 2016

    The Risks of Self-Auditing Systems: Unforeseen problems can result from the absence of impartial independent evaluations.

    Rebecca T. Mercuri and Peter G. Neumann



    Inside Risks 237, CACM 58, 10, October 2015

    Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications

    Harold Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Matthew Green, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Michael A. Specter, Daniel J. Weitzner



    Inside Risks 236, CACM 58, 6, June 2015

    Routing Money, Not Packets: Revisiting Network Neutrality

    Vishal Misra



    Inside Risks 235, CACM 58, 2, February 2015

    Far-Sighted Thinking about Deleterious Computer-Related Events: Considerably more anticipation is needed for what might seriously go wrong.

    Peter G. Neumann



    Inside Risks 234, CACM 57, 10, October 2014

    Risks and Myths of Cloud Computing and Cloud Storage: Considering existing and new types of risks inherent in cloud services.

    Peter G. Neumann



    Inside Risks 233, CACM 57, 6, June 2014

    EMV: Why Payment Systems Fail: What lessons might we learn from the chip cards used for payments in Europe, now that the U.S. is adopting them too?

    Ross Anderson and Steven Murdoch



    Inside Risks 232, CACM 57, 2, February 2014

    An Integrated Approach to Safety and Security Based on System Theory: Applying a more powerful new safety methodology to security risks

    Nancy Leveson and William Young



    Inside Risks 231, CACM 56, 10, October 2013

    Controlling for Cybersecurity Risks of Medical Device Software: Medical device hacking is a red herring. But the flaws are real.

    Kevin Fu and James Blum



    Inside Risks 230, CACM 56, 6, June 2013

    Learning from the Past to Face the Risks of Today,

    Nancy Leveson



    Inside Risks 229, CACM 56, 2, February 2013

    More Sight on Foresight: Reflecting on elections, natural disasters, and the future,

    Peter G. Neumann



    Inside Risks 228, CACM 55, 10, October 2012

    The Foresight Saga, Redux: Short-term thinking is the enemy of the long-term future

    Peter G. Neumann



    Inside Risks 227, CACM 55, 6, June 2012

    The Cybersecurity Risk: Increased attention to cybersecurity has not resulted in improved cybersecurity.

    Simson Garfinkel



    Inside Risks 226, CACM 55, 2, February 2012

    Yet Another Technology Cusp: Confusion, Vendor Wars, and Opportunities

    Don Norman



    Inside Risks 225, CACM 54, 10, October 2011

    Modernizing the Danish Democratic Process

    Carsten Schürmann

    Examining the socio-technological issues involved in Denmark's decision to pursue the legalization of electronic elections.

    On March 3, 2009, the German Supreme Court decided that the use of electronic voting machines in parliamentary elections is unconstitutional as long as it is not possible for citizens to exercise their right to inspect and verify the essential steps of the election. The verdict did not rule the voting machines themselves unconstitutional, but rather the particular election law that legitimized their use during the 2005 election. Electronic elections, like any traditional election, must be under public control. As this has not yet been achieved, the Supreme Court decision effectively outlawed the use of e-voting machines for German parliamentary elections, at least for now.

    European E-Voting Initiatives

    One might think that such events would have slowed the efforts of other European nations to push e-voting technology into polling stations or even homes. This, however, is not the case. Many European countries are newly invigorated, running experiments with voter registration, vote casting, and vote tallying. Switzerland, a direct democracy, legalized Internet elections in 2009. Norway used a newly developed online voting system for its parliamentary elections on September 12, 2011. In 2005, Estonians were the first permitted to vote from their homes, using their national ID cards and off-the-shelf smartcard readers connected to their computers to authenticate themselves.

    But why would governments advocate this kind of technology, risking decades of democratic achievements, in what seemingly contradicts common sense? There is more to this discussion than meets the eye. Governments and administrations currently revisit former decisions on how to implement the voting process and view them in the new light of information technology. Modern mobile devices such as smartphones, for example, interact and tinker with the very assumptions that secret and free elections are built upon. Off-the-shelf scanning technology can be used to identify individual sheets of paper simply by the composition of their fibers. It is also easy to take and transmit a photo or even a live video of the vote-casting process. European nations are pushing forward with the adoption of electronic and even Internet voting architectures, because their governments feel the risks of staying with the status quo outweigh the risks associated with modernizing the democratic process. I believe these European initiatives are healthy, necessary, and natural. The evolution of the democratic process must not come to a standstill just because serious challenges lie ahead.


    Only a very few countries in Europe have resisted (so far) the urge to jump on the e-voting bandwagon -- among them is Denmark, a small country of slightly more than 5.5 million people. Denmark has a long history of democracy in Europe and is top-ranked according to the information society index (ISI) -- an index often quoted for comparing countries according to their ability to access and absorb information and information technology. E-voting is not banned by law, no trials have been conducted on the national level to date, citizens generally respect and trust their government and politicians, and there is an educated electorate with a pervasive desire for fairness, openness, and equality.

    Earlier this year, the Danish Board of Technology, an advisory committee to the government, released a report recommending that Denmark take initial exploratory steps toward research in and experimentation with e-voting technology, with the goal of improving the implementation of constitutional law. One aspect of the recommendations is the requirement of a secret and free vote for all -- not just those who are able to see, read, write, or visit the polling station on Election Day. The report even went so far as to propose an experimental deployment of mobile networked voting vans, visiting the elderly and those with disabilities on Election Day. There is little to disagree with: information technology can make elections more inclusive, more lawful, and also more convenient for those who run them.

    Denmark has a long tradition of public control in nationwide elections. On Election Day, volunteers assume the role of election observers who oversee the entire voting process from early morning to late at night. They meet, look over each other's shoulders, tally and recount every vote as required by law, and by doing so, create much of the overall trust in the Danish voting process. Unfortunately, the number of volunteer election observers is dwindling, which is problematic, as demand currently exceeds supply. Denmark's municipalities do not expect this trend to change soon, but hope instead for information technology to complement the smaller corps of election observers.

    Therefore, Denmark will sooner or later follow its European partners and jump on the bandwagon -- at least for casting votes. Since 1984, the final result of an election -- the number of seats awarded to each party in Parliament -- has been calculated by a computer program that runs on a Unix workstation located at the Ministry of the Interior and Health. The results computed that way are legally binding. For casting the vote, on the other hand, Danish law needs to be changed, because in its current form it is prohibitively restrictive. All details are regulated, including how to design a ballot form and how to distinguish valid from invalid votes.
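    The column does not describe that program's algorithm, and Denmark's statutory apportionment combines several rules. Purely as an illustration of how such a seat calculation can work, here is a minimal sketch of the D'Hondt divisor method, a standard rule for proportional seat allocation; the party names and vote counts are hypothetical:

    ```python
    from typing import Dict

    def dhondt_seats(votes: Dict[str, int], num_seats: int) -> Dict[str, int]:
        """Allocate num_seats among parties using the D'Hondt divisor method."""
        seats = {party: 0 for party in votes}
        for _ in range(num_seats):
            # Each party's current quotient is votes / (seats won so far + 1);
            # the party with the highest quotient takes the next seat.
            winner = max(votes, key=lambda p: votes[p] / (seats[p] + 1))
            seats[winner] += 1
        return seats

    # Hypothetical vote counts for four parties contesting 7 seats.
    print(dhondt_seats({"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}, 7))
    # → {'A': 3, 'B': 3, 'C': 1, 'D': 0}
    ```

    The point of such a deterministic procedure is precisely what makes the result legally bindable: given the same vote totals, any observer can recompute and verify the seat distribution.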

    Denmark has evolved its voting laws over many decades. It became a constitutional democracy in 1849, ballots were made secret in 1901, women obtained the right to vote in 1915, ballots were allowed to be cast by letter beginning in 1920, Danes living abroad obtained the right to vote in 1970, and in 1978 the legal voting age was set to 18. In 2009, the voting law was changed to require that any visually impaired voter be accompanied by an election observer when casting a vote. The law will soon evolve to accommodate information technology, and this will happen in the spirit of the Danish tradition of participatory design. Decision makers, administrators, and (computer, social, and political) scientists will work together in the best interest of democracy.

    In this, I believe, Denmark stands apart from many other countries that are currently introducing new voting laws, new voting culture, and new voting technologies without listening to the voices of scientists and other specialists. Decisions are all too often made by politicians, administrators, and industry alone, without properly attempting to understand the nature of the technological challenges, their vulnerabilities, and their effects on the trust of the voters and society as a whole. Scientists carry a large socio-technological responsibility and must be heard. Given the opportunity, they will act on behalf of the public, play the role of the independent auditor, keep an eye on the innovation and improvements of the democratic process, and increase public trust.

    Defining a Metric for Success

    Denmark's decision to pursue the legalization of electronic elections will in large part depend on the implementation of the recommendations outlined in the report of the Danish Board of Technology and on the success of the suggested trials. A few key groups will keep a close eye on these trials: the Ministry of the Interior and Health, which is responsible for their constitutionality; the municipalities, which are responsible for a lawful, smooth, and efficient implementation; the suppliers, who are responsible for the quality of the voting solution; and the scientists, who are responsible for ensuring the deployed technology serves the best interest of the public. Therefore, decision makers will be confronted with conflicting opinions about whether or not a trial was successful. It is thus prudent that all groups get together ahead of time and define a coherent metric for measuring success.

    It might seem like an obvious task to tackle, but it is not at all clear how to define this metric. Abstract concepts, such as trust, belief, and perception are difficult to model logically or mathematically; however, they need to be brought together with the formal aspects of hardware, software, and engineering in a meaningful way. Elections are cyber-social systems.

    The usual indicators (such as voter turnout or opinion polls) are inadequate to measure success. Voter turnout, which is consistently above 85% in Denmark, is too infrequent a measure to be useful. Opinion polls that may be conducted more frequently are usually too volatile to be indicative. In February 2011, a poll conducted by the Danish newspaper Børsen showed that 63% (sample size 1,053) of Danish citizens of all ages would happily vote electronically even if it meant they must authenticate using a personal digital signature.

    Instead, the metric must mirror the scientific evaluation of technical and social observations collected during the trial. It must measure the functional correctness of the voting infrastructure: how well the final result matches the voters' intent, how well privacy and secrecy are secured, and to what extent Danish voting culture is preserved. This is where the real challenge lies. Even for simple election schemes, such as winner-take-all, technological solutions consist of many complex and communicating components that interface in various ways with election officials and voters. Scientists need to convince themselves that the system will always perform according to specification, even under the most obscure and unlikely circumstances, including software bugs, malicious hacker attacks, or power outages. Furthermore, they must be sure the collective trust of the population is not negatively affected and that there are clear and accessible mechanisms to exercise public control. Electronic elections in controlled environments thus have a much better chance of success than elections in uncontrolled environments, such as Internet-based (remote) elections, for which there are still more open problems than solved ones. A modernized voting system does not need to be perfect, but it should implement a process that is at least as trustworthy as the one we know today.

    Finally, the metric must reflect the operational aspects of carrying out the electronic election. Election observers follow new protocols, initialize new machines and read out final results, handle unfamiliar physical evidence, and respond to unknown and unforeseen problems, ranging from hardware and software failures to denial-of-service attacks. For the trials, election observers must be adequately prepared, and their reactions and experiences must be carefully documented and evaluated.


    With the right metric in place, Denmark will be in an excellent position to begin a rigorous scientific analysis of various voting schemes, technologies, and platforms. Politicians, administrators, and even suppliers have already signaled their willingness to cooperate and have promised scientists access to all parts of the election. If we do things right, Denmark will not repeat past mistakes of other nations, but its solution may serve as a guide for how to define future democratic processes.

    Carsten Schürmann is an associate professor at the IT University of Copenhagen, where he is leading the DemTech research project that aims to modernize the Danish democratic process.


    Inside Risks 224, CACM 54, 6, June 2011

    The Risks of Stopping Too Soon

    David L. Parnas

    A PDF of this column is available without the artwork (``primitive buzz-diagram sighted in the wild -- the meaning of which eluded capture.'').


    Inside Risks 223, CACM 54, 2, 32--34, February 2011

    The Growing Harm of Not Teaching Malware

    George Ledin, Jr.



    Inside Risks 222, CACM 53, 10, 25--27, October 2010

    Risks of Undisciplined Development

    David L. Parnas



    Inside Risks 221, CACM 53, 6, June 2010

    Privacy by Design: Moving from Art to Practice

    Stuart S. Shapiro

    Most people involved with system development are well aware of the adage that you are better off designing in security and privacy (and pretty much any other `non-functional' requirements) from the start, rather than trying to add them later. Yet, if this is the conventional wisdom, why is the conventional outcome still so frequently systems with major flaws in these areas?

    Part of the problem is that while people know how to talk about functionality, they are typically a lot less fluent in security and privacy. They may sincerely want security and privacy, but they seldom know how to specify what they seek. Specifying functionality, on the other hand, is a little more straightforward, and thus the system that previously could make only regular coffee in addition to doing word processing will now make espresso too. (Whether this functionality actually meets user needs is another matter.)

    The fact that it is often not apparent what security and privacy should look like points to some deeper issues. Security and privacy tend to be articulated at a level of abstraction that often makes their specific manifestations less than obvious, to either customers or system developers.

    This is not to say that the emperor has no clothes; far from it. There are substantial bodies of knowledge for some non-functional areas, including security, but figuring out how to translate the abstract principles, models, and mechanisms into comprehensive specific requirements for specific systems operating within specific contexts is seldom straightforward. That translation process is crucial to designing these properties into systems, but it also tends to be the most problematic activity and the activity for which the least guidance is provided. The sheer complexity of most modern systems compounds the problem.

    Security, though, is better positioned than privacy. Privacy---or informational privacy at least---certainly has commonly understood and accepted principles in the form of Fair Information Practices. It presently doesn't have much else. Models and mechanisms that support privacy are scarce, not generally known, and rarely understood by either customers or developers.

    As more and more things become digitized, informational privacy increasingly covers areas for which Fair Information Practices were never envisioned. Biometrics, physical surveillance, genetics, and behavioral profiling are just a few of the areas that are straining Fair Information Practices to the breaking point. More sophisticated models are emerging for thinking about privacy risk, as represented by the work of scholars such as Helen Nissenbaum and Daniel Solove. However, if not associated with privacy protection mechanisms and supported by translation guidance, the impact of such models is likely to be much less than they deserve.

    A salient recent example is the development and deployment of whole-body imaging (WBI) machines at airports for physical screening of passengers. In their original incarnation, these machines perform what has been dubbed a `virtual strip search' due to the body image that is presented. These machines are currently being deployed at U.S. airports in a way which is arguably compliant with Fair Information Practices. Yet they typically operate in a way that many people find offensive.

    The intended purpose certainly is not to collect, use, disclose, and retain naked images of people; it is to detect potentially dangerous items they may be carrying on their persons at that moment in time. Fair Information Practices include minimization of personal information collected, used, disclosed, and retained, consistent with the intended purpose.

    This has profound implications for how image data is processed, presented, and stored. It should be processed so that at no point does there ever exist an exposed body image that can be viewed or stored. It should be presented in a non-exposed form (chalk outline, fully-clothed person, etc.) with indicators where things have been detected. None of it should be retained beyond the immediate encounter. That almost none of these design elements were originally specified illustrates what too often happens in the absence of applicable models and mechanisms and their requisite translation, along with principles, into effective requirements.

    In this instance, Solove's concept of exposure provides the necessary (partial) model. Exposure is a privacy violation that induces feelings of vulnerability and distress in the individual by revealing things we customarily conceal. The potential harm from exposure is not restricted to modesty or dignity. A friend is convinced that her pubescent daughter, who is currently extremely self-conscious about her body, would be quite literally traumatized if forced to undergo WBI. If physical strip searches would raise concern, why not WBI? Real damage, physical as well as psychological, can occur in the context of body image neuroses.

    If one recognizes from the outset the range of privacy risks represented by exposure, and the relevance of exposure for WBI, one then stands a chance of effectively moving from principles to requirements. Even then, though, the translation process is not necessarily obvious.

    Supposedly, WBI machines being used by the U.S. Transportation Security Administration are not capable of retaining images when in normal operating mode. (They have this capability when in testing mode, though, so significant residual risk may exist.) Other necessary mechanisms were not originally specified. Some models of WBI are being retrofitted to present a non-exposed image, but the issue of intermediate processing remains. Some models developed after the initial wave apparently implement all the necessary control mechanisms; privacy really was designed in. Why wasn't it designed in from the beginning and across the board? The poor state of practice of privacy by design offers a partial explanation. The state of the art, though, is advancing.

    The importance of designing meaningful privacy into systems at the beginning of the development process, rather than bolting it on at the end (or overlooking it entirely), is being increasingly recognized in some quarters. A number of initiatives and activities are using the rubric of privacy by design. In Canada, the Ontario Information and Privacy Commissioner's Office has published a number of studies and statements on how privacy can be designed into specific kinds of systems. One example is electronic (i.e., RFID-enabled) driver's licenses, for which the inclusion of a built-in on/off switch is advocated, thereby providing individuals with direct, immediate, and dynamic control over whether the personal information embedded in the license can be remotely read or not. Such a mechanism would support several Fair Information Practices, most notably collecting personal information only with the knowledge and consent of the individual. This approach is clearly applicable as well to other kinds of RFID-enabled cards and documents carrying personal information.

    Similar efforts have been sponsored by the U.K. Information Commissioner's Office. This work has taken a somewhat more systemic perspective, looking less at the application of privacy by design to specific types of technology and more at how to effectively integrate privacy into the system development life cycle through measures such as privacy impact assessments and `practical' privacy standards. It also emphasizes the potential role of privacy-enhancing technologies (PETs) that can be integrated with or into other systems. While some of these are oriented toward empowering individuals, others---which might more appropriately be labeled Enterprise PETs---are oriented toward supporting organizational stewardship of personal information.

    However, state of the art is state of the art. Supporting the translation of abstract principles, models, and mechanisms into implementable requirements, turning this into a repeatable process, and embedding that process in the system-development life cycle is no small matter. Security has been at it a lot longer than privacy, and it is still running into problems. But at least security has a significant repertoire of principles, models, and mechanisms. Privacy has not really reached this stage yet.

    So, if privacy by design is still a ways off, and security by design still leaves something to be desired, how do we get there from here? There's little doubt that appropriately trained engineers (including security engineers) are key to supporting the effective translation of principles, models, and mechanisms into system requirements. There doesn't yet appear to be such a thing as a privacy engineer; given the relative paucity of models and mechanisms, that's not too surprising. Until we build up the latter, we won't have a sufficient basis for the former. For privacy by design to extend beyond a small circle of advocates and experts and become the state of practice, we'll need both.

    This will require recognition that there is a distinct and necessary technical discipline of privacy, just as there is a distinct and necessary technical discipline of security---even if neither is fully formed. If that can be accomplished, it will create a home and an incentive for the models and mechanisms privacy by design so badly needs.

    This is not to minimize the difficulty of more effectively and consistently translating security's body of knowledge (which is still incomplete) into implementable and robust requirements. Both security and privacy need to receive more explicit and directed attention than they often do as areas of research and education.

    Security by design and privacy by design can be achieved only by design. We need a firmer grasp of the obvious.

    Stuart S. Shapiro is Principal Information Privacy and Security Engineer at The MITRE Corporation, Bedford MA.


    Inside Risks 220, CACM 53, 2, February 2010

    The Need for a National Cybersecurity Research and Development Agenda

    Douglas Maughan

    [Inside Risks articles over the past two decades have frequently been concerned with trustworthiness of computer-communication systems and the applications built upon them. We consider here what is needed to attain considerable new progress toward avoiding the risks that have prevailed in the past as we address a national cybersecurity R&D agenda. Although the author writes from the perspective of someone deeply involved in research and development of trustworthy systems in the U.S. Department of Homeland Security, what is described is applicable much more universally. The risks of not doing what is described are very significant.]

    Cyberspace is the complex, dynamic, globally-interconnected digital and information infrastructure that underpins every facet of American society and provides critical support for our personal communication, economy, civil infrastructure, public safety, and national security. Just as our dependence on cyberspace is deep, so too must be our trust in cyberspace, and we must provide technical and policy solutions that enable four critical aspects of trustworthy cyberspace: security, reliability, privacy and usability.

    The United States and the world at large are at a significant decision point. We must continue to defend our current systems and networks. At the same time, we must attempt to be ahead of our adversaries, and ensure that future generations of technology will position us to better protect our critical infrastructures and respond to attacks from our adversaries. Government-funded research and development must play an increasing role to enable us to accomplish this goal of national and economic security.


    On January 8, 2008, National Security Presidential Directive 54/Homeland Security Presidential Directive 23 formalized the Comprehensive National Cybersecurity Initiative (CNCI) and a series of continuous efforts designed to establish a frontline defense (reducing current vulnerabilities and preventing intrusions), defending against the full spectrum of threats by using intelligence and strengthening supply chain security, and shaping the future environment by enhancing our research, development, and education, as well as investing in `leap-ahead' technologies.

    No single federal agency `owns' the issue of cybersecurity. In fact, the federal government does not uniquely own cybersecurity. It is a national and global challenge with far-reaching consequences that requires a cooperative, comprehensive effort across the public and private sectors. However, as it has done historically, the U.S. Government R&D community, working in close cooperation with private-sector partners in key technology areas, can jump-start the necessary fundamental technical transformation our nation requires.


    The Federal government needs to re-energize two key partnerships if we are to be successful in securing the future cyberspace: the partnership with the educational system and the partnership with the private sector. The Taulbee Survey [1] has shown that our current educational system is not producing the cyberspace workers of the future; meanwhile, the current public-private partnerships are inadequate for taking R&D results and deploying them across the global infrastructure.


    A serious, long-term problem, with ramifications for national security and economic growth, is looming: America is not producing enough U.S. citizens with Computer Science (CS) and Science, Technology, Engineering, and Mathematics (STEM) degrees. The decline in CS enrollments and degrees is most acute. The decline in undergraduate CS degrees portends the decline in masters and doctoral degrees as well. Enrollments in major university CS departments have fallen sharply in the last few years, while the demand for computer scientists and software engineers is high and growing. The Taulbee Survey [1] confirmed that CS (including computer engineering) enrollments are down fifty percent from only five years ago, a precipitous drop by any measure. Since CS degrees are a subset of the overall requirement for STEM degrees and show the most significant downturn, CS degree production can be considered as a bellwether to the overall condition and trend of STEM education. The problems with other STEM degrees are equally disconcerting and require immediate and effective action. At the same time, STEM jobs are growing, and CS jobs are growing faster than the national average.

    At a time when the U.S. is attacked daily and as global competition continues to increase, the U.S. cannot afford continued ineffective educational measures and programs. Re-vitalizing educational systems can take years before results are seen. As part of an overall national Cybersecurity R&D agenda, the U.S. must incite an extraordinary shift in the number of students in STEM education quickly to avoid a serious shortage of computer scientists, engineers and technologists in the decades to come.

    Public-Private Partnerships

    Information and communications networks are largely owned and operated by the private sector, both nationally and internationally. Thus, addressing cybersecurity issues requires public-private partnerships as well as international cooperation. The public and private sectors' interests are dependent on each other and share a responsibility for ensuring a secure, reliable infrastructure. As the Federal government moves forward to enhance its partnerships with the private sector, research and development must be included in the discussion. More and more private-sector R&D is falling by the wayside; it is therefore even more important that government-funded R&D make its way to the private sector, given that the private sector designs, builds, owns, and operates most of the critical infrastructures.

    Technical Agenda

    Over the past decade there have been a significant number of R&D agendas published by various academic and industry groups, and government departments and agencies. [2] A 2006 Federal R&D Plan identified at least 8 areas of interest with over 50 project topics that were either being funded or should be funded by Federal R&D entities. Many of these topic areas have been on the various lists for over a decade. Why? Because we have under-invested in these R&D areas as a nation, both within the government and private R&D communities.

    The Comprehensive National Cybersecurity Initiative (CNCI) and the President's Cyberspace Policy Review [4] challenged the federal networks and IT research community to figure out how to `change the game' to address these technical issues. Over the past year, through the National Cyber Leap Year (NCLY) Summit and a wide range of other activities, the government research community sought to elicit the best ideas from the research and technology community.

    The vision of the CNCI research community over the next 10 years is to ``transform the cyber-infrastructure to be resistant to attack so that critical national interests are protected from catastrophic damage and our society can confidently adopt new technological advances.''

    The leap-ahead strategy aligns with the consensus of the nation's networking and cybersecurity research communities: That the only long-term solution to the vulnerabilities of today's networking and information technologies is to ensure that future generations of these technologies are designed with security built in from the ground up. Federal agencies with mission-critical needs for increased cybersecurity, which includes information assurance as well as network and system security, can play a direct role in determining research priorities and assessing emerging technology prototypes.

    The Science and Technology (S&T) Directorate of the U.S. Department of Homeland Security (DHS) has published its own roadmap [3] in an effort to provide more R&D direction for the community. The Cybersecurity Research Roadmap [3] addresses a broad R&D agenda that is required to enable us to produce the technologies that will protect our information systems and networks in the future. The document provides detailed research and development agendas relating to 11 hard problem areas in cybersecurity, for use by agencies of the U.S. Government. The research topics in this roadmap, however, are relevant not just to governments, but also to the private sector and anyone else funding or performing R&D.

    While progress in any of the areas identified in previous reports noted above would be valuable, I believe the `top ten' list consists of the following (with short rationale included):

    1. Software Assurance - poorly written software is at the root of all of our security problems
    2. Metrics - we can't measure our systems, thus we cannot manage them
    3. Usable Security - information security technologies have not been deployed because they are not easily usable
    4. Identity Management - the ability to know who you're communicating with will help eliminate many of today's online problems, including attribution
    5. Malware - today's problems continue because of a lack of dealing with malicious software and its perpetrators
    6. Insider Threat - one of the biggest threats to all sectors that has not been adequately addressed
    7. Hardware Security - today's computing systems can be improved with new thinking about the next generation of hardware built from the start with security in mind
    8. Data Provenance - data has the most value, yet we have no mechanisms to know what has happened to data from its inception
    9. Trustworthy Systems - current systems are unable to provide assurances of correct operation to include resiliency
    10. Cyber Economics - we do not understand the economics behind cybersecurity for either the good guy or the bad guy.

    Lifecycle of Innovation

    R&D programs, including cyber security R&D, consistently have difficulty in taking the research through a path of development, testing, evaluation, and transition into operational environments. Past experience shows that transition plans developed and applied early in the life cycle of the research program, with probable transition paths for the research product, are effective in achieving successful transfer from research to application and use. It is equally important, however, to acknowledge that these plans are subject to change and must be reviewed often. It is also important to note that different technologies are better suited for different technology transition paths and in some instances the choice of the transition path will mean success or failure for the ultimate product. There are guiding principles for transitioning research products. These principles involve lessons learned about the effects of time/schedule, budgets, customer or end-user participation, demonstrations, testing and evaluation, product partnerships, and other factors.

    A July 2007 Department of Defense Report to Congress on Technology Transition noted there is evidence that a chasm exists between the DoD S&T communities and acquisition of a system prototype demonstration in an operational environment. DoD is not the only government agency that struggles with technology transition. That chasm, commonly referred to as the `valley of death,' can be bridged only through cooperative efforts and investments by both research and acquisition communities.

    There are at least 5 canonical transition paths for research funded by the Federal Government. These transition paths are affected by the nature of the technology, the intended end-user, participants in the research program, and other external circumstances. Success in research product transition is often accomplished by the dedication of the program manager through opportunistic channels of demonstration, partnering, and sometimes good fortune. However, no single approach is more effective than a proactive technology champion who is allowed the freedom to seek potential utilization of the research product. The 5 canonical transition paths are detailed below.

    Department/Agency direct to Acquisition
    Department/Agency to Government Lab
    Department/Agency to Industry
    Department/Agency to Academia to Industry
    Department/Agency to Open Source Community

    In order to achieve the full results of R&D, technology transfer needs to be a key consideration for all R&D investments. This requires the federal government to move past working models where most R&D programs support only limited operational evaluations and experiments. In these old working models, most R&D program managers consider their job done with final reports, and most research performers consider their job done with publications. In order to move forward, Government-funded R&D activities need to focus on the real goal: technology transfer, which follows transition. Current R&D Principal Investigators (PIs) and Program Managers (PMs) aren't rewarded for technology transfer. Academic PIs are rewarded for publications, not technology transfer. The government R&D community needs to reward government program managers and PIs for transition progress.


    As noted in [4], we need to produce an updated national strategy for securing cyberspace. Research and Development must be a full partner in that discussion. It is only through `innovation creation' that our nation can regain its position as a leader in cyberspace.

    References 1. Taulbee Survey 2006-2007, Computer Research Association, May 2008 Computing Research News, Vol. 20, No. 3.
    2. These documents can be found online:
    3. A Roadmap for Cybersecurity Research, Science and Technology Directorate, U.S. Department of Homeland Security, November 2009.
    4. White House Cyberspace Policy Review:

    Douglas Maughan ( is Program Manager, Cyber Security R&D, Command, Control and Interoperability Division, Science and Technology Directorate, U.S. Department of Homeland Security, Washington, DC.


    Inside Risks 219, CACM 52, 10, October 2009

    Reflections on Conficker: An insider's view of the analysis and implications of the Conficker conundrum

    Phillip Porras

    Conficker is the name applied to a sequence of malicious software. It initially exploited a flaw in Microsoft software, but has undergone significant evolution since (versions A through E thus far). This article summarizes some of the most interesting aspects of it, from the viewpoint of an insider who has been analyzing it. The lessons for the future are particularly relevant.

    In mid-February 2009, something unusual, but not unprecedented, occurred in the malware defense community. Microsoft posted its fifth bounty on the heads of those responsible for one of the latest multi-million-node malware outbreaks [4]. (Previous bounties have included Sasser (awarded), Mydoom, MSBlaster, and Sobig.) Conficker's alarming growth rate in early 2009, along with the apparent mystery surrounding its ultimate purpose, had raised enough concern among whitehats that reports were now flowing out to the public and raising concerns in Washington DC.

    Was it all hype, of relatively small importance among an ever-increasing stream of new and sophisticated malware families? What weaknesses in the ways of the Internet had this botnet brought into glaring focus? Why was it here and when would it end? More broadly, why do some malware outbreaks garner wide attention while other multi-million-victim epidemics (such as Seneka) gather little notice? All are fair questions, and to some degree they still remain open.

    In several ways, Conficker was not fundamentally novel. The primary infiltration method used by Conficker to infect PCs around the world was known well before Conficker began to spread in late November 2008. The earliest accounts of the Microsoft Windows buffer overflow used by Conficker arose in early September 2008, and a patch for this vulnerability had been distributed nearly a month before Conficker was released. Neither was Conficker the first to introduce dynamic domain generation as a method for selecting the daily Internet rendezvous points used to coordinate its infected population. Prior malware such as Bobax, Kraken, and more recently Torpig, along with a few other malware families, have used dynamic domain generation as well. Conficker's most recent introduction of an encrypted peer-to-peer (P2P) channel to upgrade its ability to rapidly disseminate malware binaries is also preceded by other well-established kin, Storm worm being perhaps the best-known example.

    Nevertheless, among the long history of malware epidemics, very few can claim sustained worldwide infiltration of multiple millions of infected drones. The rapidly evolving set of Conficker variants does represent the state of the art in advanced commercial Internet malware, and provides several valuable insights for those willing to look closely.

    What Have We Learned?

    Nearly from its inception, Conficker demonstrated just how effectively a random scanning worm can take advantage of the huge worldwide pool of poorly managed and unpatched Internet-accessible computers. Even on those occasions when patches are diligently produced, widely publicized, and auto-disseminated by operating system (OS) and application manufacturers, Conficker demonstrates that millions of Internet-accessible machines may remain permanently vulnerable. In some cases, even security-conscious environments may elect to forgo automated software patching, choosing to trade off vulnerability exposure for some perceived notion of platform stability [7].

    Another lesson of Conficker is the ability of malware to manipulate the current facilities through which Internet name space is governed. Dynamic domain generation algorithms (DGAs), along with fast flux (domain name lookups that translate to hundreds or thousands of potential IPs), are increasingly adopted by malware perpetrators as a retort to the growing efficiency with which whitehats have been able to behead whole botnets by quickly identifying and removing their command and control (C&C) sites and redirecting all bot client links. While not an original concept, Conficker's DGA produced a new and unique struggle between Conficker's authors and the whitehat community, who fought for control of the daily sets of domains used as Conficker's Internet rendezvous points [3].
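    The core idea of a DGA can be sketched in a few lines. The following is a hypothetical illustration, not Conficker's actual algorithm: bot and botmaster independently derive the same daily list of candidate rendezvous domains from a shared, predictable seed (here, the date), so there is no single fixed C&C address to take down; defenders must instead contest every generated domain, every day. All names and constants below are invented for illustration.

    ```python
    import hashlib
    from datetime import date

    TLDS = [".com", ".net", ".org"]  # illustrative TLD pool, not Conficker's

    def daily_domains(day: date, count: int = 250) -> list[str]:
        """Derive a deterministic list of candidate rendezvous domains for a day."""
        domains = []
        for i in range(count):
            seed = f"{day.isoformat()}-{i}".encode()
            digest = hashlib.sha256(seed).hexdigest()
            # Map hex digits to letters to form a plausible-looking label.
            label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
            domains.append(label + TLDS[i % len(TLDS)])
        return domains

    # Bot and controller, sharing only the algorithm and the date,
    # compute identical lists without ever communicating.
    print(daily_domains(date(2009, 3, 15))[:3])
    ```

    Because the list is deterministic, whitehats who reverse-engineer the algorithm can pre-compute tomorrow's domains and attempt to register or block them first, which is precisely the struggle over rendezvous points described above.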

    Yet another lesson from the study of Conficker is the ominous sophistication with which modern malware is able to terminate, disable, reconfigure, or blackhole native OS and third-party security services [6]. Today's malware truly poses a comprehensive challenge to our legacy host-based security products, including Microsoft's own anti-malware and host recovery technologies. Conficker offers a nice illustration of the degree to which security vendors are challenged to not just hunt for malicious logic, but to defend their own availability, integrity, and the network connectivity vital to providing them a continual flow of the latest malware threat intelligence. To address this concern, we may eventually need new OS services (perhaps even hardware support) specifically designed to help third-party security applications maintain their foothold within the host.
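    As a toy illustration of the availability battle described above, the sketch below shows a minimal user-space watchdog that restarts a monitored "security service" process whenever it is terminated. It is purely hypothetical: malware with sufficient privileges can simply kill the watchdog as well, which is exactly why the text argues that OS services or hardware support may eventually be needed.

    ```python
    import subprocess
    import sys

    # Stand-in for a long-running security service (hypothetical).
    SERVICE_CMD = [sys.executable, "-c", "import time; time.sleep(60)"]

    def run_with_watchdog(kill_events: int) -> int:
        """Start the service, restart it after each simulated kill.

        Returns how many times the service was (re)started."""
        starts = 0
        proc = subprocess.Popen(SERVICE_CMD)
        starts += 1
        for _ in range(kill_events):
            proc.terminate()  # simulate malware terminating the service
            proc.wait()
            # Watchdog notices the service is gone and restarts it.
            proc = subprocess.Popen(SERVICE_CMD)
            starts += 1
        proc.terminate()
        proc.wait()
        return starts
    ```

    The design limitation is visible in the code itself: the watchdog lives in the same privilege domain as the service it protects, so it defends only against the narrow case where the service dies and the watchdog does not.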

    What Is Conficker's Purpose?

    Perhaps one of the main reasons why Conficker had gained so much early attention was our initial lack of understanding of why it was there. From analyzing its internal binary logic, there is little mystery to Conficker. It is, in fact, a robust and secure distribution utility for disseminating malicious binary applications to millions of computers across the Internet. This utility incorporates a potent arsenal of methods to defend itself from security products, updates, and diagnosis tools. Its authors maintain a rapid development pace and defend their current foothold on their large pool of Internet-connected victim hosts.

    Nevertheless, knowing what it can do does not tell us why it is there. What did they plan to send to these infected drones, and for what purpose? Early speculation included everything from typical malware business models (spam, rogueAV, host trading, data theft, phishing), to building the next `Dark Google' [5], to fears of full-fledged nation-state information warfare. In some sense, we are fortunate that it now appears that Conficker is currently being used as a platform for conducting wide-scale fraud, SPAM, and general Internet misuse (rather traditional uses with well-understood motives). As recently as April 2009, Conficker-infected hosts have been observed downloading Waledac from known Waledac server sites, and Waledac is known to serve SPAM. It has also been observed installing rogue antivirus fraudware, which has proven a lucrative business for malware developers [3].

    Is Conficker Over?

    From October to April 2009, Conficker's authors had produced five major variants, lettered A through E---a development pace that would rival most Silicon Valley startups. With the exception of Conficker's variant E, which appeared in April and committed suicide on May 5th, Conficker is here to stay, barring some significant organized eradication campaign that goes well beyond security patches. Unlike traditional botnets that lay dormant until instructed, Conficker hosts operate with autonomy, independently and permanently scanning for new victims, and constantly seeking out new coordination points (i.e., new Internet rendezvous points and peers for its P2P protocol). However, despite their constant hunt for new victims, our Conficker A and B daily census (C is an overlay on prior-infected hosts) appears to have stabilized at between 5 and 5.5 million unique IPs (as of July 2009) [1]. Nevertheless, any new exploit (i.e., a new propagation method) that Conficker's authors decide to distribute is but one peer exchange away from every current Conficker victim. It is most probable that Conficker will remain a profitable criminal tool and relevant threat to the Internet for the foreseeable future.

    Is There Any Good News?

    Yes. Perhaps in ways that the Conficker developers have not foreseen, and certainly to their detriment, Conficker has been somewhat of a catalyst that helped unify a large group of professional and academic whitehats. Whitehats on invitation-only forums have certainly self-organized before to discuss and share insights. But Conficker brought together a focused group of individuals on a much larger scale, with a clearly defined target, now called the Conficker Working Group (CWG) [1].

    The details of the CWG and its structure are outside the scope of this article, but the output from this group provides some insight into its capabilities. Perhaps most visible have been the CWG's efforts to work with TLD managers to block Internet rendezvous point domains from use by Conficker's authors. Additionally, group members have posted numerous detailed analyses of the Conficker variants, and have used this information to rapidly develop diagnosis and remediation utilities that are widely available to the public. They actively track the infected population, and have worked with organizations and governments to help identify and remove infected machines. They continue to provide government policy makers, the Internet governance community, and Internet data providers with information to better reduce the impact of future malware epidemics. Whether such organized efforts can be sustained and applied to new Internet threats has yet to be seen.


    1. Conficker Working Group. Home Page. June 2009.
    2. Jim Giles, The inside story of the Conficker worm, New Scientist, 12 June 2009.
    3. Brian Krebs, Massive Profits Fueling Rogue Antivirus Market, The Washington Post, 16 March 2009.
    4. R. Lemos, Cabal forms to fight Conficker, offers bounty, Security Focus, 13 February 2009.
    5. J. Markoff, The Conficker worm: April fool's joke or unthinkable disaster? Bits: The New York Times, 19 March 2009.
    6. P.A. Porras, H. Saidi, and V. Yegneswaran, Conficker C analysis, SRI International, Technical Report, 4 April 2009.
    7. C. Williams, Conficker seizes city's hospital network, The Register (UK), 20 January 2009.
    Phillip Porras ( leads SRI International's Cyber Threat Analytics effort that includes the Malware Threat Center and BotHunter. An SRI report on the Conficker C Peer-to-Peer Protocol is online.

    Inside Risks 218, CACM 52, 6, June 2009

    Reducing Risks of Implantable Medical Devices: A Prescription to Improve Security and Privacy of Implantable Medical Devices

    Kevin Fu

    Millions of patients benefit from programmable, implantable medical devices (IMDs) that treat chronic ailments such as cardiac arrhythmia [6], diabetes, and Parkinson's disease with various combinations of electrical therapy and drug infusion. Modern IMDs rely on radio communication for diagnostic and therapeutic functions -- allowing healthcare providers to remotely monitor patients' vital signs via the Web and to give continuous rather than periodic care. However, the convergence of medicine with radio communication and Internet connectivity exposes these devices not only to safety and effectiveness risks, but also to security and privacy risks. IMD security risks have greater direct consequences than security risks of desktop computing. Moreover, IMDs contain sensitive information with privacy risks more difficult to mitigate than those of electronic health records or pharmacy databases. This article explains the impact of these risks on patient care, and makes recommendations for legislation, regulation, and technology to improve security and privacy of IMDs.

    Consequences and Causes: Security Risks

    The consequences of an insecure IMD can be fatal. However, it is fair to ask whether intentional IMD malfunctions represent a genuine threat. Unfortunately, there are people who cause patients harm. In 1982, someone deliberately laced Tylenol capsules with cyanide and placed the contaminated products on store shelves in the Chicago area. This unsolved crime led to seven confirmed deaths, a recall of an estimated 31 million bottles of Tylenol, and a rethinking of security for packaging medicine in a tamper-evident manner. Today, IMDs appear to offer a similar opportunity to other depraved people. While there are no reported incidents of deliberate interference, this can change at any time. The global reach of the Internet and the promiscuity of radio communication expose IMDs to historically open environments with difficult-to-control perimeters [3,4]. For instance, vandals caused seizures in photo-sensitive patients by posting flashing animations on a Web-based epilepsy support group [1].

    Knowing that there will always exist such vandals, the next question is whether genuine security risks exist. What could possibly go wrong by allowing an IMD to communicate over great distances with radio and then mixing in Internet-based services? It does not require much sophistication to think of numerous ways to cause intentional malfunctions in an IMD. Few desktop computers have failures as consequential as that of an IMD. Intentional malfunctions can actually kill people, and are harder to prevent than accidental malfunctions. For instance, lifesaving therapies were silently modified and disabled via radio communication on an implantable defibrillator that had passed pre-market approval by regulators [3]. In my research lab, the same device was reprogrammed with an unauthenticated radio-based command to induce a shock that causes ventricular fibrillation (a fatal heart rhythm).

    Manufacturers point out that IMDs have used radio communication for decades, and that they are not aware of any unreported security problems. Spam and viruses were also not prevalent on the Internet during its many-decade childhood. Firewalls, encryption, and proprietary techniques did not stop the eventual onslaught. It would be foolish to assume that IMDs are any more immune to malware. For instance, if malware were to cause an IMD to continuously wake from power saving mode, the battery would wear out quickly. The malware creator need not be physically present, but could expose a patient to risks of unnecessary surgery that could lead to infection or death. Much as Mac users can take comfort in the fact that most malware today takes aim at Windows applications, patients can take comfort in the fact that IMDs seldom rely on such widely targeted software, for now.
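    The battery-drain concern lends itself to a quick back-of-the-envelope calculation. The sketch below uses invented, illustrative numbers (they describe no real device) to show how a sleep-deprivation attack that keeps the radio awake can collapse a multi-year battery lifetime into a matter of days.

    ```python
    # All constants are illustrative assumptions, not specifications of any device.
    BATTERY_MAH = 1000.0  # assumed battery capacity, milliamp-hours
    SLEEP_MA = 0.01       # assumed current draw while sleeping, milliamps
    AWAKE_MA = 5.0        # assumed current draw with the radio active, milliamps

    def battery_life_years(awake_fraction: float) -> float:
        """Estimate battery life given the fraction of time the radio is awake."""
        avg_ma = awake_fraction * AWAKE_MA + (1 - awake_fraction) * SLEEP_MA
        hours = BATTERY_MAH / avg_ma
        return hours / (24 * 365)

    normal = battery_life_years(0.001)   # device wakes ~0.1% of the time
    attacked = battery_life_years(1.0)   # malware keeps the radio awake
    print(f"normal: {normal:.1f} years; under attack: {attacked:.3f} years")
    ```

    With these assumed numbers, a lifetime measured in years under normal duty cycling drops to well under a month when the radio never sleeps, turning a software compromise into a surgical battery replacement.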

    Consequences & Causes: Privacy

    A second risk is violation of patient privacy. Today's IMDs contain detailed medical information and sensory data (vital signs, patient name, date of birth, therapies, medical diagnosis, etc.). Data can be read from an IMD by passively listening to radio communication. With newer IMDs providing nominal read ranges of several meters, eavesdropping will become easier. The privacy risks are similar to that of online medical records.


    Improving IMD security and privacy needs a proper mix of technology and regulation.

    Remedy: Technology

    Technological approaches to improving IMD security and privacy include judicious use of cryptography and limiting unnecessary exposure to would-be hackers.

    IMDs that rely on radio communication or have pathways to the Internet must resist a determined adversary [5]. IMDs can last upwards of 20 years, and doctors are unlikely to surgically replace an IMD just because a less-vulnerable one becomes available. Thus, technologists need to think 20 to 25 years out. Cryptographic systems available today may not last 25 years.

    It is tempting to consider software updates as a remedy for maintaining the security of IMDs. Because software updates can lead to unexpected malfunctions with serious consequences, pacemaker and defibrillator patients make an appointment with a healthcare provider to receive firmware updates in a clinic. Thus, it could take too long to patch a security hole.

    Beyond cryptography, several steps could reduce exposure to potential misuse. When and where should an IMD permit radio-based, remote reprogramming of therapies (e.g., changing the magnitude of defibrillation shocks)? When and where should an IMD permit radio-based, remote collection of telemetry (i.e., vital signs)? Well-designed cryptographic authentication and authorization make these two questions solvable. Does a pacemaker really need to accept requests for reprogramming and telemetry in all locations from street corners to subway stations? The answer is no. Limit unnecessary exposure.
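    One plausible shape for the cryptographic authentication and authorization mentioned above is a nonce-based challenge-response over a pre-shared key. The sketch below is purely illustrative -- the function names and the command format are invented, and it ignores the key-distribution, latency, and power constraints that make the real problem hard -- but it shows how an IMD could refuse reprogramming commands that do not come from a keyed programmer.

    ```python
    import hashlib
    import hmac
    import secrets

    # Hypothetical pre-shared key held by the IMD and the authorized programmer.
    KEY = secrets.token_bytes(32)

    def imd_issue_challenge() -> bytes:
        """IMD generates a fresh nonce so old responses cannot be replayed."""
        return secrets.token_bytes(16)

    def programmer_respond(key: bytes, challenge: bytes, command: bytes) -> bytes:
        """Programmer binds its command to the nonce with an HMAC tag."""
        return hmac.new(key, challenge + command, hashlib.sha256).digest()

    def imd_verify(key: bytes, challenge: bytes, command: bytes, tag: bytes) -> bool:
        """IMD accepts the command only if the tag checks out (constant-time)."""
        expected = hmac.new(key, challenge + command, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    challenge = imd_issue_challenge()
    cmd = b"SET_SHOCK_MAGNITUDE=5J"  # invented command format
    tag = programmer_respond(KEY, challenge, cmd)
    assert imd_verify(KEY, challenge, cmd, tag)                      # keyed programmer
    assert not imd_verify(KEY, challenge, b"SET_SHOCK_MAGNITUDE=40J", tag)  # tampered command
    ```

    The fresh nonce per session is what makes an overheard response useless on a street corner the next day; the unanswered question in the text -- when and where the IMD should even listen for such exchanges -- remains a policy decision layered on top.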

    Remedy: Regulation

    Pre-market approval for life-sustaining IMDs should explicitly evaluate security and privacy -- leveraging the body of knowledge from the secure systems and security metrics communities. Manufacturers have already deployed hundreds of thousands of IMDs without voluntarily including reasonable technology to prevent the unauthorized induction of a fatal heart rhythm. Thus, future regulation should provide incentives for improved security and privacy in IMDs.

    Regulatory aspects of protecting privacy are more complicated, especially in the United States. Although the U.S. Food and Drug Administration has acknowledged deleterious effects of privacy violations on patient health [2], there is no on-going process or explicit requirement that a manufacturer demonstrate adequate privacy protection. FDA itself has no legal remit from Congress to directly regulate privacy. (FDA does not administer HIPAA privacy regulations.)

    Call to Action

    My call to action consists of two parts legislation, one part regulation, and one part technology.

    First, legislators should mandate stronger security during pre-market approval of life-sustaining IMDs that rely on either radio communication or computer networking. Action at pre-market approval is crucial because unnecessary surgical replacement directly exposes patients to risk of infection and death. Moreover, the threat models and risk retention chosen by the manufacturer should be made public so that healthcare providers and patients can make informed decisions when selecting an IMD. Legislation should avoid mandating specific technical approaches, but instead should provide incentives and penalties for manufacturers to improve IMD security.

    Second, legislators should give regulators the authority to require adequate privacy controls before allowing an IMD to reach the market. FDA writes that privacy violations can affect patient health [2], and yet FDA has no direct authority to regulate privacy of medical devices. IMDs increasingly store large amounts of sensitive medical information and fixing a privacy flaw after deployment is especially difficult on an IMD. Moreover, security and privacy are often intertwined. Inadequate security can lead to inadequate privacy, and inadequate privacy can lead to inadequate security. Thus, device regulators have the unique vantage point for not only determining safety and effectiveness, but also determining security and privacy.

    Third, regulators such as FDA should draw upon industry, the healthcare community, and academics to conduct a thorough and open review of security and privacy metrics for IMDs. Today's guidelines leave so much wiggle room that an implantable cardioverter defibrillator with no apparent authentication whatsoever has been implanted in hundreds of thousands of patients [3].

    Fourth, technologists should ensure that IMDs do not continue to repeat the mistakes of history by underestimating the adversary, using outdated threat models, and neglecting to use cryptographic controls [5]. In addition, technologists should not dismiss the importance of usable security and human factors.


    There is no doubt that IMDs save lives. Patients prescribed such devices are much safer with the device than without, but IMDs are no more immune to security and privacy risks than any other computing device. Yet the consequences for IMD patients can be fatal. Tragically, it took seven cyanide poisonings for the pharmaceutical industry to redesign the physical security of its product distribution to resist tampering by a determined adversary. The security and privacy problems of IMDs are obvious, and the consequences just as deadly. We'd better get it right today, because surgically replacing an insecure IMD is much harder than an automated Windows update.

    [1] Epilepsy Foundation. "Epilepsy Foundation Takes Action Against Hackers." March 31, 2008.

    [2] FDA Evaluation of Automatic Class III Designation VeriChip(TM) Health Information Microtransponder System, October 2004.

    [3] D. Halperin et al. "Pacemakers and implantable cardiac defibrillators: Software radio attacks and zero-power defenses." In Proceedings of the 29th Annual IEEE Symposium on Security and Privacy, May 2008.

    [4] D. Halperin et al. "Security and privacy for implantable medical devices." In IEEE Pervasive Computing, Special Issue on Implantable Electronics, January 2008.

    [5] B. Schneier, "Security in the Real World: How to Evaluate Security Technology," Computer Security Journal, 15 (4), 1999.

    [6] John G. Webster, ed. "Design of Cardiac Pacemakers." IEEE Press, 1995.

    Kevin Fu is an assistant professor of Computer Science at the University of Massachusetts Amherst.


    Inside Risks 217, CACM 52, 2, February 2009

    U.S. Election After-Math

    Peter G. Neumann

    Recounting problems still associated with election integrity and accountability

    From the perspective of computer-based election integrity, it was fortunate that the U.S. presidential electoral vote plurality was definitive. However, numerous problems remain to be rectified, including some experienced in close state and local races.

    Elections represent a complicated system with end-to-end requirements for security, integrity, and privacy, but with many weak links throughout. For example, a nationwide CNN survey for this election tracked problems in registration (26%), election systems (15%), and polling-place accessibility and delays (14%). Several specific problems are illustrative.


    * In Cleveland (Cuyahoga County) and Columbus (Franklin County), Ohio, many voters who had previously voted at their current addresses were missing from the polling lists; some were on the Ohio statewide database, but not on the local poll registry, and some of those had even received postcard notification of their registration and voting precinct. At least 35,000 voters had to submit provisional ballots. (Secretary of State Jennifer Brunner had rejected the use of a list of some 200,000 voters whose names did not identically match federal records.) Several other states attempted to disenfranchise voters based on nonmatches with databases whose accuracy is known to be seriously flawed.


    * In Kenton County, Kentucky, a judge ordered 108 Hart voting machines shut down because of persistent problems with straight-party votes not being properly recorded for the national races.

    * As in 2002 and 2004, voters (including Oprah Winfrey) reported touch-screen votes flipped from the touched candidate's name to another. Calibration and touching are apparently quite tricky.

    * In Maryland, Andrew Harris, a long-time advocate in the state senate for the avoidance of paperless touch-screen and other direct-recording voting machines (DREs) ran for the Representative position in Congressional District 1. He trailed by a few votes more than the 0.5% margin that would have necessitated a mandatory recount. Of course, recounts in paperless DREs are relatively meaningless if the results are already incorrect.

    * In California, each precinct typically had only one DRE, provided for disadvantaged voters who preferred not to vote on paper. In Santa Clara County, 57 of those DREs were reported to be inoperable! (Secretary of State Debra Bowen was a pioneer in commissioning a summer 2007 study of the risks inherent in existing computer systems; see Matt Bishop and David Wagner, Risks of E-Voting, Inside Risks, Comm. ACM 50, 11, November 2007, and the detailed reports.) There were also reports in other states of paperless DREs that were inoperable, including some precincts in which more than half of the machines could not be initialized. In Maryland and Virginia, there were reports of voters having to wait up to five hours.

    Every Vote Should Count

    * Close Senate races in Minnesota and Alaska required ongoing auditing and recounting, particularly as more uncounted votes were discovered. Numerous potential undervotes also required manual reconsideration for the intent of the voter. Anomalies are evidently commonplace, but must be resolvable -- and, in close elections, resolved.

    Deceptive Practices

    * The George Mason University email system was hacked, resulting in the sending of misleading messages regarding student voting. (See two reports dated October 20, 2008: E-Deceptive Campaign Practices, from the Electronic Privacy Information Center and the Century Foundation; and Deceptive Practices 2.0: Legal and Policy Responses, from Common Cause, The Lawyers Committee for Civil Rights under Law, and the Century Foundation.) Numerous misleading phone calls, websites, and email messages have been reported, including some suggesting that Democrats had been instructed to vote on Wednesday instead of Tuesday to minimize poll congestion.


    * The need for transparency, oversight, and meaningful audit trails in the voting process is still paramount. The problems are very diverse.

    * Despite efforts to add voter-verified paper trails to paperless direct-recording voting machines, some states still used unauditable systems for which meaningful recounts are essentially impossible. The electronic systems are evidently also difficult to manage and operate.

    * Systematic disenfranchisement continues. Although there seems to have been very little voter fraud, achieving accuracy and fairness in the registration process is essential. To vary an old adage, it's not just the votes that count, it's the votes that don't count.

    Overall, much work remains to be done to provide greater integrity throughout the entire election process.



    Inside Risks 216, CACM 51, 6, June 2008

    Risks of Neglecting Infrastructure

    Jim Horning and PGN

    "The Foresight Saga" (Inside Risks, Comm. ACM 49, 9, September 2006) discussed failures in critical infrastructures due to lack of foresight in backup and recovery facilities. This column considers some of the causes and effects of another common kind of missing foresight: inadequate infrastructure maintenance.

    Civilization and infrastructure are intimately intertwined. Rising civilizations build and benefit from their infrastructures in a ``virtuous cycle.'' As civilizations decline, their infrastructures decay -- although unmaintained vestiges, such as roads and aqueducts, may outlive them.

    Dependence on critical infrastructures is increasing world-wide. This is true not only of information systems and network services, but also of energy, water, sanitation, transportation, and others that we rely on for our livelihoods and well-being. These critical infrastructures are becoming more interrelated, and most of them are becoming heavily dependent on information infrastructures. People demand ever more and better services, but understand ever less about what it takes to provide those services. Higher expectations for services are often not reflected in higher standards for infrastructure elements.

    Engineers know that physical infrastructures decay without regular maintenance, and prepare for aging (e.g., corrosion and erosion) that requires inspections and repairs. Proper maintenance is generally the cheapest form of insurance against failures. However, it has a definite present cost that must be balanced against the unknown future cost of possible failures. Many costly infrastructure failures could have been prevented by timely maintenance. American engineers have been warning about under-investment in infrastructure maintenance for at least a quarter-century (e.g., America in Ruins: The Decaying Infrastructure, Pat Choate and Susan Walter, 1983), but the problem is not limited to the United States.

    Neglect is the inertially easy path; proactive planning requires more immediate effort, resources, and funding. Creating maintainable systems is difficult and requires significant foresight, appropriate budgets, and skilled individuals. But investments in maintainability can reap enormous long-term benefits, through robustness to attack, simplified maintenance, ease of use, and adaptability to new needs.

    Although computer software does not rust, it is subject to incompatibilities and failures caused by evolving requirements, changing environments, changes in underlying hardware and software, changing user practices, and malicious exploitation of discovered vulnerabilities. Therefore, it requires maintenance. Yet the costs of maintenance are often ignored in the planning, design, construction, and operation of critical systems. Incremental upgrades to software are error-prone. Patchwork fixes (especially repeated patches) further detract from maintainability. Software engineers receive little training in preparing for software aging, in supporting legacy software, or in knowing when and how to terminate decrepit legacy systems.

    Insecure networked computers provide vandals easy access to the Internet, where spam, denial-of-service attacks, and botnet acquisition and control constitute an increasing fraction of all traffic. They directly threaten the viability of one of our most critical modern infrastructures, and indirectly threaten all the infrastructures connected to it. ``It is clear that much greater effort is needed to improve the security and robustness of our computer systems. Although many technological advances are emerging in the research community, those that relate to critical systems seem to be of less interest to the commercial development community.'' (Risks in Retrospect, Comm. ACM 43, 7, July 2000)

    As the example of New Orleans after Hurricane Katrina shows, failure of a critical infrastructure (the levees) can cascade into others. The very synergies among infrastructures that allow progress to accelerate are a source of positive (amplifying) feedback, allowing initial failures to escalate into much larger long-term problems involving many different infrastructures. Ironically, such ``positive'' feedback often has negative consequences.

    Katrina should also remind us that remediating after a collapse often involves many secondary costs that were not foreseen. The more different infrastructures that fail concurrently, the more difficult it becomes to restore service in any of them. Restoring a lost ``ecosystem'' costs much more than the sum of the costs of restoring each element separately.

    Chronic neglect of infrastructure maintenance is not a simple problem, and does not have a simple solution. Technical, economic, social, and political factors intertwine; adequate solutions must involve both the public and private sectors. People who use these infrastructures need to appreciate the importance of maintaining them. People who understand sources of the fragilities, vulnerabilities, and decay in our critical infrastructures have a responsibility to educate decision makers and the public about these risks.

    Jim Horning is Chief Scientist of SPARTA's Information Systems Security Operation. Peter Neumann moderates the ACM Risks Forum.


    Inside Risks 215, CACM 51, 5, May 2008

    The Physical World and the Real World

    Steven M. Bellovin, May 2008

    Most of us rely on the Internet for news, entertainment, research, communication with our families, friends, and colleagues, and more. What if it went away?

    Precisely that happened to many people in early February, in the wake of the failure of several undersea cables. According to some reports, more than 80 million users were affected by the outages. Both the failure and the recovery have lessons to teach us.

    The first lesson, of course, is that failures happen. In fact, multiple failures can happen. Simply having some redundancy may not be sufficient; one needs to have enough redundancy, of the right types. In this case, geography and politics made life tougher.

    The geographical issue is clear from looking at a map: there aren't many good choices for an all-water route between Europe and the Persian Gulf or India. And despite this series of events, cables are generally thought to be safer on the seabed than on land. (A standing joke in the network operator community holds that you should bring a length of fiber optic cable with you when going hiking in the wilderness. If you get lost, throw it on the ground. A backhoe will soon show up to sever it; ask the driver how to get home.)

    The obvious answer is to run some backup cables on land, bypassing the chokepoint of the Red Sea. Again, a glance at the map shows how few choices there are. Bypassing the Red Sea on the west would require routing through very unstable countries. An eastern bypass would require cooperation from mutually hostile countries. Neither choice is attractive.

    From this perspective, it doesn't matter much just why the cables failed. Cables can be cut by ship anchors, fishing trawlers, earthquakes, hostile action, even shark bites. Regardless of the cause, when so many cables are in such a small area, the failure modes are no longer independent.

    For this problem, there are no good solutions. Anyone whose business depends on connectivity through this region must take this into account.

    The dangers aren't only physical. The last few months have also shown that a 1999 National Research Council report was quite correct when it warned of the fragility of the routing system and the domain name system.

    In one highly publicized incident, a routing mistake by a Pakistani ISP knocked YouTube off the air. There was a lot of speculation that this was deliberate -- the government of Pakistan had ordered YouTube banned within the country; might someone have tried to ``ban'' it globally? -- though later analysis strongly suggests that it was an innocent mistake. An outage affecting such a popular site is very noticeable; there was a great deal of press coverage. By contrast, when a Kenyan network was inadvertently hijacked by an American ISP, there was virtually no notice. Quieter, deliberate misrouting -- say, to eavesdrop on traffic to or from a small site -- might go completely unnoticed.
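    The mechanism behind the YouTube incident is longest-prefix-match routing: the Pakistani ISP announced a more-specific prefix (208.65.153.0/24, as reported at the time) inside YouTube's larger block (208.65.152.0/22), so routers worldwide preferred the bogus route. A minimal sketch of the selection rule:

```python
import ipaddress

# Two competing announcements for the same address space
# (prefixes as reported in press coverage of the incident):
routes = {
    ipaddress.ip_network("208.65.152.0/22"): "YouTube (legitimate)",
    ipaddress.ip_network("208.65.153.0/24"): "Pakistani ISP (mistaken)",
}

def best_route(addr: str) -> str:
    """Routers forward to the most-specific (longest) matching prefix."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(best_route("208.65.153.238"))  # the /24 wins: "Pakistani ISP (mistaken)"
print(best_route("208.65.152.10"))   # outside the /24: "YouTube (legitimate)"
```

Nothing in classic BGP checks whether the announcer is entitled to the prefix, which is why a single misconfiguration propagated globally within minutes.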

    The DNS-related incidents are scarier because they do reflect deliberate actions, with the force of the U.S. legal system behind them. In one case, the web site was briefly deleted from the DNS by court order, because a bank claimed the site contained stolen documents. (The site owners had apparently foreseen something like that, and had registered other names for the site in other countries: the .org registry is located in the U.S.) In a second incident, a U.S. government agency ordered the names of some non-U.S. sites removed from .com (again, located in the U.S.) because they violated the embargo against Cuba.

    What can we learn from these incidents? The moral is simple: the Internet is a lot more fragile than it appears. Most of the time, it works and works very well, without government interference, routing mistakes, or outages due to occasional fiber cuts. Sometimes, though, things go badly wrong. Prudence dictates that we plan for such instances.

    Steven M. Bellovin is a professor of computer science at Columbia University.


    Inside Risks 214, CACM 51, 4, April 2008

    A Current Affair

    Lauren Weinstein

    It's not a revelation that as a society we're often remiss when it comes to properly prioritizing technological issues. So it should be no surprise that one of the most significant upcoming changes in our physical infrastructure is getting little play not only in the mass media, but in technology-centric circles as well.

    There are increasing concerns that many persons in the U.S. are still unaware that virtually all over-the-air analog television signals are slated to cease in February of 2009 as part of the conversion to digital TV (although betting against a Congressionally-mandated extension at this time might be problematic). Yet it seems that almost nobody is talking about a vastly more far-reaching transition that is looming in our future just twelve years from now.

    Hopefully, you realize that I'm talking about the Congressionally ordered Development Initiative for Return to Edison Current Technology (DIRECT), and its core requirement for all public and private power grids in this country to be converted from AC to DC systems by 2020, with all new consumer and business devices using electricity to be capable of operating directly from these new DC power grids without transitional power conversion adapters by no later than 2030.

    OK, 2020 may still seem a long way off -- 2030 even more so. But for changes on such a scale, this is really very little time, and we'd better get cracking now or else we're likely to be seriously unprepared when the deadlines hit.

    It's really too late at this stage to re-argue whether or not switching from AC to DC makes sense technologically. Personally, I find the arguments for the conversion to be generally unconvincing and politically motivated.

    As you may recall from those purposely late-night hearings on C-SPAN, significant aspects of the conversion have been predicated on anti-immigrant rhetoric. Many of those emotionally loaded discussions focused on the supposed ``national shame'' of our not using the ``rock-solid stable'' direct current power system championed by American hero Thomas Edison, and instead standardizing many years ago on an ``inherently unstable'' alternating current system, developed by an eccentric Croatian immigrant who enthusiastically proposed ideas characterized as grossly un-American -- such as free broadcast power.

    Similarly, it's easy to view the associated legislative language as largely a giveaway to the cryogenics industry, which of course stands to profit handsomely from the vast numbers of superconducting systems that will be necessary to create large practical DC grids.

    Conversion proponents pointed at existing long-distance DC transmission facilities, such as the Pacific DC Intertie, and the success of the conventional telephone system largely operating on DC current. But the Intertie is a highly specialized case, and even the phone system has relied on AC current for telephone ringing purposes.

    But this is all water over the spillway. There's big bucks to be made from this power transition. Stopping it now looks impossible. And admittedly, it's difficult to argue very convincingly against the ability to do away with device power supplies that are needed now to convert wall current AC to DC, or against the simplicity of DC current when powering future generations of LED bulbs that will presumably replace both incandescents and mercury-laden fluorescents.

    It's also true that much additional employment will be created, at least in the short term. Workers will be needed to install the new DC generating plants, distribution components, and power meters. Also, the many AC transformers hanging on poles and buried in vaults all over the country will need to be bypassed.

    Still, from a public policy standpoint, I'd be lying if I didn't state outright that, in my opinion, this entire affair is a risky fiasco, from political, economic, and even safety standpoints. For example, because Congress required that the same style of wall sockets and plugs be retained for new DC devices as has long been used by existing AC products, we're sure to see RISKS horror stories galore about damaged equipment, and injured -- even killed -- consumers, when they run afoul of nasty power confusion accidents.

    Freewheeling AC/DC may be fine for a rock band, but it's no way to manage technology. While we can't unplug this coming mess, we should at least internalize it all as an object lesson in how special interests and jingoistic propaganda can distort technology in ways that are counterproductive, dangerous, and even, uh ... shocking.

    Lauren Weinstein is co-founder of People For Internet Responsibility. He moderates the Privacy Forum.


    Inside Risks 213, CACM 51, 3, March 2008

    Wireless Sensor Networks and the Risks of Vigilance

    Xiaoming Lu and George Ledin Jr

    When Wendell Phillips (an American abolitionist and reformer) told a Boston audience in 1852 that ``Eternal vigilance is the price of liberty'', he did not anticipate the advent of wireless sensor networks (WSNs).

    WSNs are a new technology that will help us be vigilant. Wireless networks and sensors are not new. However, deploying very large numbers of very small sensing devices (motes) is new.

    WSNs are distributed systems programmed for specific missions, to capture and relay specific data. For example, WSNs can check a vehicle's registered identity, location, and movements. Data recorded by sensors embedded in the vehicle can be cross-correlated with data recorded by sensors embedded in sidewalks and roads. With a vast WSN of this type available to them, authorities could monitor driving conditions and instantly recognize traffic problems. Drivers could benefit from such vigilance and the rapid response that it facilitates.

    The obvious downside in this example is a further erosion of our privacy. The cross-correlated data can be a bounty for law enforcement. If roads are seeded with sensors enforcing speed limits, we might expect to receive a ticket every time we exceed them. Authorities will benefit from such vigilance, too. There would be less need for patrolling highways or for pulling anyone over for speeding, because automatically generated fines could be issued to vehicle owners.

    Cars and roads are merely the tip of the iceberg for WSN applications. There are already commercially available sensor systems for habitat oversight, environmental surveys, building and structural integrity testing, spotting containers and other shipping cargo, border patrol support, fire and flooding alerts, and many other vigilance situations. Industry analysts predict that the market for WSNs will top $7 billion by 2010.

    Potential uses and benefits of WSNs are hard to gauge; so are the risks associated with their proliferation. Personal computers 30 years ago and cell phones 15 years ago can serve as templates for what we can reasonably expect. Today, motes are costly and big. Early PCs and mobile phones were heavy and expensive. Eventually sensors will be small enough to go unnoticed and inexpensive enough to be scattered almost anywhere.

    Power, storage, and communication range will be challenges for WSNs, just as they are for laptop computers and mobile phones. Security is also a serious concern. Power constraints in sensors have spawned many clever, cost-effective workarounds that skirt security difficulties. Synchronizing sleep and wake cycles maximizes battery life, but exposes sensors to attacks that can force sensors to sleep (stop working) or stay awake (waste energy).
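    The stakes of the sleep/wake tradeoff can be illustrated with a back-of-the-envelope battery model (all figures here are hypothetical, chosen only to show the orders of magnitude involved): a mote that wakes 1% of the time lasts months, while one forced to stay awake by a denial-of-sleep attack dies in days.

```python
def lifetime_days(capacity_mah: float = 1000.0,
                  awake_ma: float = 20.0,
                  sleep_ma: float = 0.02,
                  duty: float = 0.01) -> float:
    """Battery life in days under a sleep/wake duty cycle.

    Average current is the duty-weighted mix of awake and sleep draw;
    lifetime = capacity / average current, converted from hours to days.
    """
    avg_ma = duty * awake_ma + (1.0 - duty) * sleep_ma
    return capacity_mah / avg_ma / 24.0

normal = lifetime_days()            # 1% duty cycle: roughly half a year
attacked = lifetime_days(duty=1.0)  # denial-of-sleep: forced always-awake
print(f"normal: {normal:.0f} days, under attack: {attacked:.1f} days")
```

With these illustrative numbers the attack cuts lifetime by roughly two orders of magnitude, which is why an adversary need not break any cryptography to disable an unattended WSN.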

    Sensors are more vulnerable to attack than PCs or cell phones. A standard off-the-shelf serial board can be used to change data via sensors' debugging interface. Data diddling could render a WSN unreliable. WSNs may give governments new tools to watch us, but hackers will relish new ways to spam and phish.

    Revenue-producing WSNs such as those monitoring traffic must be maintained, periodically tested, and upgraded. Maintaining WSNs deployed in rough terrains or hazardous conditions may not be possible. Motes may have to operate unattended, and those without power may remain unreplaced. Abandoned motes will be opportunities for new forms of data theft. Recovering dead motes to prevent staggering pollution problems will require ``sensors sensing sensors'' -- with as yet unknown techniques.

    Although power and security problems are not yet solved, it is prudent to begin examining the risks that would be posed by widespread deployment of WSNs.

    As with all advanced technologies, WSNs compel us to balance what's helpful (enhanced ability to observe) with what's harmful (abuse of this ability or undue expectation of its reliability). Performance of large and complex WSNs may be affected by a few malfunctioning sensors, which might be difficult to discover.

    The risks of deployment must be compared with the risks of non-deployment. For some locations, the cost-benefit analysis may be simple and decisive. WSNs will appear wherever it makes economic sense to deploy them, or when strong political goals justify their deployment. Anti-terrorism efforts will add round-the-clock attention to our already-well-documented lives, the ultimate reality show.

    Phillips warned us that if we wanted to be free, we had to be vigilant. He could not imagine we would risk trading freedom for vigilance; with WSNs, it can happen surreptitiously.

    Xiaoming Lu is a Ph.D. candidate at the University of California, Davis. George Ledin Jr is Professor of Computer Science at Sonoma State University.


    Inside Risks 212, CACM 51, 2, February 2008

    Software Transparency and Purity

    Pascal Meunier

    Many software programs contain unadvertised functions that upset users when they discover them. These functions are not bugs, but rather operations intended by their designers to be hidden from end-users. The problem is not new -- Trojan horses and Easter Eggs were among the earliest instances -- but it is increasingly common and a source of many risks. I define software transparency as the condition that all functions of software are disclosed to users. Transparency is necessary for proper risk management. The term ``transparency'' should be used instead of ``fully disclosed'' to avoid confusion with the ``full disclosure'' of vulnerabilities.

    There is a higher standard to be named, because disclosure doesn't by itself remove objectionable functions. They pose risks while being irrelevant to the software's stated purpose and utility, and are foreign to its advertised nature. Freedom from such functions is a property that needs a name: loyalty, nonperfidiousness, fidelity, and purity come to mind, but none of them seems exactly right. For the purposes of this column, I shall call it purity. ``Pure software'' can theoretically exist without disclosure, but disclosure would be a strong incentive, as previously discussed by Garfinkel [1]. Purity does not mean free of errors or unchanged since release. It's possible for pure software to contain errors or to be corrupted. The following examples illustrate some of the risks from opaque and impure software.

    In 2004, the digital video recording (DVR) equipment maker TiVo was able to tell how many people had paused and rewound to watch Janet Jackson's wardrobe malfunction in the televised Super Bowl [2]. People could opt out of the data collection by making a phone call. The privacy policy, if it was read, did mention some data collection, but did not disclose its full extent and surprising detail. Very few would likely have opted in to allow this foreign function.

    Software purity as a desirable property is highlighted by some of the differences between the GNU Public License (GPL) v2 and v3 [3]. The changes can be viewed as intended to protect the capability to remove unwanted functionality from software, including firmware based on GPL code (e.g., TiVo).

    In 2005, the anti-cheating Warden software that was installed with the World of Warcraft online game was found to snoop inside computers [4]. Some people love knowing it is there, whereas others find it distasteful but are unable to make a convincing argument that it is malicious spyware. Despite being authorized by the End-User License Agreement (EULA), it poses risks that were not made clear, through undisclosed, objectionable behaviors.

    Also in 2005, copy prevention software unexpectedly present on Sony BMG CDs was installed surreptitiously when users attempted to play a CD on their computer. It was later recognized as a rootkit [5]. Ironically, it was reused to attack the Warden [6].

    In 2007, people who had paid for Major League Baseball videos in previous years found that they could no longer watch them: the Digital Rights Management (DRM) server providing authorization had been decommissioned without warning [7]. Fragile DRM systems, such as those requiring an available server, are undesirable because of the risks they present while being foreign to the advertised features or content.

    Also in 2007, Microsoft Live OneCare surreptitiously changed user settings when installed, enabling automatic updates and re-enabling Windows services that had been disabled on purpose; this is documented only obscurely [8]. Although it was not malicious, it caused many problems for users and system administrators and was vehemently protested. Surreptitious functions pose risks, even if well-intentioned.

    In conclusion, software transparency and purity are often valued but rarely explicitly identified. Beyond the obvious information security risks to users, opaque or impure software also poses business risks in the form of loss of reputation, trust, goodwill, sales, and contracts. Transparency alone may be enough for some purposes; others may also require software purity. An explicit requirement of whichever is appropriate would decrease risks.

    Pascal Meunier is a research scientist at Purdue University CERIAS. His teaching and research include computer security and information assurance.

    1. Simson Garfinkel, The Pure Software Act of 2006, April 2004.

    2. Ben Charny, TiVo watchers uneasy after post-Super Bowl reports, February 2004.

    3. Pamela Jones, The GPLv2-GPL3 Chart, January 2006.

    4. Greg Hoglund, 4.5 million copies of EULA-compliant spyware, October 2005.

    5. Jason Schultz and Corynne McSherry, Are You Infected with Sony-BMG's Rootkit? EFF Confirms Secret Software on 19 CDs, November 2005.

    6. Robert Lemos, World of Warcraft hackers using Sony BMG rootkit, November 2005.

    7. Allan Wood, If You Purchased MLB Game Downloads Before 2006, Your Discs/Files Are Now Useless; MLB Has Stolen Your $$$ And Claims "No Refunds", November 2007.

    8. Scott Dunn, PC rebooting? The cause may be MS OneCare, October 2007.


    Inside Risks 211, CACM 51, 1, January 2008

    The Psychology of Risks

    Dr. Leonard S. Zegans

    Personal risk taking is a major public-health problem in our society. It includes criminal behavior, drug addiction, compulsive gambling, accident-prone behavior, suicide attempts, and disease-promoting activities. The costs in human life, suffering, financial burden, and lost work are enormous.

    Some of the insights from the psychology of personal risks seem applicable to computer-related risks, and are considered here. This column is thus an orthogonal view of the past columns in this space -- which have focused largely on technological problems.

    The Greeks had a word for self-defeating risk taking -- Akrasia, which referred to incontinent behaviors that an individual performs against his or her own best interests. Clearly, some risks are well considered and consistent with personal and social values. The question philosophers and psychologists have puzzled over is why a person would persist in taking harmful, often impulsive risks. This question is seriously compounded when generalized to include people who are using computer systems.

    Personal risk-taking behavior can arise from biological, psychological, and social causes. Computer-related risks also involve psychological and social causes -- but also economic, political, institutional, and educational causes. To understand such behavior, it must be analyzed in terms of how individuals, institutions, and the social environment perceive it, and what other, less maladaptive options are available. What seems critical in assessing any such behavior is whether any control can be exerted over it, and which people and institutions might be aware of its consequences and able to act appropriately. Here are just a few manifestations that result from increased dependence on information technology.

    Loss of a sense of community. The easy online availability of excerpts from music, books, news, and other media may lead to fewer incentives for in-person gatherings, an impersonal lack of face-to-face contact, a lessening of thoughtful feedback, and a loss of the joy of browsing among tangible entities -- with many social consequences. It may also tend to dumb down our intellects.

    Acceleration. Instantaneous access and short-latency turn-around times as in e-mail and instant messaging might seem to allow more time for rumination. However, the expectation of equally instantaneous responses seems to diminish the creative process and escalate the perceived needs for responses. It also seems to lead to less interest in clarity, grammar, and spelling.

    Temptation. Believing that one is unobserved, anonymous, or not accountable may lead to all sorts of risks -- such as clicking on untrustworthy URLs, opening riskful attachments, and being susceptible to phishing attacks, scams, malware, and blackmail -- especially when communicating with unknown people or systems. This can lead to maladaptive outcomes through bad judgment and an inability to recognize consequences.

    Dissociation. Irrational risk behavior may arise due to problems of a modular-cognitive separation. Such behaviors are not unconsciously motivated, yet individuals and institutions are unable to connect the expression of a particular behavioral pattern with its detrimental effects. The extent to which foreseeable computer-related risks are ignored by system developers, operators, and users is quite remarkable from a psychological point of view.

    Our society often mythologizes artists, explorers, and scientists who take self-destructive risks as heroes who have enriched society. Often people (particularly the young) get a complicated and mixed message concerning the social value of personal risk taking. With respect to computer-related risks, our society tends to mythologize the infallibility of computer technology and the people who develop it, or alternatively, to shoot the messenger when things go wrong rather than remediating the underlying problems.

    The big difference seems to be this: In their personal lives, people tend to consciously and deliberately take risks -- though often unaware of possibly serious consequences. When dealing with computer technology, they tend to take risks unconsciously and in many cases unwillingly. (On the other hand, readers of this column space are likely to be much more wary.)

    In dealing with personal and computer-related risks, vigorous, compelling, and cognitively clear educational programs are vital in modulating unhealthy behavior and endorsing new attempts to deal with changing environments.

    Dr. Leonard Zegans is a psychiatrist and professor in the University of California at San Francisco Medical School. See his book chapter, Risk-Taking and Decision Making, in Self-Regulatory Behavior and Risk Taking Behavior, L. Lipsett and L. Mitnick, eds., Ablex, Norwood, N.J., 257-272, 1991.