NOTE: Reuse for commercial purposes is subject to CACM and author copyright policy.

Earlier Inside Risks columns can be found at http://www.csl.sri.com/neumann/insiderisks.html. 2006 columns are directly accessible at http://www.csl.sri.com/neumann/insiderisks06.html.

Inside Risks Columns, 2005

  • Wikipedia Risks, Peter Denning, Jim Horning, David Parnas, and Lauren Weinstein, December 2005
  • The Real National-Security Needs for VoIP, Steven Bellovin, Matt Blaze, and Susan Landau, November 2005
  • The Best-Laid Plans: A Cautionary Tale for Developers, Lauren Weinstein, October 2005
  • Risks of Technology-Oblivious Policy, Barbara Simons and Jim Horning, September 2005
  • Disability-Related Risks, PGN and Michael D. Byrne, August 2005
  • DRM and Public Policy, Ed Felten, July 2005
  • What Lessons Are We Teaching? Susan Landau, June 2005
  • Risks of Third-Party Data, Bruce Schneier, May 2005
  • Two-Factor Authentication: Too Little, Too Late, Bruce Schneier, April 2005
  • Anticipating Disasters, Peter G. Neumann, March 2005
  • Responsibilities of Technologists, Peter G. Neumann, February 2005
  • Not Teaching Viruses and Worms Is Harmful, George Ledin Jr, January 2005
  • If you wish to see earlier Inside Risks columns, those through December 2003 are at http://www.csl.sri.com/neumann/insiderisks.html. Columns for 2004 are at http://www.csl.sri.com/neumann/insiderisks04.html.

    ========================================================

    Inside Risks 186, CACM 48, 12, December 2005

    Wikipedia Risks

    Peter Denning, Jim Horning, David Parnas, and Lauren Weinstein

    The Wikipedia (WP, http://en.wikipedia.org/wiki/Wikipedia) applies wiki technology (from a Hawaiian word for ``quick'') to the encyclopedia, a venerable form of knowledge organization and dissemination. Wikipedia provides a fast and flexible way for anyone to create and edit encyclopedia articles without the delay and intervention of a formal editor or review process.

    WP's more than 750,000 articles are written and edited by volunteers. WP founder Jimmy Wales believes WP's free, open, and largely unregulated process will evolve toward Encyclopedia Britannica quality or better. But will this process actually yield a reliable, authoritative reference encompassing the entire range of human knowledge?

    Opinions are mixed. WP claims to be the most popular reference site on the Internet. It has been hailed as the quintessence of the ``wisdom of crowds'' (an allusion to James Surowiecki's 2004 book of the same title), as a model of democratized information, and as a nail in the coffin of the ``stodgy old commercial encyclopedia''.

    Others are concerned about the reliability of an uncontrolled reference work that may include any number of purposeful or accidental inaccuracies. Some observers wonder why anyone would accept information from anonymous strangers of unknown qualifications. WP's first editor in chief, Larry Sanger, believes that an anti-expertise bias among ``Wikipedians'' foreshadows the death of accuracy in scholarship (``Why Wikipedia Must Jettison Its Anti-Elitism,'' Dec. 31, 2004, http://www.kuro5hin.org/story/2004/12/30/142458/25). Robert McHenry, former editor of Encyclopedia Britannica, is even more blunt in asserting that the community-accretion process of Wikipedia is fundamentally incapable of rising to a high standard of excellence (``The Faith-Based Encyclopedia,'' http://www.techcentralstation.com/111504A.html).

    Regardless of which side you're on, relying on Wikipedia presents numerous risks:

    * Accuracy: You cannot be sure which information is accurate and which is not. Misinformation has a negative value; even if you get it for free, you've paid too much.

    * Motives: You cannot know the motives of the contributors to an article. They may be altruists, political or commercial opportunists, practical jokers, or even vandals (WP: ``Wikipedia:Most_vandalized_pages'').

    * Uncertain Expertise: Some contributors exceed their expertise and supply speculations, rumors, hearsay, or incorrect information. It is difficult to determine how qualified an article's contributors are; the revision histories often identify them by pseudonyms, making it hard to check credentials and sources.

    * Volatility: Contributions and corrections may be negated by future contributors. One of the co-authors of this column found it disconcerting that he had the power to independently alter the Wikipedia article about himself and negate the others' opinions. Volatility creates a conundrum for citations: Should you cite the version of the article that you read (meaning that those who follow your link may miss corrections and other improvements), or the latest version (which may differ significantly from the article you saw)?

    * Coverage: Voluntary contributions largely represent the interests and knowledge of a self-selected set of contributors. They are not part of a careful plan to organize human knowledge. Topics that interest the young and Internet-savvy are well-covered, while events that happened ``before the Web'' may be covered inadequately or inaccurately, if at all. More is written about current news than about historical knowledge.

    * Sources: Many articles do not cite independent sources. Few articles contain citations to works not digitized and stored in the open Internet.

    The foregoing effects can pollute enough information to undermine trust in the work as a whole. The WP organizers are aware of some of these risks, acknowledging that ``Wikipedia contains no formal peer review process for fact-checking, and the editors themselves may not be well-versed in the topics they write about.'' The organizers have established a background editorial process to mitigate some of the risks. Still, no one stands formally behind the authenticity and accuracy of any information in WP. There is no mechanism for subject-matter authorities to review and vouch for articles. There are no processes to ferret out little-known facts and include them, or to ensure that the full range of human knowledge, past and present, is represented.

    The Wikipedia is an interesting social experiment in knowledge compilation and codification. However, it cannot attain the status of a true encyclopedia without more formal content inclusion and expert review procedures.

    The authors are members of the ACM Committee on Computers and Public Policy.

    ========================================================

    Inside Risks 185, CACM 48, 11, November 2005

    The Real National-Security Needs for VoIP

    Steven Bellovin, Matt Blaze, and Susan Landau

    In August 2005 the Federal Communications Commission announced that the Communications Assistance for Law Enforcement Act (CALEA) applies to broadband Internet access and ``interconnected Voice over IP'' (VoIP). VoIP providers already had to comply with legally-authorized wiretap orders; the FCC ruling means that all VoIP implementations would now have to pass federal wiretapping standards before they could be deployed. This is not merely a hair-splitting distinction of concern only to telephone companies; in essence, this new ruling places the FBI in the middle of the design process for VoIP protocols and products.

    Anyone who thinks that the new FCC ruling will affect only the U.S. is quite mistaken. After CALEA (which requires that digitally-switched telephone networks be built wiretap-enabled) became law in 1994, the FBI pressed other nations to adopt similar legislation. In any case, digital-switching technology sold in the U.S. must comply with CALEA, thus inevitably forcing the rest of the world to also adopt CALEA-compliant switching technology.

    There were objections to the ruling from many quarters: civil-liberties organizations, Internet providers, and the computer industry. Although CALEA applies to services that provide a ``replacement for a substantial portion of the local telephone exchange service,'' there is currently a clear exemption for the Internet. It is likely that the FCC ruling will be challenged in court. If, as some expect, the FCC ruling is overturned, the FBI is likely to seek Congress's help in expanding CALEA to include VoIP.

    CALEA applied to VoIP might simplify the FBI's efforts to conduct legally-authorized wiretaps (although the FBI has not disclosed any instances in which it has had difficulty conducting VoIP wiretaps). However, applying CALEA to VoIP would necessitate introducing surveillance capabilities deep into the network protocol stack. The IETF considered such a surveillance protocol five years ago in RFC 2804, and concluded that it simply could not be done securely. Networks have become even more fragile since then.

    Over the last decade, the Internet has proven irresistible to business; it and private networks using Internet protocols are now used to control much of the world's critical infrastructures: oil pipelines, electric-power grids, etc. The vulnerabilities inherent in the Internet put vital assets at risk. In the wake of September 11th and the Madrid and London bombings, protection of such infrastructure has taken on a new urgency. Introducing surveillance capabilities into Internet protocols is simply dangerous, the fundamental problem being that designing and building secure surveillance systems is too difficult.

    It might be argued that the surveillance technology can be built securely and without risk of penetration by hostile forces. The track record is not encouraging. Even those companies that might be expected to be in an excellent position to prevent penetration have found themselves vulnerable. A number of U.S. Government agencies, including the Defense Department and the Department of Justice, have been the victims of successful attacks.

    It is possible to write better software, even with the limited state of the current art, but the processes still aren't fool-proof. For example, avionics software (which is held to a very high standard and is not expected to deal with Internet attacks) is not immune from critical flaws.

    With CALEA, incentives work against security. VoIP companies are unlikely to pay for high-assurance development; they don't rely on the proper function of wiretapping software in their normal operations. The software won't be available to many friendly eyes that might report bugs and holes. Instead, the likely targets of wiretaps --- organized crime and foreign and industrial spies who would want to subvert the monitoring capabilities for their own ends --- most certainly would not disclose any holes that they find.

    Given this, how likely is it that ISPs will be able to secure their surveillance and remote monitoring capabilities from attack and takeover by hostile agents? Not imposing CALEA on VoIP does not mean that law enforcement will be helpless to wiretap VoIP. Instead it means that wiretapping will be accomplished at either the application layer (by the VoIP provider) or the link layer (by monitoring the target's network connection), rather than from functions embedded more pervasively across the network stack. In the debate over cryptography policy, several nations (including the U.S. and France) wisely concluded a decade ago that weakening Internet security in the hope of occasionally helping law enforcement was a bad tradeoff. Extending CALEA to VoIP would be a dangerous step backward.

    Steven M. Bellovin is a professor of computer science at Columbia University. Matt Blaze is an associate professor of computer and information science at the University of Pennsylvania. Susan Landau is a Distinguished Engineer at Sun Microsystems.

    [EDITOR's NOTE: The initial "Not" in "Not imposing CALEA on VoIP does not mean that law enforcement will be helpless to wiretap VoIP." was missing in the CACM published version, an authors' oversight that was not caught in galleys. We regret any confusion this may have caused.

    While I am at it, I want to thank Tom Lambert at ACM who for many years has done a superb job with all of the contributions to the Inside Risks space. Occasionally he has to make a change in the galleys to make the column fit into the limited space on the printed page. Although I try to track such changes, this may sometimes result in my online version not being identical to the CACM printed version. If there is any question, consult the ACM digital library. PGN]

    ========================================================

    Inside Risks 184, CACM 48, 10, October 2005

    The Best-Laid Plans: A Cautionary Tale for Developers

    Lauren Weinstein

    Once upon a time, Sony Computer Entertainment planned a new handheld device for gaming, music, and movies. Their powerful new ``PSP'' (PlayStation Portable) is based on the MIPS R4000 CPU, with elaborate graphics capabilities, a gorgeous color LCD display, USB and WiFi interfaces, and a special ``UMD'' read-only optical disc system.

    Sony's engineers didn't cut corners on security. They employed elaborate copy protection and anti-piracy systems, including a digitally-signed software authentication environment and hardware-based AES encryption, all aimed at preventing the use of unofficial or pirated materials. In theory, only Sony could ``sign'' software so that it would run on the PSP.
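
    The general shape of such a gatekeeping check is worth sketching, because the rest of the story turns on it: a loader verifies a digital signature over each executable image against a public key baked into the firmware, and refuses to run anything that fails. The Python sketch below illustrates that idea only, under assumed parameters (RSA with PKCS#1 v1.5 padding and SHA-256, via the cryptography package); it is not Sony's actual scheme, whose internals this column does not describe.

        # Illustrative only: a generic signed-executable check, not Sony's PSP implementation.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        def may_execute(image: bytes, signature: bytes, vendor_pubkey_pem: bytes) -> bool:
            """Allow an image to run only if its signature verifies under the vendor's public key."""
            pubkey = serialization.load_pem_public_key(vendor_pubkey_pem)
            try:
                # Assumed algorithm choices for illustration: RSA, PKCS#1 v1.5, SHA-256.
                pubkey.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
                return True
            except InvalidSignature:
                return False

    Everything then rests on two conditions: the signing key must stay secret, and the verification path itself must be free of exploitable bugs.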

    Despite these efforts, cracks appeared almost immediately in the PSP's armor, spread via the Internet like wildfire, and triggered an amazing and continuing global PSP hacking effort.

    It was quickly discovered that the early PSP units, released only in Japan, contained a firmware flaw allowing the running of specially manipulated ``unsigned'' code. Immediately, ``homebrew'' PSP applications (that is, not authorized by Sony) appeared, along with work on GNU-based compiler toolchains for development.

    By the time the PSP was released in the U.S. some months later, the flawed version 1.0 PSP firmware had been replaced with version 1.5, and the execution hole appeared to have been closed.

    The next fissure arrived quickly. U.S. PSP fans discovered that a Web browser included in a popular PSP game for update purposes, designed to access only a particular update site, could be manipulated to reach arbitrary sites via Internet DNS tricks. The browser also allowed access to local system files on the PSP itself, including files on the UMD game disks. While many of these files were encrypted, enough information was found to speed development efforts aimed at cracking the 1.5 firmware for unsigned executions.

    It wasn't long before that work bore fruit. A group in Spain first presented a way to launch homebrew applications on the widely available 1.5 firmware units via the rapid swapping of memory sticks -- impractical and possibly dangerous to the hardware, but it worked. A few days later, a technique that eliminated the physical swapping was released by the same group, and of course spread nearly instantly via the Net.

    You can already recognize a familiar pattern from the early days of crypto systems. It wasn't even necessary to try cracking the actual encryption itself (in this case a formidable task, to say the least), because implementation flaws provided other backdoors into the system.

    Ironically, by trying to maintain tight control over what could run on the PSP, Sony may have damaged its own best interests. People hacking the PSP fell mostly into two categories -- those who wanted to run homebrew applications, and those who wished to run pirated PSP games (or games from other platforms, some pirated and some not, via emulators).

    Homebrew developers are a particularly creative and tenacious lot. Apart from all sorts of newly developed tools and homemade games, even Windows 95 and versions of Linux have been run, in preliminary form, on the PSP via an x86 emulator. Unfortunately for Sony, the same hacking necessary to allow homebrew opened the door for developments allowing the launching of increasing numbers of pirated official PSP games -- presumably a worst-case scenario from Sony's standpoint.

    If Sony had encouraged -- or at least officially permitted -- the running of homebrew on the PSP from the beginning, it's possible that efforts that led to piracy might have been significantly slowed.

    The PSP ``arms race'' continues. Sony has released new 2.0 series firmware (with attractive features, such as an integral Web browser) that is now standard on newer PSP shipments, and has once again closed the known execution holes. Newer official PSP games will attempt to require updating to at least this firmware version. But this creates another irony: people with 1.5 firmware units who want to keep using PSP homebrew won't be able to legitimately run those new games even if they want to, and may well be drawn to pirated versions as a result.

    Whether the newer PSP firmware releases will be cracked without resorting to hardware modifications is unclear. But with a globally dispersed cadre of PSP hackers hard at work, and the Internet providing immediate coordination and distribution of their efforts, betting against another crack may not be the best game in town.

    Lauren Weinstein (lauren@pfir.org) is co-founder of People For Internet Responsibility http://www.pfir.org. He moderates the Privacy Forum http://www.vortex.com/privacy.

    ========================================================

    Inside Risks 183, CACM 48, 9, September 2005

    Risks of Technology-Oblivious Policy

    Barbara Simons and Jim Horning

    Many readers of this column have tried to influence technology policy and had their advice ignored. Politics is frequently a factor, but another reason for our failure is that we don't do a good job of explaining the roots of computing-related security and usability issues to non-technical people.

    People who have never written code do not understand how difficult it is to avoid and/or find bugs in software. Therefore, they don't understand why software patches are so dangerous. They have a hard time believing that it's possible to conceal malicious code in large programs or insert malware via a software patch. They don't see why it is so difficult even to detect malicious code in software, let alone locate it.

    The Digital Millennium Copyright Act (DMCA), which became US law in 1998, is illustrative. The most controversial portions of the DMCA, the anti-circumvention and anti-dissemination provisions, did not come into effect until 2000. It was only by chance that we learned why the delay occurred. (Stop reading, and see if you can guess why.)

    The delay was included because lawmakers believed that aspects of the DMCA might criminalize work on securing software against Y2K problems. Y2K was not the only software security issue that would require this kind of code analysis, but Congress didn't know or didn't care. Computer security experts and cyberlaw professors had not been quiet about the risks of the DMCA. There were several letters, including one signed by a large number of experts in computer security and encryption (http://www.cerias.purdue.edu/homes/spaf/WIPO/index.html), warning that the anti-circumvention provisions could criminalize some standard computer security techniques. Our warnings were ignored.

    One consequence of the poorly-drafted DMCA is that the anti-circumvention provisions are now preventing independent experts from inspecting and testing voting machine software to check for bugs or malware. Who would have thought that a law pushed by Hollywood would be used to protect the insecure and secret software deployed in voting machines?

    When the computing community started warning about the risks of current paperless electronic voting machines, we encountered outright hostility from some election officials and policy makers. We were accused of being ``fear mongers'' and Luddites. On Election Day 2004 a lobbyist for voting machine vendors claimed that ``Electronic voting machine issues that have been cited are related to human error, process missteps or unsubstantiated reports.'' How would the lobbyist know? And why did anyone believe him, rather than the experts?

    To counter unrealistic claims about the safety or robustness of software, we need analogies that help people gain insight into the complexity of large programs. Analogy is a poor tool for reasoning, but a good analogy can be very effective in developing intuition.

    One possibly useful analogy is the US Tax Code. Americans have some sense of its complexity and the large number of people employed in its interpretation. Tax loopholes are analogous to hidden malicious code or Trojan horses in software.

    The tax code resembles software in other ways as well: It is intended to be precise and to interface with messy realities of the real world. It has been developed in multiple iterations, responding to changing circumstances and requirements. The people who wrote the original version are no longer around. No one understands it in its entirety. It can be hard to infer intent simply by reading a section. There are people who actively seek to subvert it.

    Of course, there are also major differences between the tax code and software. The tax code is relatively ``small''; although it runs to several thousand printed pages, Windows XP has 40 million lines of source code. The tax code is interpreted by people, which introduces both the possibility of common-sense intervention and the possibility of human error.

    We have failed to effectively explain the risks of inappropriate, careless, or poorly designed software to the general public, the press, and our policymakers. But good analogies can help us communicate. The issues are too critical for us to be shut out of the debate.

    Barbara Simons (simons@acm.org) is a former president of ACM. Jim Horning (horning@acm.org) is a Chief Scientist at SPARTA, Inc.

    ========================================================

    Inside Risks 182, CACM 48, 8, August 2005

    Disability-Related Risks

    PGN and Michael D. Byrne

    People with disabilities often experience difficulties that arise from their interactions with computer technology, above and beyond the usual risks. Their job performance, health, safety, financial stability, and general well-being may all be impaired --- for example, because of shortcomings in system interfaces and workplace conditions, human limitations, legal inequities, and other factors. As technologists, we need to be much more proactive in understanding the nature of the problems and potential approaches to improving the situation. Unfortunately, mainstream competitive commercial software developments tend to ignore many of the relevant problems and risks.

    Visual disabilities. Interfaces with flashy but functionally limited graphical applications and websites (e.g., exploiting the capabilities of Java, JavaScript, and Flash) create serious obstacles for people who are visually impaired. Examples include e-mail challenge-response schemes that require graphical recognition of images in order to receive the content, and websites that say ``click here if you cannot read this.'' Voice input and output, which have been improving lately, both have enormous potential --- especially if the mechanisms are compatible with conventional systems. Software that transforms displayed formats into audio can be particularly useful. Braille output is another option that can help some individuals, although this is far from a universal solution because only a small minority of people with visual impairments can read Braille. Specialized computer terminals are available that provide some help, but they are still very expensive.
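
    As a concrete (and deliberately simple) illustration of the ``displayed text to audio'' idea, the sketch below speaks a short string using the pyttsx3 Python package; the package choice is an assumption for illustration, since production screen readers use platform-native speech engines and far richer document models.

        # Minimal text-to-speech sketch (illustrative; real screen readers are far more capable).
        import pyttsx3

        engine = pyttsx3.init()                        # pick up the platform's speech engine
        engine.say("You have three unread messages.")  # text that would otherwise only be displayed
        engine.runAndWait()                            # block until speech finishes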

    Auditory disabilities. Applications and websites with extensive use of audio have little value for the hearing impaired. Telephones can present a significant challenge. Although some devices exist to transform audio into a computer-readable form, their availability is not widespread.

    Physical disabilities. Carpal tunnel syndrome and repetitive strain injuries affect many computer users; voice input is one possible alternative. Users with impaired mobility may sometimes benefit from remote access, although better security is needed to avoid integrity and privacy compromises. However, there is sometimes no substitute for personal contact.

    People with multiple disabilities. The above problems are compounded dramatically for people who have more than one disability. For example, loss of both vision and hearing presents huge obstacles, because many of the technological solutions designed to assist the visually impaired rely on auditory communication (e.g., text-to-speech). Some medical devices have even removed visual displays in favor of only auditory warnings, and some paperless electronic medical records systems are engineered with no regard for disabilities. To help address such concerns, interoperability among different media for different disabilities would be a worthy goal. However, it would require significantly greater attention to interface standards and in some cases represents a formidable technical challenge.

    On the positive side, addressing the needs of special populations can result in interfaces that are easier for everyone to use. For example, extensive research on aging has shown that older adults increasingly have difficulty with interfaces that place heavy demands on short-term memory. Careful attention to reducing the working-memory load to serve the needs of older populations can make interfaces more fluid for everyone. Also, attention to simplified interfaces in websites that have been redesigned to comply with the Americans with Disabilities Act has led to improvements for all users.

    Today's electronic voting systems provide an example of an application that must confront such problems head-on. Although there are some hardware/software attachments for the visually impaired, they all seem to have serious limitations regarding ease of use, security, voter privacy, assurance that ballot integrity is preserved, and protection against coercion and vote selling. Although some of the privacy concerns might be addressable via remote voting, other ongoing problems arise -- with both paper absentee ballots and Internet voting. These problems are widespread, but are perhaps especially complicated for people with disabilities.

    Issues surrounding the accessibility of technology raise public policy concerns as well. Many (and perhaps most) laws regarding accessibility were not written with a clear understanding of the technological issues involved; in some cases the ADA prescribes overly narrow technology solutions. Similarly, many (and perhaps most) technological systems are constructed without careful consideration of the legal implications of the level of accessibility they achieve. As citizens, we need to be cognizant of the technological implications of the accessibility standards we pursue; similarly, as technologists, we have a responsibility to honor (or at least consider) such standards when we design and implement systems with broad public deployment.

    Michael Byrne (byrne@acm.org) is a professor in the Psychology Department at Rice University. PGN moderates the ACM Risks Forum (risks.org).

    ========================================================

    Inside Risks 181, CACM 48, 7, July 2005

    DRM and Public Policy

    Edward W. Felten

    Digital rights management (DRM) systems try to erect technological barriers to unauthorized use of digital data. DRM elicits strong emotions on all sides of the digital copyright debate. Copyright owners tend to see DRM as the last defense against rampant infringement, and as an enabler of new business models. Opponents tend to see DRM as robbing users of their fair use rights, and as the first step toward a digital lockdown. Often the DRM debate creates more heat than light.

    I propose six principles that should underlie sensible public policy regarding DRM. They were influenced strongly by discussions within USACM (ACM's U.S. Public Policy Committee), which is working on a DRM policy statement. These principles may seem obvious to some readers; but current U.S. public policy is inconsistent, in various ways, with all of them.

    Competition. Public policy should enable a variety of DRM approaches and systems to emerge, should allow and facilitate competition between them, and should encourage interoperability among them.

    In a market economy, competition balances the interests of producers and consumers. The market, and not government or industry cartels, should decide whether and how DRM will be used. Government should not short-circuit competition by mandating use of DRM, or by allowing industry groups to use DRM "standardization" as a pretext for reducing competition.

    Copyright Balance. Since lawful use, including fair use, of copyrighted works is in the public interest, a user wishing to make lawful use of copyrighted material should not be prevented from doing so by any DRM system. DRM systems should be seen as a tool for reinforcing existing legal constraints on behavior (arising from copyright law or by contract), not as a tool for creating new legal constraints.

    Traditionally, copyright has reflected careful public-policy choices that balance competing rights and interests. DRM should shore up this balance, not override it. DRM can reinforce legal rights, but it must neither create nor abridge them; that is the province of policymakers.

    Consumer protection. DRM should not be used to restrict the rights of consumers. Policymakers should actively monitor actual use of DRM and amend policies as necessary to protect these rights.

    In an ideal world, the first two policy principles would make this one unnecessary: competition and copyright balance would protect users' interests. But hard experience shows that imbalances of power often require regulation to protect consumers.

    Privacy. Public policy should ensure that DRM systems collect, store, and redistribute private information about users only to the extent required for their proper operation, that they follow fair information practices, and that they are subject to informed consent by users.

    DRM must not become a platform for spying on users or for gathering records of what each user reads or watches. If a system must gather private information, for example to process payments, the information should be handled according to accepted privacy principles.

    Research and public discourse. DRM systems and policies should not interfere with legitimate research or with discourse about research results or other matters of public concern.  Laws concerning DRM should contain explicit exceptions to protect this principle.

    Laws designed to bolster DRM, such as the anti-circumvention provisions of the Digital Millennium Copyright Act (DMCA), must not hinder research or public discussion. Important public-policy debates, about topics such as e-voting, content filtering, and DRM itself, rely on accurate information about the design and efficacy of particular technologies. When the law blocks the discovery and dissemination of such information, public debate suffers. Existing law does not do enough to protect research and discussion, as my colleagues and I learned in 2001 when the recording industry tried to use DMCA threats to stop us from publishing a scholarly paper about DRM technology. The law should explicitly protect research and debate.

    Targeted policies. Policies meant to reinforce copyright should be limited to applications where copyright interests are actually at stake.

    Creative lawyers have tried to use the DMCA (a law intended to protect copyright owners) to block third-party remote controls from working with garage-door openers (Chamberlain v. Skylink) and third-party toner cartridges with printers (Lexmark v. Static Control). The courts wisely rejected these attempts, observing that there was no risk of copyright infringement. Similar limits should apply to other areas of DRM policy.

    Changing public policy can be difficult, especially on a topic as contentious as DRM. Agreeing on policy principles is a good way to start.

    Edward W. Felten is a professor of computer science and public affairs at Princeton University.

    ========================================================

    Inside Risks 180, CACM 48, 6, June 2005

    What Lessons Are We Teaching?

    Susan Landau

    Recently the New Jersey Institute of Technology's Homeland Security Technology Systems Center proposed ``smart'' cameras that would identify everyone entering school premises and send out an alert when an intruder is discovered. That follows a similar action by a middle school in Phoenix, Arizona, which in 2003 installed video cameras and face-scanning technology at its doors, linked to national databases of sex offenders and missing children. These are ambitious versions of proposals being discussed in many places. In a post-September 11th, post-Beslan world, closed-circuit television (CCTV) is the newest idea for public schools. CCTV in schools is not universally embraced, however. For example, in Israel, where public-safety issues are paramount and security guards stand in front of discos, shopping malls, and restaurants, video cameras are not in routine use in schools.

    What dangers would these cameras protect against? The model proposed by the New Jersey Institute of Technology would not have prevented Columbine; the two students had every right to be on campus. Nor would video cameras have prevented Beslan, because the weapons that enabled the takeover were hidden while the Russian school was under construction. On the other hand, such cameras probably would catch kids involved in inappropriate activities --- smoking, hanging out instead of being in class --- so why not invest?

    For one thing, with a false positive rate of 1% --- ten false alarms every morning in a school with just a thousand students, teachers, and staff --- it is doubtful that facial-recognition systems would work. How long would videotapes be stored? Who would have access to them? What risks would this introduce? Can we really expect schools to adequately secure online files of student and staff records, records that, by necessity, must be Internet accessible?
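
    The arithmetic behind that false-alarm estimate is worth making explicit; a back-of-the-envelope check (with the column's assumed numbers: roughly 1,000 people scanned each morning and a 1% per-scan false-positive rate) takes one line of Python:

        # Expected false alarms per morning under the assumptions quoted above.
        daily_entries = 1000         # students, teachers, and staff scanned each morning (assumed)
        false_positive_rate = 0.01   # 1% chance a legitimate person is wrongly flagged (assumed)
        print(daily_entries * false_positive_rate)   # -> 10.0 false alarms, on average, every day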

    Video cameras in schools introduce a different set of issues as well. Consider, for a moment, the role of public schools in society. ``[T]he individual who is to be educated is a social individual and ... society is an organic union of individuals,'' wrote John Dewey in 1897 in ``My Pedagogic Creed'' (The School Journal, Vol. LIV, No. 3, pp. 77-80). According to Dewey, whose theories of progressive education profoundly impacted public schools, ``The only true education comes through the stimulation of the child's powers by the demands of the social situations in which he finds himself. Through these demands he is stimulated to act as a member of a unity ... and to conceive of himself from the standpoint of the welfare of the group to which he belongs.'' The unspoken, but vital, role of public schools is in creating a cohesive society. In a nation as diverse as the U.S. --- and as many others, including France, the U.K., and Holland, are becoming --- such socialization is critically important.

    Seen from that perspective, the lessons from CCTV in schools are quite disturbing. Video cameras in schools teach children that in a public space, eyes you can't see may be watching you. Video cameras in schools demonstrate to students that you don't have any privacy (get over it). Video cameras in schools show disrespect for freedom of speech and freedom of association. [Congress shall make no law ... abridging the freedom of speech, or the right of the people to peacefully assemble ... First Amendment to the U.S. Constitution.] And video cameras in hallways are one small step from video cameras in classrooms; what better way to stifle teachers' creativity and experimentation?

    Benjamin Spock, Penelope Leach, T. Berry Brazelton, and other experts in child behavior tell us that children learn not from the lessons we deliberately set out to teach, but by osmosis. Children learn not from what we say, but from what we do. In the end, that makes the choice about CCTV in schools quite simple. After all, when we teach 1984, what is the lesson we are hoping to convey to the students --- the one that comes from a critical reading of the text, or the one that comes from surveillance cameras monitoring students' and teachers' every move?

    Susan Landau is a senior staff engineer at Sun Microsystems and co-author, with Whitfield Diffie, of ``Privacy on the Line: The Politics of Wiretapping and Encryption'' (MIT Press), 1998. She attended New York City public schools.

    ========================================================

    Inside Risks 179, CACM 48, 5, May 2005

    Risks of Third-Party Data

    Bruce Schneier

    Reports are coming in torrents. Criminals are known to have downloaded personal credit information of over 145,000 Americans from ChoicePoint's network. Hackers took over one of Lexis Nexis' databases, gaining access to personal files of 32,000 people. Bank of America Corp. lost computer data tapes that contained personal information on 1.2 million federal employees, including members of the U.S. Senate. A hacker downloaded the names, Social Security numbers, voicemail and SMS messages, and photos of 400 T-Mobile customers, and probably had access to all of its 16.3 million U.S. customers. In a separate incident, Paris Hilton's phone book and SMS messages were hacked and distributed on the Internet.

    The risks of third-party data -- personal data being held by others -- are twofold: the privacy risk and impersonation leading to fraud (popularly called "identity theft"). Identity theft is the fastest-growing crime in the U.S. A criminal collects enough personal data on someone to impersonate him to banks, credit card companies, and other financial institutions, then racks up debt in the person's name, collects the cash, and disappears. The victim is left holding the bag, often having to spend years clearing his name. Total losses in 2003: $53 billion.

    People have been told to be careful: not to give out personal financial information, to shred their trash, to be cautious when doing business online. But criminal tactics have evolved, and many of these precautions are useless. Why steal identities one at a time, when you can steal them by the tens of thousands?

    The problem is that security of much of our data is no longer under our control. This is new. A dozen years ago, if someone wanted to look through your mail, he had to break into your house. Now he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your house; now it's on a computer owned by a telephone company. Your financial accounts are on websites protected only by passwords; your credit history is stored -- and sold -- by companies you don't even know exist. Lists of books you buy, and the books you browse, are stored in the computers of online booksellers. Your affinity card allows your supermarket to know what foods you like. Others now control data that used to be under your direct control.

    We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, Lexis Nexis, Bank of America, nor T-Mobile bears the costs of identity theft or privacy violations. The only reason we know about most of these incidents at all is a California law mandating public disclosure when certain personal information about California residents is leaked. (In fact, ChoicePoint arrived at its 145,000 figure because they didn't look back further than the California law mandated.)

    The effectiveness of the California law is based on public shaming. If companies suffer bad press for their lousy security, they'll spend money improving it. But it'll be security designed to protect their reputations from bad PR, not security designed to protect customer privacy. Even this will work only temporarily: as these incidents become more common, the public becomes inured, and the incentive to avoid shaming goes down.

    This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. The police need a warrant to read the e-mail on your computer, but they don't need one to read it off the backup tapes at your ISP. According to the Supreme Court, that's not a search as defined by the Fourth Amendment.

    This isn't a technology problem; it's a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don't have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant -- even though it occurred at the phone company switching office -- the Supreme Court must recognize that reading e-mail at an ISP is no different.

    Bruce Schneier is the CTO of Counterpane Internet Security, Inc., and the author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World. You can read more of his security writings on http://www.schneier.com .

    ========================================================

    Inside Risks 178, CACM 48, 4, April 2005

    Two-Factor Authentication: Too Little, Too Late

    Bruce Schneier

    Two-factor authentication isn't our savior. It won't defend against phishing. It's not going to prevent identity theft. It's not going to secure online accounts from fraudulent transactions. It solves the security problems we had ten years ago, not the security problems we have today.

    The problem with passwords is that they're too easy to lose control of. People give them to other people. People write them down, and other people read them. People send them in e-mail, and that e-mail is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. They're also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can't be sure who is typing that password in.

    Two-factor authentication mitigates this problem. If your password includes a number that changes every minute, or a unique reply to a random challenge, then it's harder for someone else to intercept. You can't write down the ever-changing part. An intercepted password won't be good the next time it's needed. And a two-factor password is harder to guess. Sure, someone can always give his password and token to his secretary, but no solution is foolproof.
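
    To make the ``ever-changing part'' concrete, here is a minimal sketch of a time-based one-time code in the spirit of RFC 6238 (TOTP). It illustrates the general mechanism only, not any particular vendor's token; the shared secret, six digits, and 30-second step are assumptions.

        # Minimal TOTP-style code generator (illustrative sketch, not a vendor's product).
        import hashlib, hmac, struct, time

        def totp(secret: bytes, digits: int = 6, step: int = 30) -> str:
            counter = int(time.time() // step)                 # changes every `step` seconds
            mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F                            # dynamic truncation (RFC 4226)
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

    The server computes the same value from the same secret and clock, so an intercepted code expires within seconds -- although, as the attacks described below show, a live intermediary can still relay it to the real site in time.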

    These tokens have been around for at least two decades, but it's only recently that they have gotten mass-market attention. AOL is rolling them out. Some banks are issuing them to customers, and even more are talking about doing it. It seems that corporations are finally waking up to the fact that passwords don't provide adequate security, and are hoping that two-factor authentication will fix their problems.

    Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.

    Here are two new active attacks we're starting to see:

    Man-in-the-Middle Attack. An attacker puts up a fake bank website and entices user to that website. User types in his password, and the attacker in turn uses it to access the bank's real website. Done right, the user will never realize that he isn't at the bank's website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user's banking transactions while making his own transactions at the same time.

    Trojan attack. Attacker gets Trojan installed on user's computer. When user logs into his bank's website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.

    See how two-factor authentication doesn't solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.

    The real threat is fraud due to impersonation, and the tactics of impersonation will change in response to the defenses. Two-factor authentication will force criminals to modify their tactics, that's all.

    Recently I've seen examples of two-factor authentication using two different communications paths: call it "two-channel authentication." One bank sends a challenge to the user's cell phone via SMS and expects a reply via SMS. If you assume that all your customers have cell phones, then this results in a two-factor authentication process without extra hardware. And even better, the second authentication piece goes over a different communications channel than the first; eavesdropping is much, much harder.
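
    A bare-bones sketch of that flow follows; the send_sms() transport and the in-memory table of pending challenges are hypothetical stand-ins for illustration, not any bank's actual API.

        # Illustrative two-channel login: challenge out via SMS, reply checked via SMS.
        import secrets

        pending = {}   # username -> outstanding one-time challenge

        def start_login(user: str, send_sms) -> None:
            challenge = f"{secrets.randbelow(10**6):06d}"      # random six-digit challenge
            pending[user] = challenge
            send_sms(user, f"Reply {challenge} to approve this login")   # second channel

        def check_reply(user: str, reply: str) -> bool:
            expected = pending.pop(user, None)                 # single use
            return expected is not None and secrets.compare_digest(reply.strip(), expected)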

    But in this new world of active attacks, no one cares. An attacker using a man-in-the-middle attack is happy to have the user deal with the SMS portion of the log-in, since he can't do it himself. And a Trojan attacker doesn't care, because he's relying on the user to log in anyway.

    Two-factor authentication is not useless. It works for local log-in, and it works within some corporate networks. But it won't work for remote authentication over the Internet. I predict that banks and other financial institutions will spend millions outfitting their users with two-factor authentication tokens. Early adopters of this technology may very well experience a significant drop in fraud for a while as attackers move to easier targets, but in the end there will be a negligible drop in the amount of fraud and identity theft.

    Bruce Schneier is the CTO of Counterpane Internet Security, Inc., and the author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World. You can read more of his security writings on http://www.schneier.com .

    ========================================================

    Inside Risks 177, CACM 48, 3, March 2005

    Anticipating Disasters

    Peter G. Neumann

    As Henry Petroski noted over twenty years ago, we generally learn less from successes than from failures. The ACM Risks Forum documents failures and risks, and is a goldmine of opportunity for anyone who wants to learn from mistakes. Recent additions include more computer system development fiascos (including the FBI Virtual Case File), a failed software upgrade that crippled an entire British government department, three train derailments, massive flight cancellations and undelivered baggage, an air-traffic control outage caused by a rodent, a software problem that prevented launch of a missile interceptor test, recall of a seriously flawed defibrillator, the discovery of 14,000 radiology reports remaining undelivered in one hospital during 2004, and more security, privacy, and financial problems.

    In recent months, our planet has experienced some unusual natural disasters, such as the 9.0-magnitude Indonesian earthquake that triggered a tsunami killing over 200,000 people in 11 countries around the Indian Ocean, an exceptionally heavy hurricane season in the Caribbean area, and a major mudslide in La Conchita, California. Although failures of information technology had no role in triggering these disasters, there are significant roles that IT systems could play in anticipating, detecting, and monitoring such events, and in minimizing losses of life, injuries, and consequential damages. What can we learn from such events, especially with respect to the need for adequate contingency plans?

    For example, a tsunami detection and early-warning system that is already deployed in the Pacific Ocean would seem to have been applicable in the Indian Ocean. Such a system could have given timely warnings to millions of people, and could have saved many lives if local authorities had had citizen alerts and evacuation plans in place. Our preparedness for hurricanes and typhoons is better because computer prediction of possible storm paths is getting quite good and many authorities have disaster response plans. In the case of the mudslide in the hills above La Conchita that followed an awesome sequence of rain storms, sensors in the hills were supposed to trigger advance warnings. (A similar slide had occurred in an adjacent area nine years earlier, and insurance companies had already declined to provide future coverage -- and yet people want to live there!)

    Several problems arise in connection with developing detection and warning systems and attempting to minimize anticipated consequential effects.

    * Institutions (especially governments, corporations, and defense departments) tend to fashion response plans for past situations rather than for potentially devastating future situations. Unless a similar disaster has recently occurred in a similar venue under similar conditions, few people worry about low-probability high-impact events. A comparable tendency holds for trustworthy computing. The most recent event remotely resembling a tsunami in networking was the 1988 Internet Worm that affected about 10% of the 60,000 Internet hosts active at the time. As a result, an emergency response team (CERT) was formed to help coordinate responses and warn of vulnerabilities. The large effort to avoid a Y2K crisis was generally successful. Today there are many new threats that could easily disable the Internet and critical infrastructures, including new viruses and terror attacks. However, because the cybersecurity equivalent of a tsunami seems extremely unlikely, there is little interest in mounting serious efforts to increase system trustworthiness and other preventive measures. The consequences of an Internet meltdown could be devastating, especially if coordinated with a terrorist attack.

    * Institutions tend to optimize short-term costs and ignore long-term consequences. (See our June 2004 column.) Also, farsighted analyses of what might happen are always subject to poor assumptions, faulty reasoning, and mandates to reach self-serving conclusions.

    * People generally do not like to make unnecessary preparations, and often resent taking sensible precautions. Repeated false warnings tend to inure them, with a resulting loss of responsiveness. Even justifiable warnings that are properly heeded (such as Y2K and boarding up for a hurricane that does not hit) are often denigrated when things later appear to have gone OK.

    It is clear that much greater attention needs to be devoted to predicting, detecting, and ameliorating both natural catastrophes and unnatural computer-related disasters, outages, and serious inconveniences. We can do much better to prevent the problems that are caused by people, and to anticipate the ones that are not.

    Peter Neumann moderates the ACM Risks Forum http://www.risks.org. See Jared Diamond, Collapse: How Societies Choose to Fail or Succeed, for an analysis of similar issues from an ecological perspective.

    ========================================================

    Inside Risks 176, CACM 48, 2, February 2005

    Responsibilities of Technologists

    Peter G. Neumann

    Around the world, our lives are increasingly dependent on technology. What should be the responsibilities of technologists regarding technological and nontechnological issues?

    * Solving real-world problems often requires technological expertise as well as sufficient understanding of a range of economic, social, political, national, and international implications. Although it may be natural to want to decouple technology from the other issues, such problems typically cannot be solved fully by technology alone. They need to be considered in the broader context.

    * Although experts in one area may not be qualified to evaluate detailed would-be solutions in other areas, their own experience may be sufficient to judge the conceptual merits of such solutions. For example, demonstrable practical impossibility, fundamental limitations of the concept, serious conflicts of interest among the participants, or an obvious lack of personal and system-wide integrity might all be considered causes for concern.

    * Ideally, we need more open and interdisciplinary examinations of the underlying problems and their proposed solutions.

    The challenge of ensuring election system integrity illustrates these points. The election process is an end-to-end phenomenon whose integrity depends on the integrity of every step in the process. Unfortunately, each of those steps represents various potential weak links that can be compromised in many ways, accidentally and intentionally, technologically or otherwise; each step must be safeguarded from the outset and auditable throughout the entire process.

    Irregularities reported in the 2004 U.S. national election span the entire process, concerning voter registration, disenfranchisement and harassment of legitimate voters, absence of provisional ballots (required by the Help America Vote Act), mishandling of absentee ballots, huge delays in certain precincts, and problems in casting and counting ballots for e-voting as well as other modes of casting and counting votes. Some machines could not be booted. Some machines lost votes because of programming problems, or recorded more votes than voters. Some touch-screen machines reportedly altered the intended vote from one candidate to another. The integrity of the voting technologies themselves is limited by weak evaluation standards, secret evaluations that are paid for by the vendors, all-electronic systems that lack voter-verified audit trails and meaningful recountability, unaudited post-certification software changes, even run-time system or data alterations, and human error and misuse. (Gambling machines are held to much higher standards.) Other risks arise from partisan vendors and election officials. Furthermore, unusually wide divergences between exit polls and unaudited results raised questions in certain states. [A Diebold agent was suspected of possible tampering with the recount in Ohio, although the situation may be less sinister than that --- albeit irregular.] All of these concerns add to uncertainties about the integrity of the overall election processes.

    Some of you may wonder why, with modern technology, the voting process cannot be more robust. Whether the potential weak links are mostly technological or not, the process can certainly be made significantly more trustworthy. Indeed, it seems to be better in many other countries than in the U.S.; for example, Ireland, India, and the Netherlands seem to be taking integrity challenges seriously. As technologists, we should be helping to ensure that is the case -- for example, by participating in the standards process or perhaps by aiding the cause of the Open Voting Consortium. However, the end-to-end nature of the problems includes many people whose accidental or intentional behavior can alter the integrity of the overall process, and thus contains many nontechnological risks.

    Similar concerns also arise in many other computer-related application areas, such as aviation, health care, defense, homeland security, law enforcement, intelligence, and so on -- with similar conclusions. In each case, a relevant challenge is that of developing and operating end-to-end trustworthy environments capable of satisfying stringent requirements for human safety, reliability, system integrity, information security, and privacy, in which many technological and nontechnological issues must be addressed throughout the computer systems and operational practices. Overall, technologists need to provide adequate trustworthiness in our socially important information systems, by technological and other means. Research and development communities internationally have much to offer in achieving trustworthy computer-communication systems. However, they also have serious responsibilities to be aware of the other implications of the use of these systems.

    Peter G. Neumann moderates the on-line ACM Risks Forum (http://www.risks.org). Its annotated index (http://www.csl.sri.com/neumann/illustrative.html) includes many cases with technological and nontechnological causes.

    ========================================================

    Inside Risks 175, CACM 48, 1, January 2005

    Not Teaching Viruses and Worms Is Harmful

    George Ledin Jr

    Computer security courses are typically of two kinds. Most are of the first kind: guided tours to concepts and terminology, descriptive courses that inform and acquaint. These courses have few or no prerequisites and little technical content.

    Courses of the second kind are taken primarily by computer science majors. Usually electives, they offer a technical menu, often focused on cryptography. Systems, access control models, protocols, policies, and other topics tend to get less coverage.

    A critically important topic, viruses and worms, gets the least coverage. Anecdotal and historical information about them may be presented, but source-code discussions are rare, and programming a virus or worm and its antidote is seldom required.

    Not too long ago, crypto was a taboo topic subject to government controls. Developments, such as Philip Zimmermann's PGP, helped remove these prohibitions, and serious academic research is now routine. Virus and worm programming should likewise be mainstreamed as a research activity open to students. As previously with crypto, there are barriers to overcome.

    The first barrier is the perception of danger. Bioscience and chemistry students conduct experiments with microorganisms and hazardous substances under supervised laboratory conditions. Computer science students should be able to test viruses and worms in safe environments.

    The sciences do not shy away from potentially dangerous knowledge. The West Nile virus' spread across the United States has been tracked not just by health officials but researched by students at hundreds of universities. If powerfully lethal viruses such as Ebola cannot be studied, how would vaccines or cures be developed?

    Those opposed to teaching these ``dangerous'' topics compare malicious software to explosives and weapons that are designed to kill, maim, and cause physical destruction. A course that teaches how to write malware may be analogized to a chemistry course that teaches how to make Molotov cocktails or a physics course that teaches how to build nuclear warheads. These are clearly sobering concerns.

    There is no doubt that viruses and worms are being investigated for their potential as weapons. The art of cyber war is being taught, secretly, at military academies and espionage agencies. Biological and chemical warfare are also taught. Armed forces, diplomatic management, and intelligence services must be prepared. The dangers of bioterrorism are real and serious; ``weaponized'' anthrax, for example, is a grave threat.

    Using extreme examples of biological warfare agents and nuclear bombs gives the impression that malware is very rare and exotic, when it actually is a relatively common, costly nuisance. It is more useful to compare malware to the many infectious agents that are present in daily life, household diseases such as influenza, malaria, measles, whooping cough, noroviruses, hepatitis, and cholera, which have a huge worldwide impact and are being studied in many laboratories. Research on these diseases is the foundation for our understanding of all infectious agents, from mild to deadly.

    The second barrier is moral clarity. Launching malware has serious legal consequences. Is teaching aiding and abetting? Virus and worm study must include a strong ethical component and a review of cases and legislation.

    Ethical and legal worries are obvious hurdles. Fearing that they would be held responsible for anything done by their students, and being already overwhelmed by trying to keep up with a subject that keeps accelerating, computer science faculties find little incentive to do something that they later might regret.

    Computer science students should learn to recognize, analyze, disable, and remove malware. To do so, they must study currently circulating viruses and worms, and program their own. Programming is to computer science what field training is to police work and clinical experience is to surgery. Reading a book is not enough. Why does industry hire convicted hackers as security consultants? Because we have failed to educate our majors.

    Which brings us to the third barrier: university faculties' lack of expertise. Most professors have never studied a worm or virus, much less programmed one themselves. Yet, having overcome the first two barriers, most professors will welcome the opportunity to teach the next generation of students to be malware literate. This would help not only with viruses and worms but also with other pests, such as adware, nagware, and spyware. And our graduates would contribute to the public good.

    Professor George Ledin Jr is Chair of Computer Science at Sonoma State University. He has been teaching computer security since 1975.

    ========================================================