NOTE: Reuse for commercial purposes is subject to CACM and author copyright policy.

If you wish to see earlier Inside Risks columns, those through December 2003 are directly accessible from the earlier columns index; all more recent columns are indirectly accessible there as well. The 2005 and 2006 columns have their own index pages.

Inside Risks Columns, 2004

  • Spamming, Phishing, Authentication, and Privacy, Steve Bellovin, December 2004
  • Evaluation of Voting Systems, Poorvi L. Vora, Benjamin Adida, Ren Bucholz, David Chaum, David L. Dill, David Jefferson, Douglas W. Jones, William Lattin, Aviel D. Rubin, Michael I. Shamos, and Moti Yung, November 2004
  • The Non-Security of Secrecy, Bruce Schneier, October 2004
  • The Big Picture, Peter G. Neumann, September 2004
  • Close Exposures of the Digital Kind, Lauren Weinstein, August 2004
  • Insider Risks in Elections, Paul Kocher and Bruce Schneier, July 2004
  • Optimistic Optimization, Peter G. Neumann, June 2004
  • Artificial Stupidity, Peter J. and Dorothy E. Denning, May 2004
  • Coincidental Risks, Jim Horning, April 2004
  • Risks of Monoculture, Mark Stamp, March 2004
  • Outsourced and Out of Control, Lauren Weinstein, February 2004
  • Believing in Myths, Marcus J. Ranum, January 2004


    Inside Risks 174, CACM 47, 12, December 2004

    Spamming, Phishing, Authentication, and Privacy

    Steve Bellovin

    It isn't news to anyone that email is becoming almost unusable. Unsolicited commercial email (spam) peddles a variety of dubious products, ranging from pharmaceuticals to abandoned bank accounts. The so-called ``phishers'' try to steal user names and passwords for online banking. And then, we have viruses, worms, and other malware. Although there are would-be solutions to these problems, some of them have the potential to do far more harm than good.

    An obvious approach to spam-fighting is some form of sender authentication. If we know who really sent the email, we can deal with it: either accept it, because it's from someone we know, or---if it's spam---we can chase down whoever sent it and take some sort of judicially-sanctioned revenge. It's simple, it's obvious---and it doesn't work, for a number of reasons. Fundamentally, most people accept---and want to accept---email from more or less anyone. Just while writing this essay, I received no fewer than five legitimate pieces of email, addressed directly to me, from new correspondents. It does no good to have some assurance of a total stranger's identity if you're going to accept the email anyway. More fundamentally, identity is a concept that makes sense only within a shared context, without which the sender's authenticated identity, as opposed to the merely asserted identity, means very little.

    Of course, spammers can authenticate themselves, too. Just as today they buy throw-away domains, in a world of authenticated email they'll buy throw-away authenticated identities. Indeed, anecdotal evidence suggests that the spammers have been the fastest adopters of the prototype authenticated email schemes. We thus have the following conundrum: if you use these anti-spam techniques, statistically you're more likely to be a spammer!

    Beyond that, remember that much spam comes from hacked machines. Someone who ``0wnz'' your machine can steal your online identity quite easily, including (of course) any cryptographic keys you possess.

    If authentication techniques don't work against spam, do they help protect us from phishers? Here, at least, there is reason for optimism: a phishing attack is an impersonation attempt; if we can really authenticate the sender of the email, would we not be safe?

    Unfortunately, the proposed email authentication techniques won't do the job. What you really want is proof that ``this is the party to whom I gave my money''; all that these schemes can establish is that the sender owns some plausible domain name. They say nothing about your prior relationships. We don't have to imagine this attack; one of the very first phishing incidents involved email appearing to come from a bank's real domain, while the actual sending domain was merely a plausible look-alike.

    Consider, instead, a scheme where, when you opened an account, the bank sent you a copy of its certificate. This certificate could indeed be used to authenticate any email from the bank. Note the crucial difference: such a certificate is bound to a previous transaction, rather than to a name.
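    One way to realize such a transaction-bound binding is trust-on-first-use: pin a fingerprint of the certificate the bank sent when the account was opened, and later accept only mail whose certificate matches that fingerprint. The sketch below is purely illustrative (the ``certificate'' is an opaque byte string; a real system would also verify a signature over the message):

```python
import hashlib

def fingerprint(cert_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate blob."""
    return hashlib.sha256(cert_bytes).hexdigest()

# At account opening: the bank's certificate arrives as part of the
# transaction, and its fingerprint is pinned locally.
bank_cert = b"-----BEGIN CERT----- bank-of-example -----END CERT-----"
pinned = fingerprint(bank_cert)

def mail_is_from_bank(presented_cert: bytes) -> bool:
    """Accept only mail whose certificate matches the pinned fingerprint."""
    return fingerprint(presented_cert) == pinned

# The binding is to the prior transaction, not to a name: a phisher's
# certificate, however plausible its domain name, won't match the pin.
assert mail_is_from_bank(bank_cert)
assert not mail_is_from_bank(b"-----BEGIN CERT----- bank-of-exarnple -----END CERT-----")
```

The design point is the same as the column's: the check is anchored in something the customer already received, so no claim about domain names needs to be trusted at all.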

    Authenticated email solves some problems. If nothing else, it hinders the spread of email worms, since the infected machine will be positively identified. Furthermore, there are some situations where a list of permitted senders is in fact used. In the best of these schemes, purported identity is used to drive some sort of challenge/response scheme. Authenticated email would provide some protection here, though even without it successful ``joe jobs''---forgery of a legitimate user's identity---are relatively uncommon. The spammer would have to select source-destination pairs of addresses to bypass simple-minded permitted sender lists.
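    The last point---that a spammer must guess valid source-destination pairs, not just plausible senders---can be sketched as a minimal allowlist check; all addresses here are hypothetical:

```python
# A permitted-sender list keyed on (sender, recipient) pairs rather than
# on senders alone; addresses are invented for illustration.

def is_permitted(sender: str, recipient: str,
                 allowlist: set[tuple[str, str]]) -> bool:
    """Accept mail only if this exact source-destination pair is allowed."""
    return (sender, recipient) in allowlist

allowlist = {("alice@example.org", "bob@example.net")}

# Forging a legitimate sender is not enough; the spammer must also hit a
# recipient that actually permits that particular sender.
assert is_permitted("alice@example.org", "bob@example.net", allowlist)
assert not is_permitted("alice@example.org", "carol@example.com", allowlist)
```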

    However, there are serious disadvantages. Some are logistical: with some of the proposals, inbound mail-forwarding services (such as professional-society alias forwarders) won't work properly, because people will be sending mail from more or less anywhere while claiming the forwarder's domain. Other schemes have trouble with mailing lists, such as those that add administrative information to outbound messages.

    But the most serious problem is one of privacy. If all mail must, as a practical matter, be signed, all mail becomes traceable. (Many anti-spam payment schemes share this problem.) The U.S. Supreme Court has noted that ``anonymous pamphlets, leaflets, brochures and even books have played an important role in the progress of mankind. Persecuted groups and sects from time to time throughout history have been able to criticize oppressive practices and laws either anonymously or not at all ... It is plain that anonymity has sometimes been assumed for the most constructive purposes.'' Do we want an electronic world without such advantages?

    Steven M. Bellovin is a researcher at AT&T Labs in Florham Park, NJ.

    NOTE: The Supreme Court reference is TALLEY v. CALIFORNIA, 362 U.S. 60 (1960).


    Inside Risks 173, CACM 47, 11, November 2004

    Evaluation of Voting Systems

    Poorvi L. Vora, Benjamin Adida, Ren Bucholz, David Chaum, David L. Dill, David Jefferson, Douglas W. Jones, William Lattin, Aviel D. Rubin, Michael I. Shamos, and Moti Yung

    The recent spate of security issues and allegations of ``lost votes'' in the US demonstrates the inadequacy of the standards used to evaluate our election systems. The current standards (the FEC Voting Systems Standards) along with the revision being developed by IEEE 1583 ([Deutsch and Berger, CACM, October 2004]) are poor from another perspective: they establish a single pass/fail threshold for all systems, thereby eliminating incentives for existing suppliers to improve their products and rendering the market unattractive to new entrants. Moreover, they fail to precisely define the properties that should be required of a voting system. Instead, the standards rely on specific designs that are more than 15 years old. These legacy designs handicap promising new approaches, such as the various voter-verified printing schemes. New systems are unnecessarily burdened, while their substantial advantages go unrecognized.

    A set of well-defined properties would encourage the development and commercialization of better voting systems, especially when combined with objective ways to measure performance with respect to those properties. The overall result would then resemble the quantitative Federal ratings for automobiles, where features such as vehicle safety and fuel efficiency form a basis for Consumer Reports-style comparative tables. Similarly, specific performance rating guidelines for different aspects of voting systems would provide meaningful metrics upon which system developers could compete. Decision makers, both regulatory and purchasing, would then be free to establish their own minimums for these metrics. Such a rating system can thus cleanly disentangle the development of the technical evaluation process from the various political and regulatory processes.
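    The ratings idea can be made concrete with a toy sketch: per-property scores for each system, with each decision maker applying its own minimums. All system names, property names, and numbers below are invented for illustration:

```python
# Hypothetical per-property ratings on a 0-10 scale; names and scores
# are invented, not real evaluations of any actual voting system.
ratings = {
    "SystemA": {"integrity": 8, "secrecy": 7, "usability": 5},
    "SystemB": {"integrity": 6, "secrecy": 9, "usability": 8},
}

def meets_minimums(scores: dict[str, int], minimums: dict[str, int]) -> bool:
    """A purchaser's own pass criterion: every property at or above its floor."""
    return all(scores.get(prop, 0) >= floor for prop, floor in minimums.items())

# One jurisdiction's minimums; another jurisdiction is free to choose
# different floors against the same published ratings.
minimums = {"integrity": 7, "usability": 4}
acceptable = [name for name, scores in ratings.items()
              if meets_minimums(scores, minimums)]
assert acceptable == ["SystemA"]
```

The point of the structure is exactly the disentangling described above: the technical community maintains the ratings; the political choice of thresholds lives entirely in `minimums`.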

    The Chair of the EAC (US Federal Election Assistance Commission), DeForest B. Soaries, Jr., recently asked the technical community for assistance in determining a new standard. This community is no stranger to the area of voting system properties and standards: a number of authors have tried to characterize requirements, and, in 2002, the Workshop on Election Standards and Technology, sponsored by NSF, AAAS, and CalTech/MIT, addressed similar issues.

    The performance properties for voting systems might include essentially the following: integrity of the votes (both voter verification, ``I can check that my vote was captured correctly'' and public verification, ``anyone can check that all recorded votes were counted correctly''); ballot secrecy (both voter privacy and resistance to vote selling and coercion); robustness (including resistance to denial of service attacks); usability and accuracy (including access for the disabled); and transparency (both of mechanism and election data).

    The inherent differences in system architectures can be characterized abstractly on two levels. First, architectures are compared by how well each can satisfy the overall properties. Then, they are characterized by the kinds of building blocks they need and by the assumptions they need to make about those blocks. At a more concrete level, a standard should provide an objective way to measure, for a particular actual system implementation, how well its building block instances ensure the properties required of them by the architecture of that system.

    Suitable performance evaluation and measurement standards already exist for several types of building blocks: FCC 47CFR shielding and emissions, FIPS rating of tamper-resistant equipment, and the Common Criteria for software. For some properties, objective and repeatable measures of overall performance can be defined. For example, the accuracy of a user interface in capturing voter intent can be experimentally tested in a practical and repeatable manner, with the result expressed as an error rate. ``Tiger team'' and code review security evaluation (while certainly not foolproof) should play a role along with ordinary reliability testing. Ideally, this process of developing the properties and characterizing architectures would be exceptionally transparent, such as that for Internet RFCs, and would be subject to appropriate peer review. The refinement and adaptation of the measurement techniques would proceed as an ongoing parallel activity.

    The EAC's request for assistance is a unique chance to positively affect the quality of our election systems, by tackling this new scientific and technical challenge and building a solid foundation. The aim should be to impact the 2006 elections, though the timing is already tight: the EAC is required to present technical recommendations to the House Administration Committee in April 2005. We, the technical community, are faced with a significant need, a rare opportunity, and a growing urgency for coordinated technical effort in this area. (See the Voting Systems Performance Rating effort for further details.)


    Inside Risks 172, CACM 47, 10, October 2004

    The Non-Security of Secrecy

    Bruce Schneier

    Considerable confusion exists between the different concepts of secrecy and security, which often causes bad security and surprising political arguments. Secrecy usually contributes only to a false sense of security.

    In June 2004, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission requires telephone companies to report large disruptions of telephone service, and wants to extend that to high-speed data lines and wireless networks. DHS fears that such information would give cyberterrorists a ``virtual road map'' to target critical infrastructures.

    Is publishing computer and network vulnerability information useful, or does it just help the hackers? This is a common question, as malware takes advantage of software vulnerabilities after they become known.

    The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is beneficial to security only in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they're lost, they're lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there's no way to recover security. Trying to base security on secrecy is simply bad design.

    Cryptography is based on secrets -- keys -- but look at all the work that goes into making keys effective. Keys are short and easy to transfer. They're easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography's most basic principles is to assume that the algorithm is public.
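    The principle that keys, not algorithms, should carry the secrecy can be illustrated with a fully public MAC algorithm from the Python standard library; only the key is secret, and updating it is trivial:

```python
import hashlib
import hmac
import secrets

message = b"transfer $100 to account 42"

# The algorithm (HMAC-SHA256) is completely public; all of the security
# resides in the short, easily transferred key.
key = secrets.token_bytes(32)
tag = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())

# Rotating the secret is a one-line key change; the public algorithm is
# untouched, and tags made under the old key simply stop verifying.
new_key = secrets.token_bytes(32)
assert not hmac.compare_digest(
    tag, hmac.new(new_key, message, hashlib.sha256).digest())
```

Contrast this with a secret algorithm: once leaked, it cannot be ``rotated'' without redesigning and redeploying the whole system.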

    A fallacy of the secrecy argument is the assumption that secrecy works. Do we really think that the physical weak points of networks are a mystery to the bad guys, or that attackers are unable to discover vulnerabilities on their own?

    Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies denied their existence and wouldn't bother fixing them, believing in the security of secrecy. And because customers didn't know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we'll have vulnerabilities known to a few in the security community and to much of the hacker underground.

    Secrecy prevents people from assessing their own risks. Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose those that best serve their needs. Without public disclosure, companies can hide their weaknesses.

    Who supports secrecy? Software vendors such as Microsoft want to keep vulnerability information secret. The Department of Homeland Security's recommendations were loudly echoed by the phone companies. The interests of these companies are served by secrecy, not the interests of consumers, citizens, or society.

    In the post-9/11 world, we're seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures -- and even routine government operations -- secret: information about the infrastructure of plants, government buildings, and profiling information used to flag certain airline passengers; standards for the Department of Homeland Security's color-coded terrorism threat levels; even information about government operations without any terrorism connections.

    This keeps terrorists in the dark, especially ``dumb'' terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry -- to whom the government is ultimately accountable -- is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can't improve because there's no public debate or public education.

    Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks: they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy. Trying to hide that a network has hubs is futile. It's better to identify and protect them.

    We're all safer when we have the information we need to exert market pressure on vendors to improve security. We are all less secure if software vendors don't make their security vulnerabilities public, and if telephone companies don't have to report network outages. Governments operating without accountability serve their own security interests, not the people's.

    Bruce Schneier, CTO of Counterpane Internet Security, Inc., wrote Beyond Fear: Thinking Sensibly About Security in an Uncertain World, and produces Crypto-Gram -- his free monthly newsletter.


    Inside Risks 171, CACM 47, 9, September 2004

    The Big Picture

    Peter G. Neumann

    In this column we provide a high-level overview of some of the most pressing problem areas associated with risks to the constructive use of information technology. Although this may seem repetitive to those of you who have seen particular problems discussed in our previous columns, each of these topics presents numerous challenges that must be urgently confronted. The primary message of this column is that the totality of all the interrelated challenges requires concerted efforts that transcend the individual problems and that reach agreement on viable actions for the future, even where strong disagreements exist today.

    * System development practice. Readers of Inside Risks have long been aware of major failures in procuring and developing large systems, such as the Internal Revenue Service modernization efforts, US and UK air traffic control systems, and German TollCollect. Large-scale software development remains a high-risk activity.

    * Trustworthiness. System and network security, reliability, survivability, interoperability, predictable behavior, and other important attributes are for the most part not receiving enough dedicated attention. Our computer-communication infrastructures are riddled with flaws. In the absence of really serious attacks, governments and system developers seem to have been lulled into a false sense of security. Thus far, neither proprietary nor source-available system developers are sufficiently militant in satisfying critical needs. In mass-market software, the patch mentality seems to have won out over well-designed and well-implemented systems.

    * The Internet. Increasingly, many enterprises are heavily dependent on the Internet, despite its existing limitations. Internet governance, control, and coordination create many contentious international problems. Worms, viruses, and other malware are often impediments, as is the ubiquitous spam problem. The Internet infrastructure itself is susceptible to denial of service attacks and compromise, while the lack of security and dependability of most attached systems also creates problems (e.g., open relays).

    * Critical infrastructures. Despite past recognition of the pervasiveness of serious vulnerabilities, critical national infrastructures are typically still vulnerable to attacks and accidental collapses. For example, massive power outages are still not unusual, despite supposed improvements.

    * Privacy. Desires for homeland security have typically postulated that it is necessary to sacrifice privacy in order to attain security, although this is highly debatable. Sacrificing privacy does not necessarily result in greater security. Furthermore, serious inroads to privacy protection have occurred that may be very difficult to reverse. Surveillance is becoming more widespread, but often without adequately respecting privacy concerns -- as illustrated by the PATRIOT Act. Legitimate needs for anonymity or at least pseudonymity seem to be suppressed.

    * Accountability. Oversight of computer activities is often as weak as oversight of corporate practices. On the other hand, audit mechanisms must also respect privacy needs. As one example, we have often noted in previous Inside Risks columns that today's unauditable all-electronic voting systems are seriously lacking in accountability -- in fact, they provide no meaningful assurances that votes are correctly recorded and processed. (The October 2004 special issue of the CACM is devoted to the integrity of election systems.)

    * Intellectual property. Entertainment industry efforts have sought fairly draconian copyright policies that run counter to consumer interests -- and, in the eyes of some analysts, contrary to good economics. The Inducing Infringement of Copyrights Act of 2004 (INDUCE) is highly controversial. More sensible policies are desperately needed.

    * Education. U.S. university curricula in software engineering and trustworthy systems seem to be less responsive to needs of critical systems than in certain other countries. Furthermore, significant decreases have recently been reported in U.S. undergraduate computer-science student enrollments, perhaps because of a noticeable reduction in foreign students. This situation has serious long-term implications worldwide.

    As noted above, it is the totality of these problems in the large that is of primary concern. Simplistic local approaches are not effective. (Recall the discussion of the risks of optimistic optimization in the June 2004 column.) Much greater foresight and serious system-oriented thinking are urgently needed, along with private-public cooperation.

    Peter G. Neumann moderates the on-line ACM Risks Forum, which provides many specific examples of the generalizations in this column. His Illustrative Risks compendium provides an annotated index to relevant cases. His Web site also contains various papers and reports on how we might overcome many of these problems.


    Inside Risks 170, CACM 47, 8, August 2004

    Close Exposures of the Digital Kind

    Lauren Weinstein

    Sometimes the impacts of powerful technologies are relatively clear and pretty much expected. For example, we all realize that nuclear bombs are capable of impacting the world in drastic and dramatic ways via their very existence, even when not detonated.

    But some technologies, even seemingly ordinary consumer products, can impact global events and society in unexpected ways, and the risks they present to the status quo may be surprising indeed -- to the technologists who create them, the firms that market them, and the consumers who use them.

    A particularly notable example from today's headlines is the digital camera. Although still a relatively new technology, these cameras have already become ubiquitous. Used by professional photographers and casual amateurs alike, they can hide in pockets, swing from keyrings, and have been embedded into an astounding array of cell phones. These cameras combine many complex aspects of leading-edge computing and imaging technologies in ways that were unimaginable a few years ago at the price points now commonplace.

    But digital cameras have special attributes that make them far more than mere tourist accessories, and can result in qualitative differences from their film-based cousins. With their ability to take large numbers of photos, often of extremely high quality, at essentially zero cost per shot, digital cameras encourage the capturing of scenes that might otherwise have been left as ephemeral events, never saved for posterity if film were the only choice available.

    Since no film developing is required, these virtual piles of photos can be shot without concerns about what nosy film processors might think of their images. What's more, digital still shots and even digital movies can be quickly copied and easily modified, are trivial to archive in quantity on cheap CDROMs, and can be instantly transmitted via e-mail around the planet. Their power to rapidly influence events is enormous, and only now really becoming understood.

    The expanding exposures of Iraqi prisoner abuse and torture are a case in point. It's one thing to simply read an announcement that an investigation into abuse is being initiated. It's something else again to actually see a naked prisoner leashed like an animal or other prisoners ordered into poses reminiscent of cheap porn films, not to mention the even more disturbing related images that are yet to be made public.

    If digital cameras did not exist, it's unlikely that the photos and movies that triggered this firestorm would have been created in such quantity and explicit detail. It's even less likely that such images would have been archived en masse and so found their way to public scrutiny. In this case, digital photos turned what might otherwise have been a mere military press release into a public outrage likely to change the course of history in significant ways.

    Of course, like most other technologies, digital cameras are merely tools -- to be used for good or ill. And as we've seen, the photographer can't always control how shots will ultimately be used or what effects those images are likely to impart.

    Thus, we can see digital cameras as another of those quintessential technological developments whose potential power far exceeds the sum of its parts. On one hand, they are capable of being used to invade individuals' privacy in devastating ways, with cell-phone cameras now a particular focus of concerns, given their wireless connectivity capable of rapid photo transmissions. Yet, digital cameras may also be used to expose nightmarish crimes and horrors of all sorts. It appears that this technology has become the tool of choice for whistleblowers and intelligence operatives alike, and for all manner of shutterbugs in between.

    While there are many observers who laud the role of these cameras, there will also be those who blame such devices for any and all perceived negative impacts. But as with so many inventions down through history, the rapidly evolving capabilities of digital cameras ultimately cannot be successfully suppressed, nor their powers revoked, even if we wanted to do so.

    In many respects, we still have not learned how to live in a balanced manner with many advanced technologies -- not just digital cameras. The true path to enlightenment in this regard is through sensible laws and common sense, not by endeavoring to blame the messengers for images many would rather not see, sounds that most might wish not to hear, and insight into our own flaws that nearly all of us probably would much prefer to ignore.

    Lauren Weinstein is co-founder of People For Internet Responsibility. He moderates the Privacy Forum.


    Inside Risks 169, CACM 47, 7, July 2004

    Insider Risks in Elections

    Paul Kocher and Bruce Schneier

    Many discussions of voting systems and their relative integrity have been primarily technical, focusing on the difficulty of attacks and defenses. This is only half of the equation: it's not enough to know how much it might cost to rig an election by attacking voting systems; we also need to know how much it would be worth to do so. Our illustrative example uses the most recent available U.S. data, but is otherwise not intended to be specific to any particular political party.

    In order to gain a clear majority of the House in 2002, Democrats would have needed to win 13 seats that went to Republicans. According to Associated Press voting data, Democrats could have added 13 seats by swinging 49,469 votes. This corresponds to changing just over one percent of the 4,310,198 votes in these races and under 1/1000 of the 70 million votes cast in contested House races. The Senate was even closer: switching 20,703 votes in Missouri and New Hampshire would have provided Democrats with the necessary two seats.

    Of course, it isn't possible to anticipate exactly how much fraud or undetected error would alter the winner of each race. It would also be suspicious if Democrats won 13 districts by exactly one vote. As a result, a modest number of additional votes would need to be changed. In 2002, fraud that changed 2% of the votes in a few contested races (or 1/250 of the total votes) would have completely changed the balance of power in Congress.

    According to the Federal Election Commission, some House candidates spent up to $8 million in 2002, although expenditures of $3 to $4 million were typical. Thus, it is easily worth $3 million for a candidate to change a race from a statistical dead heat into a certain victory. Each 1% that is added to a candidate's odds of victory (and hence each 1% removed from the opponent's odds) is worth $60,000.

    The outcomes of the 13 closest Democratic losses in 2002 would have changed by swinging an average of 3,805 votes each. If shifting 5,000 votes is worth $3 million, each vote is worth $600. A discount is required to reflect the additional legal risks and moral problems involved in committing fraud, although these effects depend on the people and situations involved. The following analysis makes the conservative assumption of $400 per vote.

    So, what is it worth to compromise a voting machine? Suppose one machine collects 250 votes, with roughly half for each candidate in a close election. Rigging the machine to swing all of its votes in one race would be worth $50,000. To avoid detection, fraudsters may be less greedy. Swinging 10% of the opposition's votes on any given machine would be worth $5,000 in a close race. Thus, it is necessary to assume that attacks against individual voting machines are a serious risk, particularly if a few dozen machines could be affected. For example, machine tampering is worthwhile if machines are stored without strong physical security.
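    The arithmetic above can be checked directly; the figures are those quoted in the text, and the $400-per-vote discount is the column's own conservative assumption:

```python
# Figures from the 2002 AP/FEC data cited in the text.
swing_votes = 49_469          # votes needed to flip the 13 closest House seats
contested_votes = 4_310_198   # total votes cast in those races

share = swing_votes / contested_votes
assert 0.011 < share < 0.012  # "just over one percent"

# $3M of campaign value / ~5,000 swung votes = $600 per vote, discounted
# to a conservative $400 per vote to reflect legal and moral risk.
assert 3_000_000 / 5_000 == 600
value_per_vote = 400

# One 250-vote machine, split roughly evenly between two candidates:
# swinging all ~125 opposition votes is worth $50,000; swinging 10% of
# them is worth $5,000.
opposition = 250 // 2
assert opposition * value_per_vote == 50_000
assert opposition * value_per_vote // 10 == 5_000
```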

    Election data is also useful for understanding the threats against voting machine designs. Any voting machine type deployed in 25% of precincts would register enough votes that malicious software could swing the balance of power without creating detectable statistical abnormalities. According to the FEC, Congressional candidates together legally raised over $600 million in 2002. One might conservatively estimate that stealing control over the House of Representatives is worth over $100 million to the party that would otherwise lose. (Of course, official candidates and parties need not be involved or even aware of fraud; unscrupulous ``supporters'' can make the arrangements.) In practice, the threats are even greater, since one attack could affect many elections.

    Who are the adversaries? Elections face threats from system developers and election insiders, foreign governments, radical extremists, partisan operatives, and others. Voting systems must be able to withstand attackers with extraordinary creativity and dedication -- much more so than the rather simplistic and unmotivated creators of viruses and worms -- because there are strong rational (though perverse) motives for election fraud. Compared with violence and other illegal activities extremists use, electoral fraud is much safer and much more likely to have a desired effect.

    The evidence clearly shows voting systems must be designed to counter very well-funded and sophisticated opponents, including those with massive financial resources and the ability to join design teams, infiltrate manufacturing facilities, fabricate malicious integrated circuits, tamper with compilers, and mount a wide range of other attacks. Checks and balances, such as local party observers, help against some attacks but not others. The threats are real, and openness and verifiability are critical to election security.

    Paul Kocher heads Cryptography Research, Inc.; Bruce Schneier is CTO of Counterpane Internet Security, Inc.


    Inside Risks 168, CACM 47, 6, June 2004

    Optimistic Optimization

    Peter G. Neumann

    Many people continue to ignore the long-term implications of decisions made for short-term gains, often based on overly optimistic or fallacious assumptions. In principle, much greater benefits can result from far-sighted vision based on realistic assumptions. For example, serious environmental effects (including global warming, water and air pollution, pesticide toxicity, and adverse genetic engineering) are largely ignored in pursuit of short-term profits. However, conservation and environmental protection appear much more relevant when considered in the context of long-term costs and benefits. Also, governments are besieged by intense short-sighted lobbying by special interests. Insider financial manipulations have serious long-term economic effects. Research funding has been increasingly focusing on short-term returns, to the detriment of the future.

    Computer system development is a particularly frustrating example. Most system developers are unable or unwilling to confront life-cycle issues up front and in the large, although it is clear to many of us that up-front investments can yield enormous benefits later in the life cycle. In particular, defining requirements carefully and wisely at the beginning of a development effort can greatly enhance the entire subsequent life cycle and reduce its costs. This process should ideally anticipate all essential requirements explicitly, including (for example) security, reliability, scalability, and relevant application-specific needs such as enterprise survivability, evolvability, maintainability, usability, and interoperability. Many such requirements are typically extremely difficult to add once system development is far advanced. Furthermore, requirements tend to change; thus, system architectures and interfaces should be relatively flaw-free and inherently adaptable without introducing further flaws. Insisting on principled software engineering (such as modular abstraction, encapsulation, and type safety), sensible use of sound programming languages, and use of appropriate support tools can significantly reduce the frequency of software bugs. All of these up-front investments can also reduce the subsequent costs of debugging, integration, system administration, and long-term evolution -- if sensibly invoked.

    The value of up-front efforts is a decades-old concept. However, it is often widely ignored or done badly, for a variety of reasons -- such as short-term profitability, rush to market, lack of commitment to quality, lack of liability concerns, ability to shift late life-cycle costs to customers, inadequate education, experience, and training, and unwillingness to pursue other than seemingly easy answers.

    Overly optimistic development plans that ignore these issues tend to win out over more realistic plans, but can lead to difficulties later on -- for developers, system users, and even innocent bystanders. The annals of the Risks Forum are littered with systems that did not work properly and people who did not perform according to the assumptions embedded in the development and operational life-cycles. (An example is seen in the mad rush to low-integrity paperless electronic voting systems with essentially no operational accountability.) As we have noted here before, the lessons of the RISKS archives are widely ignored. Instead, we have a caveat emptor culture, with developers and vendors disclaiming all warranties and liability.

    There are many would-be solutions that result in part from short-sighted approaches. Firewalls, virus checkers, and spam filters all have some benefits, but also some problems. Firewalls could be more effective if they did not pass all sorts of executable content, such as ActiveX and JavaScript -- but many users want those features enabled. (To date, viruses and worms have been rather benign, considering the full potential of really malicious code.) However, active content and malware would be much less harmful in a well-architected environment that could sand-box executable content. Spammers adapt rapidly to whatever defenses they encounter; legislation seems too simplistic to make real inroads against them, and may simply drive them offshore.

    We need better incentives to optimize in larger contexts and for the long term, with realistic assumptions and appropriate architectural flexibility to adapt to changing requirements. Achieving this will require many changes in our research and development agendas, our software and system development cultures, our educational programs, our laws, our economy, our commitment, and -- perhaps most important -- in the availability of well-documented success stories to show the way for others. Particularly in critical applications, if it's not worth doing right, perhaps it's not worth doing at all. But as David Parnas has said, let's not just preach motherhood; let's teach people how to be good mothers.

    Peter Neumann moderates the ACM Risks Forum.


    Inside Risks 167, CACM 47, 5, May 2004

    Artificial Stupidity

    Peter J. and Dorothy E. Denning

    (The Year is now 2100.)

    "Great-Grandma, my history teacher mentioned computers today. What were they?" So asked Ancath.

    "Yes, I remember them. They were among us when I was a child. My own grandfather was among the original inventors. They were everywhere and calculated everything. They were part of life. The biggest invention of all was called The Internet. It connected all computers in our homes, our towns, our cities, and even our colonies on Moon, Mars, and Europa."

    "But Great-Grandma, what happened to it all?"

    "It was a sad story. From the beginning, the inventors dreamed of building computers that would be like people -- thinking, reasoning, understanding. They predicted they would achieve such artificial intelligences by 2030, when they expected to be able to build computers the size and power of a brain. Yet, no matter how hard they tried, it seemed that every computer did really stupid things, making mistakes that injured people, confusing their identities, or putting them out of business. In their endless quest for an artificial intelligence, the inventors started with simple things for everyday business and personal life: automated chauffeurs, pilots, radar cops, toll collectors, voice- menus, receptionists, call directors, reservation agents, help technicians, and complaint specialists; but these computers were invariably uncompassionate, insensitive, and error-prone. At first they thought the problem was a lack of computing power and an insufficient experience database. But by 2025, they had more computing power than any brain and more data than could be stored in a brain; that did not help. Believing that the problem was too few computers connected, the inventors offered their talents to the US Government, which in 2025 announced its intention to fully automate. They automated entire bureaucratic departments, replacing staffs of thousands with a single computer that did the same job. When the first chip containing the algorithms of government came off the production line, tongue-in-cheek politicians announced it as an historic breakthrough in the long quest to shrink government. They hailed it as an important step toward efficiency and cost-saving. Hundreds of thousands of Federal workers were laid off in 2030 when the automated government system came on."

    "That sounds pretty incredible, Great-Grandma!" said Ancath. "But what happened to it?"

    "As it turned out, they had created not artificial intelligence, but artificial stupidity. Soon the automated DEA started closing down pharmaceutical companies saying they were dealing drugs. The automated FTC closed down the Hormel Meat Company, saying it was purveying spam. The automated DOJ shipped Microsoft 500,000 pinstriped pants and jackets, saying it was filing suits. The automated Army replaced all its troops with a single robot, saying it had achieved the Army of One. The automated Navy, in a cost saving move, placed its largest-ever order for submarines with Subway Sandwiches. The FCC issued an order for all communications to be wireless, causing thousands of AT&T installer robots to pull cables from overhead poles and underground conduits. The automated TSA flew its own explosives on jetliners, citing data that the probability of two bombs on an airplane is exceedingly small.

    "Within ten years, the automated Federal Government had made so many mistakes, bankrupted so many businesses, and messed up so many lives that a great economic depression came upon the world. People everywhere were out of work; pollution, crime, homelessness, and hardship ran rampant. Finally, in 2050 a group of graybeard programmers -- who remembered enough about the programming of the automated Government system -- created a solution. They built an Automated Citizen, which they trained to be helpless and adoring of authorities, and they installed one on every Internet port. Soon the automated government was completely occupied with taking care of automated citizens; and it left all the people alone. With the Government out of their lives, people forged a new, free society, enabling us to celebrate this lovely Christmas here today."

    "Oh Great-Grandma, that is so wonderful! What a great story and happy ending! I love you!"


    ABOT1: I think I'm finally getting the hang of programming inter-citizen interactions. What do you think?

    ABOT2: It is stupid.

    Peter J. Denning is Chairman of the Computer Science Department at the Naval Postgraduate School in Monterey, California. He is a past president of ACM.

    Dorothy E. Denning is a Professor of Defense Analysis at the Naval Postgraduate School in Monterey, California.


    Inside Risks 166, CACM 47, 4, April 2004

    Coincidental Risks

    Jim Horning

    The story of the Aceville elections has received some attention in the national press, but it is worth considering from a Risks perspective. This column is based on reports by AP (Affiliated Press, Unusual Election Results in Ohio Town, 2/30/04) and Rueters (Losers Question Ohio Election, 2/30/04). The Aceville, OH, municipal elections last February -- the city's first time using the SWERVE electronic voting system -- led to the election of the alphabetically first candidate in all 19 races. This is an astonishing coincidence. Furthermore, every winning candidate, and Measure A, garnered 100% of the votes counted.

    ``I am extremely gratified by this mandate,'' said mayor-elect Neuman E. Alfred, who received 7,215 votes in a town with 7,213 registered voters. ``This is the greatest electoral landslide since the re-election of Iraqi President Saddam Hussein.''

    Byron Augusta, CEO of Advanced Automatic Voting Machines (AAVM), which supplied the SWERVE system, denied that there was anything suspicious about the coincidence that Alfred was also the AAVM technician in charge of the new voting machines. ``We are confident of the integrity of our employees, which is reflected in their unblemished record of electoral success. Reports that Alfred installed undocumented `software patches' the day before the election are completely unfounded. We could prove this to you, except that the machines now contain the software upgrade that Alfred installed the day after the election. Anyhow, our software was once certified tamper-proof by the Federal Election Commission. Any suggestion of hanky-panky is scurrilous and un-American. We were unquestionably the low-cost bidder.''

    Ohio Supervisor of Elections Ava Anheuser expressed no surprise that the alphabetically first candidate won every race. ``Don't you believe in coincidence?'' she asked. ``This is an example of Adam Murphy's Law: `If it's logically possible, sooner or later it's bound to happen.' AAVM downloaded the totals from the voting machines three times. There's nothing else to recount.''

    Rueters reported that several voters claimed to have voted for losing candidates, including mayoral candidate Zeke Zebronski, who said, ``I know this election was crooked. I voted for myself three times, and still got no votes.'' However, the Aceville Advertiser conducted an investigation and concluded that the complaints were the work of ``a small group of out-of-town academic Luddites with a paper fetish,'' and ``an even smaller group of agitators for `alphabetic equality'.'' ``They should remember that `America' starts and ends with A,'' chided Advertiser Editor-in-Chief Ada Augusta.

    Pundits are divided on whether this election was a statistical fluke, or is the harbinger of a statewide, or even national, trend. But many politicians are taking no chances. The Democratic Party is scrambling to find an A presidential candidate. ``We just don't see how Clark or Dean can beat Bush in this environment,'' said party spokeswoman April Brown. The newly-renamed All American Party's entire Ohio slate has filed to legally change their names, to Aaron Aaren, Abigail Aaren, etc. ``It's like one big family,'' said party secretary Absalom Aaren, ``and we expect to do very well in the next election.''

    The American Association of the Mentally Challenged has pressed for national adoption of the SWERVE system. Spokeswoman Ada Augusta stressed that ``This is the only available system that guarantees that your vote will be counted, whether you can cast it or not. And it will bring jobs to Aceville.''

    Measure A provided tax-exempt bond funding for the Aceville Automation Park, which will house new headquarters for both AAVM and the Advertiser.

    On a lighter note, the American Automobile Association was elected Dog Catcher, even though it wasn't on the ballot. ``This seems to be the first time a write-in candidate has been elected without any write-ins,'' said an AAA insider, who spoke on condition of anonymity.

    Regular readers of ``Inside Risks'' know that there is an important distinction between coincidence and causality. The fact that A preceded B does not mean that A caused B. The order of the candidates probably didn't influence enough voters to change Aceville's landslide results. However, ``out of an abundance of caution,'' election officials should have followed the advice of People for Randomized Alphabetic Ballots (PRAY4Ps). Putting names on the ballot in random order preserves faith in the fairness of the election. Of course, it is still possible for a random permutation to leave names in alphabetical order. Wouldn't that be a coincidence? I'd be happy to Risk It.

    Jim Horning is a member of the American Association for April Foolishness, and a co-founder of PRAY4Ps.


    Inside Risks 165, CACM 47, 3, March 2004

    Risks of Monoculture

    Mark Stamp

    In August 2003, W32/Blaster burst onto the Internet scene. By exploiting a buffer overflow in Windows, the worm was able to infect more than 1.4 million systems worldwide in less than a month. More diversity in the OS market would have limited the number of susceptible systems, thereby reducing the level of infection. An analogy with biological systems is irresistible.

    When a disease strikes a biological system, a significant percentage of the affected population will survive, largely due to its genetic diversity. This holds true even for previously unknown diseases. By analogy, diverse computing systems should weather cyber-attacks better than systems that tend toward monoculture. But how valid is the analogy? It could be argued that the case for computing diversity is even stronger than the case for biological diversity. In biological systems, attackers find their targets at random, while in computing systems, monoculture creates more incentive for attack because the results will be all the more spectacular. On the other hand, it might be argued that cyber-monoculture has arisen via natural selection---providers with the best security products have survived to dominate the market. Given the dismal state of computer security today, this argument is not particularly persuasive.

    Although cyber-diversity evidently provides security benefits, why do we live in an era of relative computing monoculture? The first-to-market advantage and the ready availability of support for popular products are examples of incentives that work against diversity. The net result is a ``tragedy of the (security) commons'' phenomenon -- the security of the Internet as a whole could benefit from increased diversity, but individuals have incentives for monoculture.

    It is unclear how proposals aimed at improving computing security might affect cyber-diversity. For example, increased liability for software providers is often suggested as a market-oriented approach to improved security. However, such an approach might favor those with the deepest pockets, leading to less diversity.

    Although some cyber-diversity is good, is more diversity better? In this regard, perhaps the ``good guys'' can take a cue from the ``bad guys''. Virus writers in particular have used diversity to their advantage. Polymorphic viruses are currently in vogue. Such viruses are generally encrypted with a weak cipher, using a new key each time the virus propagates, thus confounding signature-based detection. However, because the decryption routine cannot be encrypted, detection is still possible. Virus writers are on the verge of unleashing so-called metamorphic viruses, where the body of the virus itself changes each time it propagates. This results in viruses that are functionally equivalent, with each instance of the virus containing distinct software. Detection of metamorphic viruses will be extremely challenging.
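
    The polymorphic mechanics just described can be sketched in a few lines of code. The toy below (an illustration added here, not the column's own code, and nothing resembling real malware) re-encrypts a payload with a different XOR key on each "propagation": the body's bytes change every time, defeating a naive byte-signature match, while the unencrypted decryptor stub remains a constant that scanners can still detect.

```python
# Toy model of polymorphic re-encryption: same behavior, different bytes,
# but a fixed decryptor stub that signature scanners can still match.
DECRYPTOR_STUB = b"<xor-decrypt-loop>"  # stand-in for the fixed decryption routine

def propagate(payload: bytes, key: int) -> bytes:
    """Produce one 'instance': stub + key byte + XOR-encrypted body."""
    body = bytes(b ^ key for b in payload)
    return DECRYPTOR_STUB + bytes([key]) + body

def decrypt(instance: bytes) -> bytes:
    """What the stub would do at run time: recover the original payload."""
    key = instance[len(DECRYPTOR_STUB)]
    return bytes(b ^ key for b in instance[len(DECRYPTOR_STUB) + 1:])

payload = b"functionally identical logic"
copy1 = propagate(payload, 0x5A)  # keys would normally be chosen at random
copy2 = propagate(payload, 0xC3)

print(copy1 != copy2)                                # True: byte signatures differ
print(decrypt(copy1) == decrypt(copy2) == payload)   # True: identical behavior
print(copy1.startswith(DECRYPTOR_STUB))              # True: stub still matchable
```

    Because the stub cannot itself be encrypted, it is exactly the invariant that keeps signature-based detection of polymorphic code possible.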

    Is there defensive value in software diversity of the metamorphic type? Consider the following thought experiment. Suppose we produce a piece of software that contains a common vulnerability, say, a buffer overflow. If we simply clone the software -- as is standard practice -- each copy will contain an identical vulnerability, and hence an identical attack will succeed against each clone. Instead, suppose we create metamorphic instances, where all instances are functionally equivalent, but each contains significantly different code. Even if each instance still contains the buffer overflow, an attacker will probably need to craft multiple attacks for multiple instances. The damage inflicted by any individual attack would thereby be reduced and the complexity of a large-scale attack would be correspondingly increased. Furthermore, a targeted attack on any one instance would be at least as difficult as in the cloning case.
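
    The thought experiment can be made concrete with a toy source-level diversifier (a hypothetical sketch; real proposals typically operate during compilation or binary rewriting). Each instance receives differently shaped code -- a renamed local variable and inert padding statements -- yet computes the same function, so an exploit tuned to one instance's layout does not automatically fit the others.

```python
import random

# Hypothetical metamorphic build step: emit functionally equivalent source
# per instance by renaming a local and inserting inert statements.
TEMPLATE = """
def handle(request):
    {pad}
    {buf} = request[:256]   # the shared (and possibly vulnerable) logic
    return {buf}.upper()
"""

def metamorphic_instance(seed: int) -> str:
    rng = random.Random(seed)
    name = f"v{seed}_" + "".join(rng.choice("abcdef0123456789") for _ in range(8))
    pad = "; ".join(f"_{i} = {rng.randrange(1000)}" for i in range(rng.randint(1, 4)))
    return TEMPLATE.format(buf=name, pad=pad)

src1, src2 = metamorphic_instance(1), metamorphic_instance(2)
ns1, ns2 = {}, {}
exec(src1, ns1)
exec(src2, ns2)

print(src1 != src2)                                    # True: distinct code per instance
print(ns1["handle"](b"abc") == ns2["handle"](b"abc"))  # True: identical behavior
```

    As the paragraph above observes, every instance still contains the same logic flaw; diversity raises the attacker's cost of a large-scale attack rather than removing the vulnerability.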

    Common protocols and standards are necessary in order for networked communication to succeed and, clearly, diversity cannot be applied to such areas of commonality. For example, diversity cannot help prevent a protocol-level attack such as TCP SYN flooding. But diversity can help mitigate implementation-level attacks (e.g., exploiting buffer overflows). As with many security-related issues, quantifying the potential benefits of diversity is challenging. In addition, metamorphic diversity raises significant questions regarding software development, performance, maintenance, etc. In spite of these limitations and concerns, there is considerable interest in cyber-diversity, both within the research community and in industry; for an industry example, see Microsoft's discussion of individualization in the Windows Media Rights Manager.

    Mark Stamp, Assistant Professor of Computer Science at San Jose State University, recently spent two years working on diverse software for MediaSnap, Inc.


    Inside Risks 164, CACM 47, 2, February 2004

    Outsourced and Out of Control

    Lauren Weinstein

    Outsourcing (farming out production or other work) is not new. But when advanced technologies such as telecommunications and computing are applied to outsourcing, along with vast differences in pay around the world, the results can be unfair, unwise, alarming, and even dangerous. While frequently providing significant "productivity" enhancements, the associated negative risks include domestic unemployment and underemployment; privacy, security, and reliability concerns; and other serious problems.

    In the past, the outsourcing of manufacturing jobs has been qualitatively different from the very broad and rapidly expanding sort of outsourcing we're seeing today. Most of the vast range of jobs now being exported to inexpensive foreign labor markets from "higher-wage" countries would be impractical to outsource in such a manner without today's inexpensively accessed communications and data infrastructures.

    Improved corporate bottom lines are often cited to justify outsourcing. But the devastating impact of lost domestic employee positions cannot reasonably be ignored. Former employees are understandably bitter when their jobs are outsourced to foreign workers being paid only a small fraction of domestic wages. In at least one case, a company has used the threat of outsourcing to demand that its local employees accept basically the same low wages as foreign workers.

    The ability to cheaply communicate across the globe via voice and data networks has permitted vast outsourcing of customer service, health-care transcription and medical information processing, financial and systems analysis, software development, and many other extremely technical and sophisticated tasks. Highly skilled domestic workers, including many who probably have read this column on a regular basis, have seen their livelihoods lost to technologically enabled outsourcing and are now competing with teenagers for low-pay, unskilled jobs.

    As more customer-support call centers move to non-domestic locations, complaints from consumers about poor service rise. In many cases, language-related barriers cause communications difficulties. Computer manufacturer Dell, Inc. recently announced it would cease using its India-based call centers for corporate customers due to such complaints. However, ordinary non-corporate Dell consumers may still find themselves routed to offshore customer service representatives. (See for illuminating information about India-based call centers.)

    Large-scale outsourcing is growing at a frenetic pace around the globe. Many outsourced jobs involve countries where significant privacy laws do not exist; even if those laws are improved under pressure of potential lost business, effective enforcement would still appear to be highly problematic. Customer service outsourcing can give risky access to data such as names, addresses, Social Security numbers, telephone call records, and medical information. Recently, a Pakistani subcontract worker threatened to post U.S. patients' medical data on the Web if claimed back pay was not forthcoming.

    Software, sometimes of a critical nature, is now routinely subcontracted to foreign outsourced environments, bringing risks of development miscommunication or worse. The U.S. General Accounting Office noted the possibility of malicious changes to code since significant U.S. air traffic control system Y2K work had been subcontracted outside the U.S. without mandated background checks.

    There are even moves to outsource computer system administration to foreign centers, often in countries with poor (if any) computer security laws, creating the possibility of massive abuse of domestic systems by distant persons who could be difficult or impossible to effectively prosecute. Thanks to subcontracting, you might not even know that the company managing your system is using such facilities and personnel.

    There are many fine workers performing outsourced tasks around the world. Yet, it is more difficult to maintain control over customer information, security, development, and other critical issues, when work is performed distantly or under completely different laws. The opportunities for errors, mischief, and serious misdeeds are alarming, to say the least. Businesses and governments need to carefully consider the manners in which outsourcing can be reasonably exploited, and how it must be controlled.

    Lauren Weinstein
    Tel: +1 (818) 225-2800
    Co-Founder, PFIR - People For Internet Responsibility
    Co-Founder, URIICA - Union for Representative International Internet Cooperation and Analysis
    Moderator, PRIVACY Forum
    Member, ACM Committee on Computers and Public Policy

    [Note added by PGN: Bob Herbert's Op-Ed column on offshore outsourcing, Education is No Protection, in The New York Times, 26 Jan 2004, has this poignant paragraph: ``You want a national security issue? Trust me, this threat to the long-term U.S. economy is a big one. Why it's not a thunderous issue in the presidential campaign is beyond me.'' But PGN notes that this is not just a U.S. issue. It is multinational.]


    Inside Risks 163, CACM 47, 1, January 2004

    Believing in Myths

    Marcus J. Ranum

    We love our myths. In fact, we get downright grumpy when they are challenged. Unfortunately, in order to make progress, we sometimes need to overcome our dearly-held perceptions, confront hard reality, and try to come up with new approaches grounded in that reality. In computer security, we don't do a particularly good job of that. In fact, we often fall into a deadly cycle I like to call ``If something doesn't work, try it harder''. Rather than rethinking our approaches, we try to squeeze additional efficiencies out of methods that are already of questionable effectiveness.

    Let's take for example the problem of computer viruses. The first-generation anti-virus products required users to periodically update their knowledge bases. Consequently, viruses got past them fairly often. Users, we discovered, want to install software and forget about it. In response, the current generation of anti-virus products automatically update themselves thanks to the now-ubiquitous Internet connection. This overcame the problem of user forgetfulness, but did it solve the virus problem? If something doesn't work, try it harder. I'm not saying that anti-virus products are bad, but if you look at the overall approach implemented in this model, it's a ``fail open'' design. The default behavior of virtually all desktop operating systems favors the user's ability to execute any software, regardless of its provenance! You might think that ``fail closed'' would make more sense! Why haven't we fully explored the option of having the system run only code that has been authorized, and preventing everything else? It might be a bit of a paradigm shift, but it's fundamentally a more solid model. In fact, some early anti-virus products did implement fail-closed execution, but they weren't popular because they were too restrictive. Now, perhaps, we wonder if our execution environments are restrictive enough.
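
    The fail-closed execution model suggested here can be sketched as a hash-based allowlist (a minimal illustration under assumed details: SHA-256 digests as the authorization tokens, and invented program names). Anything not explicitly authorized is refused by default.

```python
import hashlib

AUTHORIZED = set()  # digests of code the administrator has explicitly approved

def authorize(program: bytes) -> None:
    AUTHORIZED.add(hashlib.sha256(program).hexdigest())

def try_execute(program: bytes) -> str:
    # Fail closed: unknown code is refused; there is no "default allow" path.
    if hashlib.sha256(program).hexdigest() not in AUTHORIZED:
        return "DENIED"
    return "RUN"

installer = b"signed, vetted application"
mystery_attachment = b"free screensaver.exe"

authorize(installer)
print(try_execute(installer))           # RUN
print(try_execute(mystery_attachment))  # DENIED: never authorized
```

    The design burden shifts from enumerating badness (an endless signature race) to enumerating goodness, which is the restrictiveness users have historically resisted.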

    Then there's the myth of the network firewall. First-generation firewalls were extremely restrictive devices that passed only a few kinds of traffic. Generally the traffic was only what could safely be transferred across a network boundary. Customers didn't like them because they were too "restrictive", and replaced them with more permissive devices that allowed fat streams of HTTP traffic back and forth. And, for the last 10 years, it's been open season on corporate networks. Now, a new generation of content-filtering application gateways (``firewalls'' by any other name are still firewalls) is coming into vogue, and they're re-implementing the original fail-closed firewall design. First-generation firewalls were too restrictive but, as it turns out, they were restrictive enough.
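
    The contrast between the two firewall generations reduces to where the default lands when no explicit rule matches. The sketch below (an invented miniature ruleset, not any real firewall's configuration language) shows that the two postures agree on the ports someone thought about and differ on everything nobody anticipated.

```python
# Each posture is (explicit rules, default policy); rules map port -> action.
def decide(ruleset, default, port):
    return dict(ruleset).get(port, default)

FAIL_CLOSED = ([(25, "allow"), (53, "allow")], "deny")   # first-generation stance
FAIL_OPEN   = ([(135, "deny"), (445, "deny")], "allow")  # permissive successor

for port in (25, 80, 31337):
    print(port, decide(*FAIL_CLOSED, port), decide(*FAIL_OPEN, port))
# The unanticipated port (31337) is where the postures diverge:
# fail-closed denies it, fail-open waves it through.
```

    Every enumerated rule is a conscious decision either way; it is the default that decides the fate of the traffic nobody foresaw.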

    However, in the meantime we do ourselves incredible damage by believing in the myths of goodness we attach to these products and approaches. ``If we just make it a little faster'' or ``If we just add a few more rules and signatures'' blinds us to the fact that we've chosen to try to walk a path of fail-open in a world where safety comes from failing closed. In a recent SANS Newsbytes posting, I recall reading 5 listings of ``important network blah gets taken off the air by worm blah'' -- a distressing indication that we're believing our myths too well. It's not just a matter of tweaking our firewall rules -- some networks should not have firewalls at all: they should not be connected to other networks under any circumstance. It's not a matter of trying harder; sometimes it's a matter of not trying at all. As an industry we invest massive amounts of time and effort in patching security flaws in Internet-facing applications to keep hackers at bay; perhaps it's time to consider using server applications that are designed for security on our Internet-facing systems, instead of convenient off-the-shelf shovelware. No amount of patching can turn software that was never designed to be secure into secure software -- yet we invest so much time and effort in doing so that we could have easily amortized the up-front cost of buying the right tools for the job in the first place.

    So what should we do? When you run across a problem area where you see generations of incremental improvements, question the underlying assumptions of the approaches that are in use. Ask yourself ``what fundamental approaches yield a solution?'' Don't just pursue the next incremental improvement to the current approach if it distracts you from solving the underlying problem. As I look back at the computer security industry and all the products that have come and gone within it, I realize that most of them are band-aids that promise us a myth of open access with absolute security. I'm beginning to realize that a better approach is to try to figure out the fundamentals, then address the human factors that make doing things right unpalatable for our end users. How can we make it easier to run a truly restrictive firewall? How can we make it easier to run desktop software that offers only restricted execution? How can we make it easier to run mission-critical networks that are disconnected from each other? Most of the interesting problems in computer security happen because doing it right is inconvenient. That, perhaps, is the most damaging myth of security: that security has to be inconvenient. Let's explode that myth!

    Marcus Ranum is a long-time computer security industry insider and the author of The Myth of Homeland Security, Wiley, 2004.