These are the two primary sections on computer-aided voting in Computer-Related Risks, Addison-Wesley, 1995. More recent book material is on my Website, http://www.csl.sri.com/neumann.

1 Risks in Computer-Based Elections

Errors and alleged fraud in computer-based elections have been recurring Risks Forum themes. The state of the computing art continues to be primitive. Punch-card systems are seriously flawed and easily tampered with, but are still in widespread use. Direct recording equipment is also suspect, with no ballots, no guaranteed audit trails, and no real assurances that votes cast are properly recorded and processed. Computer-based elections are being run or considered in many countries, including some countries notorious for past riggings. The risks discussed here exist worldwide.

1.1 Erroneous Election Results, Presumably Accidental

Computer-related errors occur with alarming frequency in elections. In 1989, there were reports of uncounted votes in Toronto and doubly counted votes in Virginia and in Durham, North Carolina. Even the U.S. Congress had difficulties when 435 Representatives tallied 595 votes on a Strategic Defense Initiative measure. An election in Yonkers, New York, was reversed because of the presence of leftover test data that accumulated into the totals. Alabama and Georgia also reported irregularities. After a series of mishaps, Toronto has abandoned computer-based elections altogether. Most of these cases were attributed to "human error" rather than to "computer error," and were presumably due to operators and not to programmers; however, in the absence of dependable accountability, who can tell?

In 1992, there were numerous further goofs. In Yamhill County, Oregon, votes for the District Attorney candidates were reversed. Because the computer program assumed the candidates were in alphabetical order, a two-to-one landslide went to the wrong candidate (at least until the error was caught).

In Ventura County, California, yes and no votes were reversed on all the state propositions, although the resulting errors did not alter the statewide outcomes.

Ray Todd Stevens reported on voting machines that were misaligned so that it was possible to vote for Bush but not for Clinton. (A voting official said that "they only deal with the totals and it would all average out.")

1.2 Election Fraud

If wrong results can occur accidentally, they can also happen intentionally. Rigging has been suspected in various elections. In other cases, fraud might easily have taken place. Numerous experts have attested to the ease with which close elections could be rigged. However, lawsuits have been unsuccessful, particularly given the absence of trustworthy audit trails.

The opportunities for rigging elections are manifold, including the installation of trapdoors and Trojan horses -- child's play for vendors and knowledgeable election officials. Checks and balances are mostly placebos, and are easily subverted. Incidentally, Ken Thompson's oft-cited Turing address [10] reminds us that tampering can occur even without any source-code changes; thus, code examination is not sufficient to guarantee the absence of Trojan horses.

For many years in Michigan, manual system overrides were necessary to complete the processing of noncomputer-based precincts, according to Lawrence Kestenbaum.

Doug Hardie cited a personal experience in observing a flagrant case of manual intervention with a punched-card voting system at San Jose State University -- albeit in only a small election. (He also had a cute comment on the linguistic distinction between the election machine being fixed and its being repaired.)

In the St. Petersburg, Florida, mayoral election on March 23, 1993, some computer printouts "released inadvertently by election officials showed 1429 votes had been cast in Precinct 194 [which has zero registered voters], raising questions of vote tampering." This anomaly was explained as a compilation of "summary votes" that were legitimate but that had been erroneously allocated in copying the results from one computer system to another. Coincidentally (?), the margin of victory was 1425 votes. A subsequent recount seemed to agree with the original totals.(1) On the other hand, controversy continued for months afterward. By the way, a consistent Trojan horse somewhere in the process would of course give reproducible results; the concurrence of a recount is by itself not enough.

In addition to various cases of fraud having been alleged in the use of electronic voting systems (see, for example, Ronnie Dugger's New Yorker article [1]), paper-ballot systems are of course vulnerable as well. A recent example involves Costilla County in Colorado, where absentee ballots had been cast in the names of ineligible voters (friends, relatives, and dead persons) as far back as 1984, and had been decisive. However, that hardly seems like news here. The conclusion is that, in principle, all voting systems are easy to rig -- given appropriate access to whatever it takes (computer systems, bogus paper ballots, and so on). Trojan horses can in essence be arbitrarily subtle.

John Board at Duke University expressed surprise that it took more than 1 day for the doubling of votes to be detected in eight Durham precincts. Lorenzo Strigini reported in November 1989 on a read-ahead synchronization glitch and an operator pushing for speedier results, which together caused the computer program to declare the wrong winner in a city election in Rome, Italy (SEN 15, 1). Many of us have wondered how often errors or frauds have remained undetected.

1.3 Reduction of Risks in Elections

The U.S. Congress has the constitutional power to set mandatory standards for federal elections, but has not yet acted. Existing standards for designing, testing, certifying, and operating computer-based vote-counting systems are inadequate and voluntary, and provide few hard constraints, almost no accountability, and no independent expert evaluations. Vendors can hide behind a mask of secrecy with regard to their proprietary programs and practice, especially in the absence of controls. Poor software engineering is thus easy to hide. Local election officials are typically not sufficiently computer-literate to understand the risks. In many cases, the vendors run the elections. (See [7].)

Providing sufficient assurances for computer-based election integrity is an extremely difficult problem. Serious risks will always remain, and some elections will be compromised. The alternative of counting paper ballots by hand is not promising, and is not likely to be any less subvertible. But we must question more forcefully whether computer-based elections are really worth the risks. If they are, we must determine how to impose more meaningful constraints. Section 2 considers what might be done to avoid some of the problems discussed here.(2)

 

2 Techniques for Increasing Security

Security in computer systems and networks requires enormous attention throughout the requirements, design, and implementation stages of development, and throughout potentially every aspect of system operation and use. Because security is a weak-link phenomenon, considerable effort must be invested in avoiding the vulnerabilities discussed earlier in this book. Easy answers are almost never available in the large, although occasionally an isolated problem may have a simple solution in the small.

Appropriate techniques for increasing security include the structuring approaches noted earlier, especially those that apply the principle of separation of concerns (duties, privileges, etc.). They also include essentially all of the techniques of system and software engineering summarized earlier. But perhaps most important is an awareness throughout of the types of vulnerabilities that must be avoided, of which this book provides copious examples.

The following subsection addresses a portion of the general problem of increasing security by considering a particular application: computer-based voting. Although that application is not completely representative, it serves to provide specific focus. After that, the section concludes with general considerations and a discussion of authentication.

2.1 An Example: Computer-Based Voting Systems

We begin by reconsidering the problems of attaining reliability, security, and integrity in computer-based voting, summarized in Section 1.3.(3) These problems are, in many respects, illustrative of the more general security and reliability problems discussed throughout this book. Thus, we can derive considerable insight from taking a specific application such as voting and attempting to characterize approaches to avoiding or minimizing the problems.

2.1.1 Criteria for Computer-Based Voting

At present, there is no generally accepted standard set of criteria that voting systems are required to satisfy. Previous attempts to define criteria specifically for voting systems, such as [6, 9], the Federal Election Commission voluntary guidelines, and the New York City requirements,(4) are inadequate, because they fail to encompass many of the possible risks that must ultimately be addressed. Indeed, existing security criteria such as the U.S. TCSEC (the Orange Book, TNI, TDI, etc.), the European ITSEC, the Canadian CTCPEC, and the draft U.S. Federal Criteria are also inadequate, for the same reason. Furthermore, essentially all existing voting systems would fail to satisfy even the simplest of the existing criteria. The risks lie not only in the inherent incompleteness of those criteria but also in the intrinsic unrealizability of such criteria. Nevertheless, such criteria can be very useful.

Generic criteria for voting systems are suggested here; the trustworthiness, robustness, and assurance properties that such criteria must address are examined in the subsections that follow.

2.1.2 Realizability of Criteria

No set of criteria can completely encompass all the possible risks. However, even if we ignore the incompleteness and imprecision of the suggested criteria, numerous intrinsic difficulties make such criteria unrealizable with any meaningful assurance.

2.1.3 System Trustworthiness

System trustworthiness depends on design, implementation, and operation.

Security vulnerabilities are ubiquitous in existing computer systems, and also inevitable in all voting systems -- including both dedicated and operating-system-based applications. Vulnerabilities are particularly likely in voting systems developed inexpensively enough to find widespread use. Evidently, no small kernel can be identified that mediates security concerns, and thus potentially the entire system must be trustworthy.

System operation is a serious source of vulnerabilities, with respect to integrity, availability, and, in some cases, confidentiality -- even if a system as delivered appears to be in an untampered form. A system can have its integrity compromised through malicious system operations -- for example, by the insertion of Trojan horses or trapdoors. The presence of a superuser mechanism presents many opportunities for subversion. Furthermore, Trojan horses and trapdoors are not necessarily static; they may appear for only brief instants of time, and remain totally invisible at other times. In addition, systems based on personal computers are subject to spoofing of the system bootload, which can result in the seemingly legitimate installation of bogus software. Even in the presence of cryptographic checksums, a gifted developer or subverter can install a flaw in the system implementation or in the system generation. The stealthy Trojan horse technique of Ken Thompson's Turing lecture [10] illustrates that such perpetrations can be carried out without any modifications to source code.
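
For concreteness, the following present-day Python sketch (not from the original text) shows how a published cryptographic checksum might be verified; SHA-256 stands in for whatever function a vendor might use. It illustrates the limitation just described as much as the technique: the result is only as trustworthy as the system that computes it.

    import hashlib
    import hmac

    def file_digest(path, algorithm="sha256"):
        """Compute a cryptographic checksum of a file in fixed-size chunks."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_distribution(path, published_hex):
        # A match shows the file equals what the publisher hashed -- but if
        # the verifying system (or the system-generation process) is itself
        # subverted, a "match" proves nothing, which is the point made above.
        return hmac.compare_digest(file_digest(path), published_hex)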

System integrity can be enhanced by the use of locally nonmodifiable read-only and once-writable memories, particularly for system programs and preset configuration data, respectively.

Data confidentiality, integrity, and reliability can be subverted as a result of compromises of system integrity. Nonalterable (for example, once-writable) media may provide assistance in ensuring integrity, but not if the system itself is subvertible.

Voter anonymity can be achieved by masking the identity of each voter so that no reverse association can be made. However, such an approach makes accountability much more difficult. One-way hashing functions or even public-key encryption may be useful for providing later verification that a particular vote was actually recorded as cast, but no completely satisfactory scheme exists for guaranteeing voter anonymity, consistency of the votes tabulated with respect to those cast, and correct results. Any attempt to maintain a bidirectional on-line association between voter and votes cast is suspect because of the inability to protect such information in this environment.
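
As an illustration of the one-way-hash idea, consider the following hypothetical sketch (the protocol and names are illustrative assumptions, not a scheme from the text). The system publishes only a digest of the ballot combined with a voter-held random nonce; the voter can later confirm that the vote was recorded as cast, and the published list does not by itself reveal any ballot. Consistent with the caveats above, the receipt creates its own risks -- for example, it could facilitate vote selling.

    import hashlib
    import secrets

    def make_receipt(ballot: str):
        """Voter keeps the nonce privately; only the digest is published."""
        nonce = secrets.token_hex(16)                  # voter's secret
        digest = hashlib.sha256(f"{nonce}|{ballot}".encode()).hexdigest()
        return nonce, digest

    def verify_receipt(published_digests, nonce: str, ballot: str) -> bool:
        """Later, the voter checks that the vote was recorded as cast."""
        digest = hashlib.sha256(f"{nonce}|{ballot}".encode()).hexdigest()
        return digest in published_digests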

Operator authentication that relies on sharable fixed passwords is too easily compromised, in a wide variety of ways such as those noted earlier in this book. Some other type of authentication scheme is desirable, such as a biometric or token approach, although even those schemes themselves have recognized vulnerabilities.

System accountability can be subverted by embedded system code that operates below the accounting layers or by other low-layer trapdoors. Techniques for permitting accountability despite voter anonymity must be developed, although they must be considered inherently suspect. Read-only media can help to ensure nontamperability of the audit trail, but nonbypassability requires a trusted system for data collection. Accountability can be subverted by tampering with the underlying system, below the layer at which auditing takes place. (See also [3].)
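
One widely known tamper-evidence technique that fits this discussion is a hash-chained audit trail, sketched below (an illustration, not a design from the text). It addresses nontamperability -- any after-the-fact edit breaks every subsequent link -- but not nonbypassability, because code operating below the logging layer can simply decline to append a record.

    import hashlib
    import json

    class HashChainLog:
        """Append-only audit trail: each record carries a hash linking it to
        its predecessor, so altering any past record breaks all later links."""

        GENESIS = "0" * 64

        def __init__(self):
            self.records = []          # list of (entry_json, link_hash) pairs
            self._last = self.GENESIS

        def append(self, event: dict) -> None:
            body = json.dumps(event, sort_keys=True)
            link = hashlib.sha256((self._last + body).encode()).hexdigest()
            self.records.append((body, link))
            self._last = link

        def verify(self) -> bool:
            """Recompute the chain; False means the trail was tampered with."""
            h = self.GENESIS
            for body, link in self.records:
                h = hashlib.sha256((h + body).encode()).hexdigest()
                if h != link:
                    return False
            return True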

System disclosability is important because proprietary voting systems are inherently suspect. However, system inspection is by itself inadequate to prevent stealthy Trojan horses, run-time system alterations, self-modifying code, data interpreted as code, other code or data subversions, and intentional or accidental discrepancies between documentation and code.

2.1.4 System Robustness

System robustness depends on many factors.

System availability can be enhanced by various techniques for increasing hardware-fault tolerance and system security. However, none of these techniques is guaranteed.

System reliability is aided by properly used modern software-engineering techniques, which can result in fewer bugs and greater assurance. Analysis techniques such as thorough testing and high-assurance methods can contribute. Nevertheless, some bugs are likely to remain.

Use of redundancy can in principle improve both reliability and security. It is tempting to believe that checks and balances can help to satisfy some of the criteria. However, we rapidly discover that the redundancy management itself introduces further complexity and further potential vulnerabilities. For example, triple-modular redundancy could be contemplated, providing three different systems and accepting the results if two out of three agree. However, a single program flaw or a Trojan horse can compromise all three systems. Similarly, if three separately programmed systems are used, it is still possible for common-fault-mode mistakes to be made (there is substantial evidence for the likelihood of that occurring) or for collusion to compromise two of the three versions. Furthermore, the systems may agree with one another in the presence of bogus data that compromises all of them. Thus, both reliability and security techniques must provide end-to-end protection, and must check on each other.
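
As a concrete illustration of the two-out-of-three comparison, here is a present-day Python sketch (not from the text; the sample totals echo the St. Petersburg figures purely as test data). It also shows why the technique is insufficient: identical wrong answers from a shared flaw pass the check silently.

    from collections import Counter

    def majority_tally(tallies):
        """Accept a total reported identically by at least two of three
        independently produced tallies; otherwise force a controlled rerun.
        Note: a flaw or Trojan horse common to all three defeats the check."""
        result, agreeing = Counter(tallies).most_common(1)[0]
        if agreeing >= 2:
            return result
        raise RuntimeError("no two tallies agree; manual investigation required")

    print(majority_tally([1425, 1425, 1429]))   # -> 1425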

In general, Byzantine algorithms can be constructed that work adequately even in the presence of arbitrary component failures (for example, due to malice, accidental misuse, or hardware failure). However, such algorithms are expensive to design, implement, and administer, and introduce substantial new complexities. Even in the presence of algorithms that are tolerant of n failed components, collusion among n+1 components can subvert the system. Moreover, those algorithms may be implemented using systems that have single points of vulnerability, which could permit compromises of the Byzantine algorithm to occur without n failures having occurred; indeed, one may be enough. Thus, complex systems designed to tolerate certain arbitrary threats may still be subvertible through exploitation of other vulnerabilities.
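
To make the collusion arithmetic concrete: writing f for the number of arbitrary faults to be tolerated (the n of the preceding paragraph), the classical Byzantine-agreement bound of Lamport, Shostak, and Pease requires at least 3f + 1 participants. A tiny sketch, with illustrative helper names:

    def replicas_required(f: int) -> int:
        """Classical Byzantine-agreement bound: tolerating f arbitrary
        (possibly malicious) participants requires at least 3f + 1 in all."""
        return 3 * f + 1

    def tolerable_faults(n: int) -> int:
        """Largest f such that n >= 3f + 1."""
        return (n - 1) // 3

    assert replicas_required(1) == 4    # 4 components ride out 1 traitor...
    assert tolerable_faults(4) == 1     # ...but 2 colluding traitors subvert them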

Interface usability is a secondary consideration in many fielded systems. Complicated operator interfaces are inherently risky, because they induce accidents and can mask hidden functionality. However, systems that are particularly user friendly could be even more amenable to subversion than those that are not.

Correctness is a mythical beast. In reliable systems, a probability of failure of 10^-4 or 10^-9 per hour may be required. However, such measures are too weak for voting systems. For example, a 1-bit error in memory might result in the loss or gain of 2^k votes (for example, 2^10 = 1,024 or 2^16 = 65,536). Ideally, numerical errors attributable to hardware and software must not be tolerated, although a few errors in reading cards may have to be accepted within narrow ranges -- for example, because of hanging chads. Efforts must be made to detect errors attributable to the hardware through fault-tolerance techniques or software consistency checks. Any detected but uncorrectable errors must be monitored, forcing a controlled rerun. However, a policy that permits any detected inconsistencies to invalidate election results would be dangerous, because it might encourage denial-of-service attacks by the expected losers. Note also that any software-implemented fault-tolerance technique is itself a possible source of subversion.
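
The arithmetic of the 2^k point can be seen directly, assuming a plain binary vote counter (the starting total of 401 is arbitrary):

    def flip_bit(counter: int, k: int) -> int:
        """A single flipped bit k changes a binary counter by exactly 2^k."""
        return counter ^ (1 << k)

    print(flip_bit(401, 10) - 401)    # ->  1024  (2^10)
    print(flip_bit(401, 16) - 401)    # -> 65536  (2^16)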

2.1.5 System Assurance

High-assurance systems demand discipline and professional maturity not previously found in commercial voting systems (and, indeed, not found in most commercial operating systems and application software). High-assurance systems typically cost considerably more than conventional systems in the short term, but have the potential for payoff in the long term. Unless the development team is exceedingly gifted, high-assurance efforts may be disappointing. As a consequence, there are almost no incentives for any assurance greater than the minimal assurance provided by lowest-common-denominator systems. Furthermore, even high-assurance systems can be compromised, via insertion of trapdoors and Trojan horses and operational misuse.

2.1.6 Conclusions on Realizability

The primary conclusion from this discussion of realizability is that certain criteria elements are inherently unsatisfiable because assurance cannot be attained at an acceptable cost. Systems could be designed that would be operationally less amenable to subversion. However, some of those will still have modes of compromise that require no collusion. Indeed, the actions of a single person may be sufficient to subvert the process, particularly if preinstalled Trojan horses or operational subversion can be used. Thus, whereas it is possible to build better systems, those better systems can still be subverted. Consequently, there will always be questions about the use of computer systems in elections. In certain cases, sufficient collusion will be plausible, even if conspiracy theories do not hold.

There is a serious danger that the mere existence of generally accepted criteria coupled with claims that a system adheres to those criteria might give the naive observer the illusion that an election is nonsubvertible. Doubts will always remain that some of the criteria have not been satisfied with any realistic measure of assurance and that the criteria are incomplete.

Finally, a fundamental dilemma must be addressed. On one hand, computer systems can be designed and implemented with extensive checks and balances intended to make accidental mishaps and fraud less likely. As an example pursuing that principle, New York City is embarking on a modernization program that will attempt to separate the processes of voting, vote collection, and vote tallying from one another, with redundant checks on each. The New York City Election Project hopes to ensure that extensive collusion would be required to subvert an election, and that the risks of detection would be high. However, that effort permits centralized vote tallying, which has the potential for compromising the integrity of the earlier stages.

On the other hand, constraints on system development efforts and expectations of honesty and altruism on the part of system developers seem to be generally unrealistic; the expectations on the operational practice and human awareness required to administer such systems also may be unrealistic.

We must avoid lowest-common-denominator systems, and instead try to approach the difficult goal of realistic, cost-effective, reasonable-assurance, fail-safe, and nontamperable election systems.

Vendor-embedded Trojan horses and accidental vulnerabilities will remain as potential problems, for both distributed and centralized systems. The principle of separation is useful, but must be used consistently and wisely. The use of good software engineering practice and extensive regulation of system development and operation are essential. In the best of worlds, even if voting systems were produced with high assurance by persons of the highest integrity, the operational practice could still be compromisible, with or without collusion. Vigilance throughout the election process is simply not enough to counter accidental and malicious efforts that can subvert the process. Residual risks are inevitable.

Although this section considers election systems, many of the concepts presented here are relevant to secure systems in general.

 

2.2 Criteria for Authentication

The example of electronic voting machines is not entirely representative of the fully general security problem in several respects. For example, the "end-users" (the voters) normally do not have direct access to the underlying systems, and the system operators are generally completely trusted. Nevertheless, the example demonstrates the need for overcoming many of the vulnerabilities noted throughout this book.

One type of vulnerability that at present lies specifically outside the purview of the computer systems in the voting application is the general problem of authenticating users and systems. In this section, we consider techniques for user authentication in the broad context of distributed and networked open systems, such as all of those systems connected to the Internet or otherwise available by dial-up connections.

Risks inherent in reusable passwords were considered earlier. We consider here alternatives that considerably reduce those risks. We also characterize the overall system requirements for authentication, including the mechanisms and their embedding into systems and networks.

Various approaches and commercially available devices exist for one-time passwords, including smart cards, randomized tokens, and challenge-response schemes. For example, a hand-held smart card can generate a token that can be recognized by a computer system or authentication site, where the token is derived from a cryptographic function of the clock time and some initialization information, and where a personal identification number (PIN) is required to complete the authentication process. Some devices generate a visually displayed token that can be entered as a one-time password, whereas others provide direct electronic input. These devices typically use one-key symmetric cryptographic algorithms such as the Data Encryption Standard (DES) or two-key asymmetric algorithms such as the RSA public-key algorithm, with public and private keys. Less high-tech approaches merely request a specified item in a preprinted codebook of passwords.
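
For concreteness, here is a present-day sketch of the clock-plus-secret computation. HMAC with SHA-1 (in the style of the later one-time-password standards) stands in for the DES-based functions used by the devices of that era; the 30-second interval and 6-digit display width are illustrative assumptions.

    import hashlib
    import hmac
    import struct
    import time

    def one_time_token(secret: bytes, interval: int = 30, digits: int = 6) -> str:
        """Derive a short-lived token from a shared secret and the clock;
        the authentication site recomputes the same value from its own copy
        of the secret and compares."""
        counter = int(time.time()) // interval          # coarse clock reading
        msg = struct.pack(">Q", counter)
        mac = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                         # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)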

Following is a basic set of idealized requirements relating to secure authentication.  

These requirements must be satisfied by systems involved in authentication. They are applicable to one-time passwords. They are also generally applicable to biometric authentication techniques, such as matching of fingerprints, handwriting, voiceprints, retina patterns, and DNA. However, even if these requirements are completely satisfied, some risks can still remain; weak links in the computer systems or in personal behavior can undermine even the most elaborate authentication methods. In any event, we need much more stringent authentication than that provided by reusable passwords.(6)

 

References

 [1]
R. Dugger. Annals of democracy (voting by computer). New Yorker, November 7, 1988.
 [2]
L. Gong, T.M.A. Lomas, R.M. Needham, and J.H. Saltzer. Protecting poorly chosen secrets from guessing attacks. IEEE Journal on Selected Areas in Communications, 11(5):648-656, June 1993.
 [3]
G.L. Greenhalgh. Security and auditability of electronic vote tabulation systems: One vendor's perspective. In Proceedings of the Sixteenth National Computer Security Conference, pages 483-489, Baltimore, Maryland, 20-23 September 1993. NIST/NCSC.
 [4]
R. Mercuri. Threats to suffrage security. In Proceedings of the Sixteenth National Computer Security Conference, pages 474-477, Baltimore, Maryland, 20-23 September 1993. NIST/NCSC.
 [5]
P.G. Neumann. Security criteria for electronic voting. In Proceedings of the Sixteenth National Computer Security Conference, pages 478-482, Baltimore, Maryland, 20-23 September 1993. NIST/NCSC.
 [6]
R.G. Saltman. Accuracy, integrity, and security in computerized vote-tallying. Technical report, National Bureau of Standards (now NIST) special publication, Gaithersburg, Maryland, 1988.
 [7]
R.G. Saltman. Assuring accuracy, integrity and security in national elections: The role of the U.S. Congress. In Computers, Freedom and Privacy '93, pages 3.8-3.17, March 1993.
 [8]
R.G. Saltman. An integrity model is needed for computerized voting and similar systems. In Proceedings of the Sixteenth National Computer Security Conference, pages 471-473, Baltimore, Maryland, 20-23 September 1993. NIST/NCSC.
 [9]
M. Shamos. Electronic voting: Evaluating the threat. In Computers, Freedom and Privacy '93, pages 3.18-3.25, March 1993.
 [10]
K. Thompson. Reflections on trusting trust. Communications of the ACM, 27(8):761-763, August 1984.

Footnotes

 
(1)
See John Stebbins, "Fischer Victory Upheld," St. Petersburg Tribune, April 6, 1993, Florida/Metro section.  
(2)
The Virginia, Durham, Rome, Yonkers, and Michigan cases were discussed in SEN 15, 1, 10-13. The 1992 cases are documented in SEN 18, 1, 15-19. For further background, see a report by Roy G. Saltman [6], and a subsequent paper [7]. Saltman also contributed an on-line piece on the role of standards, in SEN 18, 1, 17. Four papers from the Proceedings of the 16th National Computer Security Conference are relevant, by Saltman [8], Rebecca Mercuri [4], Peter Neumann [5], and Gary L. Greenhalgh [3]. See also publications by two nongovernmental organizations, Computer Professionals for Social Responsibility (P.O. Box 717, Palo Alto, CA 94302, including a voluminous collection of study items) and Election Watch (a project spearheaded by Mae Churchill of the Urban Policy Research Institute, 530 Paseo Miramar, Pacific Palisades, CA 90272).
(3)
Section 2.1 is derived from [5].  
(4)
Electronic Voting System, Request for Proposal, Appendix G, Security and Control Considerations, New York City Board of Elections, New York City Elections Project, September 1987.  
(5)
This statement can also be applied to persons employed by state lotteries.  
(6)
A client-server authentication scheme has been proposed that encrypts seemingly randomized data with a fixed password [2]. This scheme overcomes a potential weakness in Kerberos -- namely, that a key derived from a fixed password may be compromisible by off-line guessing. The complexity of this scheme is comparable to the use of cryptographic tokens, and may have advantages over certain token implementations.