Critique of "Analysis of an Electronic Voting System" document
by Rebecca Mercuri, July 24, 2003

Although I am a strong critic of self-auditing voting systems, the paper "Analysis of an Electronic Voting System" by Kohno, Stubblefield, Rubin and Wallach makes claims and conjectures that are inconsistent with existing election and programming practices.  Nevertheless, there may be important lessons to be learned by examining their analysis.

The paper begins by claiming that problems experienced in Florida in 2000 "have led to increasingly widespread adoption of direct recording electronic (DRE) voting systems." In actual fact, the trend toward DREs had begun prior to the 2000 election, with their use rising from 3.7% of counties in 1992 to 8.9% in 2000.  Over the same period, lever machines declined from 25.4% of counties to 14.7%, and punch-cards from 23.5% to 19.2%.  Punch-cards were not as "widely used" as the authors claim -- in fact, in 2000 the most commonly used systems for casting ballots in the US were (and still are) mark-sense (optically scanned) ballots. [See A Better Ballot Box, R. Mercuri, IEEE Spectrum, October 2002.]  As well, a report by MIT/Caltech following the 2002 election in Florida indicated that fewer problems occurred in those counties that had chosen to replace their systems with optically scanned ballots rather than DREs.

Diebold was one of the manufacturers that obtained contracts from states choosing to replace their voting systems in 2002. The authors of the analysis neglected to mention that ALL voting systems procured prior to the end of 2002 had been certified in accordance with the 1990 FEC and NASED standards, which the FEC had already deemed inadequate.  It is not clear whether counties making such purchases were aware that the 2002 standards would not be applied to "new" systems until 2003.  Even if the analysis is correct in concluding that there were security flaws in the supposed Diebold software, one must note that the lax standards in effect at the time were all that Diebold was expected to comply with, so there may not have been any violation of election laws.  This does not excuse poor security practices on the part of the vendor, but it also places blame on the FEC and NASED, which allowed products to be certified to admittedly obsolete standards.

With regard to Diebold Election Systems, prior to the 2002 fall primary and general elections, problems had already been noted in trials of their voting equipment, significant enough to cause four counties to commission a usability report from the University of Maryland.  [See Usability Review of the Diebold DRE by Bederson and Herrnson.]  Although those UMD researchers were primarily interested in human factors, they felt it necessary to comment that one of the two voting machines they tested did not function properly due to equipment failure.  It is understood that the manufacturer subsequently corrected some problems prior to election use, hence at least one "earlier version" of the code did legitimately exist, as Diebold has stated with regard to the code that was discovered on its website.

Since the entire process of voting system inspection and deployment has been shrouded in trade-secrecy, it is not possible to ascertain (without inside assistance) whether the code analyzed was actually, as the authors surmise, "used in Diebold's AccuVote-TS voting terminal."  They explain that although the code is designed to run on a DRE device, "one can run it on a regular Microsoft Windows computer."  Since the authors did not have access to the actual Diebold equipment, they do not truly know whether there is additional hardware and software that would provide safeguards against some of the security flaws they claim to have found.  As well, the researchers do not know whether any of the code they looked at had actually received certification for use in an election.  Comments found in the files do indicate that the work was in progress.  It is therefore wholly inappropriate for the authors of the analysis to conjecture (unless they are mind-readers or privy to additional information), as they did in Section 6.4, that the programmers did not intend "to go back and remedy all of these issues" or to infer that "one of the developers may have thought that improving the cryptography would be useful, but then got distracted with other business."

Many other comments in the analysis regarding the program code are similarly misleading.  The authors assert that C++ is an unsafe language and that Java and C# are safe ones.  This is untrue.  C++ is still the system development language of choice for many major programming efforts, and Java and C# are not sufficiently better to yet be deemed "safe."  It is certainly possible to create a flawed voting machine in Java, as evidenced by the assignment I had given to my first-semester students at Bryn Mawr College in which they were expected to create one (and they did).  Statements such as "most readers of the code would need to invest significant time to learn the meaning of the various names shown here" do not allow for the possible existence of external data dictionary or function definition documents.  Although good programming practice dictates that code should be commented as it is being developed, we all know that the vast majority of programmers in the real world write code first and document it later, especially when working under severe time constraints.  Other complaints in the analysis include the use of #if rather than #ifdef being potentially confusing to a conjectured "later programmer" and the suggestion that "prudent software engineering would recommend" that conditional compilation be replaced by configuration files -- these are matters of programming style and are not necessarily, in themselves, indicative of such serious programming flaws as the authors later imply.
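For readers unfamiliar with the preprocessor distinction at issue, the following minimal C++ sketch (using a hypothetical feature flag, not drawn from the code under analysis) shows why the two forms behave differently -- and why choosing between them is a stylistic judgment rather than a defect in itself:

    /* Hypothetical example (not drawn from the code under analysis):
       #ifdef tests only whether a macro is defined, while #if evaluates
       its value -- and an undefined macro silently evaluates to 0. */
    #include <iostream>

    #define ENABLE_AUDIT_LOG 0          /* build-time flag, here switched off */

    int main() {
    #ifdef ENABLE_AUDIT_LOG             /* taken: the macro IS defined, even as 0 */
        std::cout << "#ifdef branch compiled in\n";
    #endif

    #if ENABLE_AUDIT_LOG                /* skipped: the value is 0; an undefined
                                           macro would also yield 0 here, with
                                           no diagnostic */
        std::cout << "#if branch compiled in\n";
    #endif
        return 0;
    }

Either form is legal C++; a "later programmer" could misread either one, and replacing such build-time flags with a run-time configuration file, as the analysis recommends, is a design trade-off rather than a security requirement.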

A misleading statement is found in Section 6.3 of the analysis -- "due to the lack of comments, the legacy nature of the code, and the use of third-party code and operating systems...this increases the chances that bugs exist in the code, but it also implies that any of the coders could insert a malicious backdoor into the system."  According to current computer science theory, even WITH well-commented, non-legacy, original code and operating systems, bugs can and do exist, and a malicious backdoor could be inserted or later exploited in ANY computer-based system.  Better coding may enhance a computer product, but it does not in any way guarantee its security. [See Ken Thompson's 1984 Turing Award lecture, Reflections on Trusting Trust.] This is why independent auditability is ALWAYS necessary, such as could be provided for election systems by voter-verified paper ballots.  The FEC specifically allows COTS third-party products (such as the audio and operating system modules) to be incorporated into voting systems WITHOUT inspection.  This is a serious flaw in the new (2002) voting system standard, but it was and is permissible in Diebold products because the FEC has not yet changed its stance, despite this vulnerability having been pointed out to it by numerous security experts.  The authors' criticism of Diebold for a practice that the FEC and NASED have decided to condone is thus misdirected.

Some comments in the analysis also indicate a lack of understanding of election procedures, protocols, and laws.  For example, there is a lengthy discussion regarding the possibility of reverse engineering and creating bogus voter or administrator smartcards in order to insert additional votes into the system. Although the scenario described is possible, it involves considerably more effort than would be necessary to simply collude with poll workers and create a few extra bogus voters at the end of the day using the legitimate equipment.  This is no different from adding ballots to a ballot box or ringing up a few extra votes on a lever machine when nobody is looking.  In actual practice, if a precinct's vote totals exceed the number of voters who cast ballots there, the entire precinct's ballots can be required to be omitted from the final result totals.  This is why paper ballot boxes are transparent, so that "stuffing" is more readily evident, and why voting systems in many states are required to have an externally visible "ballot number" that is checked throughout the day to ensure that it matches the number of voters who have signed the registry.  It is these procedural controls, applied by bipartisan poll workers and election officials, that should and do provide the checks and balances needed to ensure the validity of the balloting system.  The inherent problem with DREs is not so much that their code is vulnerable, but that the entire process is invisible, making it impossible to ascertain that the ballots tabulated by the system are precisely and only those that have been put there by the voters. It does not matter whether there are 1,000 or 1,000,000 lines of well or poorly written code, or whether state-of-the-art encryption technology is used -- since DRE vote totals are not independently auditable, there can be no way to ensure their correctness.  Another problem with DREs is that an inherent or deliberate system flaw can affect an election outcome far more broadly than was heretofore possible with manual technologies.
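The reconciliation check described above is simple enough to state exactly.  The following sketch (hypothetical names and figures, not drawn from any vendor's code) shows the end-of-day comparison that bipartisan poll workers perform openly with paper records, and that a DRE, if it performs it at all, performs invisibly:

    /* Illustrative sketch of the precinct reconciliation check; all names
       and numbers here are hypothetical. */
    #include <iostream>

    // The machine's public ballot counter should equal the number of
    // voters who signed the precinct registry.
    bool totals_reconcile(unsigned ballots_recorded, unsigned voters_signed_in) {
        return ballots_recorded == voters_signed_in;
    }

    int main() {
        unsigned ballots_recorded = 412;   // read from the public counter
        unsigned voters_signed_in = 410;   // counted from the poll book
        if (!totals_reconcile(ballots_recorded, voters_signed_in)) {
            std::cout << "Discrepancy: precinct results may be set aside "
                         "pending investigation.\n";
        }
        return 0;
    }

The comparison itself is trivial; the point is that with transparent ballot boxes and visible counters its inputs can be observed by anyone in the room, whereas inside a DRE both the counter and the comparison are beyond independent observation.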

Because we may never know whether the alleged Diebold code was actually part of a certified voting system, the proffered analysis may be no more than an academic exercise. If, on the other hand, such a poorly constructed voting system with numerous security flaws was indeed certified, this should come as no surprise, since Peter Neumann and I have been writing and testifying about this possibility for well over a decade.  The type of system described by the analysis certainly might have satisfied the 1990 voting system standards.  Sadly, the fact remains that there is little in the current legislation or standards to prevent similarly flawed systems from being purchased. Furthermore, there is absolutely no incentive for the vendors to improve their products, especially now that they are being handed billions of dollars for their wares by the Act that was supposed to Help America Vote, while the government has failed to ensure that accepted computer software and security standards, equivalent to those currently applied by the FDA, the FAA, and the DoD, are also mandated for our voting products.