1 Introduction
Electronic voting systems (e-voting) are modernizing electoral processes, improving voter access, and reducing expenses associated with traditional paper-based elections. However, ensuring the security, transparency, and privacy of the election remains a significant challenge. Protocols provide mechanisms and define properties to secure the election process, but the effectiveness of such measures depends directly on the trust model. For example, a protocol may ensure that each voter only votes once by using a central authority. However, this may not be enough to secure the protocol if the central authority can manipulate the results with fraudulent ballots (ballot stuffing) or block users from voting (vote suppression).
Verifiability plays a crucial role in managing trust within electronic voting systems, as it allows stakeholders to independently confirm the integrity of election results (Adida et al., 2009; Küsters et al., 2020), and mitigate voter distrust in the system (Duenas-Cid, 2024). End-to-end verifiability (E2E) helps voters to independently confirm that their votes were cast, recorded, and tallied accurately, fostering transparency and strengthening confidence in the election outcome.
Although it is a standard feature in voting protocols (Bougon et al., 2022; Crimmins et al., 2023; Kremer et al., 2010), recent studies have identified limitations in E2E-based verifiability. Crimmins et al. (2023) argue that eligibility verifiability should be included in E2E. Another limitation is that E2E guarantees the detection of misbehaviour but fails to provide evidence of the detection. Basin et al. (2020) proposed dispute resolution to solve this issue, adapting notions of accountability to e-voting. Disputes arise when a voter claims wrongdoing by an authority, while the authority defends its honesty. Dispute resolution defines how the validity of a claim is decided without the need for a trusted judge. Instead, the dispute must be settled with unambiguous evidence, verifiable by any third party with access only to the public data.
However, little attention has been paid to dispute resolution and the types of disputes that can arise in e-voting. Furthermore, verification and accountability happen, at best, during the tally, which makes them a post-mortem evaluation rather than a proactive process.
This paper proposes DiReCT, a new protocol with low trust requirements for voting authorities, which improves the state of the art in dispute resolution. The design of DiReCT combines secret sharing and blind signatures to provide eligibility verifiability, privacy and a decentralized tally, evolving the SUVS protocol by Larriba and López (2022). We extend it to address disputes caused by the lack of accountability, preventing attacks from corrupt authorities and voters. DiReCT accounts for two disputes introduced by Basin et al. (2020) and considers two new disputes on vote suppression and denial of casting. DiReCT is a multiparty protocol, where casting and tallying are performed by candidates with a conflict of interest (Moran and Naor, 2010). We adapt the original definition of dispute resolution to a multiparty protocol, emphasizing individual accountability to prevent any of the authorities from acting surreptitiously. DiReCT also provides a resolution mechanism for denial of casting, allowing voters to detect and recover from the attack in due time.
The main contributions of the paper can be summarized as follows:
-
• The definition of two new types of disputes: DCert, a dispute related to vote suppression, where the voting authority blocks the voter during the authentication and ballot certification; and DCast, a dispute related to casting, where the voter is blocked from verifying if the casting has been successful.
-
• The design of DiReCT, a multi-party electronic voting protocol with improved dispute resolution, individual accountability and cast timeliness, which guarantees that voters possess the evidence to resolve disputes before the election’s end.
-
• A detailed threat model that considers covert adversaries, as defined by Aumann and Lindell (2010), where entities can misbehave as long as they are not detected. This improves previous results that consider semi-trusted authorities (Basin et al., 2020), assume Honest-but-curious entities (Larriba and López, 2022), or that require trusting the voting authority (Adida et al., 2009; Chaum et al., 2008).
-
• The security analysis of DiReCT in the Universal Composability framework by Canetti (2001), proving that the protocol provides recorded-as-cast and eligibility verifiability, as well as the resolution of both new disputes and of tally-related disputes, such as a tally authority removing a ballot from the tally.
The paper is structured as follows: Section 2 introduces the cryptographic background of the protocol; Section 3 reviews related work; Section 4 covers the voting model, its entities, the security assumptions and properties; Section 5 explains DiReCT and the election process; the security analysis of the protocol is proved in Section 6; the complexity and scalability of the protocol are discussed in Section 7; and, finally, Section 8 summarizes the contributions and future work.
2 Background
This section introduces the fundamental cryptographic primitives as they are used in the base protocol SUVS (2022), namely, the Shamir (1979) secret-sharing scheme and Chaum (1983) blind signatures.
2.1 Shamir Secret-Sharing Scheme
The ballot encoding and tally are based on Shamir (1979) $(j,d)$-secret sharing work. In this scheme a secret C is shared among j participants, but it can only be retrieved if $d+1$ participants collaborate. To share the secret, a polynomial $q(x)$ encodes the secret as its independent term, with randomly chosen coefficients ${a_{d}},\dots ,{a_{1}}$. A set P of random points of $q(x)$ is used as pieces, distributed among the participants (Equation (1)).
(1)
\[ \begin{aligned}{}& q(x)={a_{d}}{x^{d}}+\cdots +{a_{1}}x+C\hspace{2.5pt}\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}p,\\ {} & P=\big\langle \big({x_{1}},q({x_{1}})\big),\dots ,\big({x_{d+1}},q({x_{d+1}})\big)\big\rangle .\end{aligned}\]
To recover C, $d+1$ points are used to interpolate the polynomial modulo the prime p, for example with the Lagrange (1795) interpolation method, where ${l_{i}}$ is the Lagrange Basis Polynomial:
(2)
\[ C=q(0)={\sum \limits_{i=1}^{d+1}}q({x_{i}})\hspace{0.1667em}{l_{i}}(0)\hspace{2.5pt}\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}p.\]
2.2 Blind Signatures
The protocol uses blind signatures as introduced by Chaum (1983). In this scheme, a provider applies a blind function to some data and sends it to the signer. Both the blind function and the corresponding unblind function are only known to the provider, and the blinded output does not leak any information about the data it conceals. When the signer receives the blinded data, it signs it and sends it back to the provider. The sign function has a corresponding verify function, so signatures can be checked without sharing the signing key. The provider can then apply the unblind function and obtain the signature of the original data, thanks to the commutative property of the signing and blinding methods.
The blind signature scheme provides untraceability: it is not possible to link the signed data with the blinded data it comes from. The three methods that implement blind signatures are Blind, Sign and Unblind, with $sk$, $pk$ the corresponding secret and public key of the signer. The methods, as implemented using RSA, are defined as follows:
(3)
\[ \operatorname{Blind}\big(\textit{data},{\textit{mask}^{pk}}\big)=\textit{data}\cdot {\textit{mask}^{pk}}\hspace{1em}\mathrm{mod}\hspace{2.5pt}n,\]
(4)
\[ \begin{array}{r@{\hskip0pt}l@{\hskip10pt}r}\displaystyle {\operatorname{Sign}_{sk}}\big(\textit{data}\cdot {\textit{mask}^{pk}}\big)& \displaystyle ={\operatorname{Sign}_{sk}}(\textit{data})\cdot {\operatorname{Sign}_{sk}}\big({\textit{mask}^{pk}}\big)& \displaystyle \hspace{2.5pt}\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}n,\\ {} & \displaystyle ={\operatorname{Sign}_{sk}}(\textit{data})\cdot \textit{mask}& \displaystyle \hspace{2.5pt}\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}n,\end{array}\]
(5)
\[ \begin{array}{r@{\hskip0pt}l@{\hskip10pt}r}\displaystyle \text{Unblind}\big({\operatorname{Sign}_{sk}}\big(\textit{data}\cdot {\textit{mask}^{pk}}\big)\big)& \displaystyle ={\operatorname{Sign}_{sk}}(\textit{data})\cdot \textit{mask}\cdot {\textit{mask}^{-1}}& \displaystyle \hspace{2.5pt}\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}n,\\ {} & \displaystyle ={\operatorname{Sign}_{sk}}(\textit{data})& \displaystyle \hspace{2.5pt}\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}n.\end{array}\]
For simplicity, we explain the workflow of the protocol using RSA blind signatures, which generate blind signatures through RSA's partially homomorphic properties under modular exponentiation (Hwang et al., 2003). However, any blind signature scheme, such as Pointcheval and Sanders (2016), can be used in its place without affecting the workflow of the protocol, only modifying how the Blind, Unblind and Verify methods are implemented.
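The RSA-based workflow of Equations (3)-(5) can be sketched end to end. The following is a minimal illustration with toy parameters (a real deployment would use full-size RSA keys and padded hashing); all function and variable names are ours, not part of the protocol specification.

```python
# Sketch of Chaum-style RSA blind signatures (Section 2.2).
# Toy parameters for illustration only.
import math

# Signer key generation: n = p*q is public; s and the primes stay secret.
p_, q_ = 1009, 1013                  # toy primes, kept secret by the signer
n = p_ * q_
phi = (p_ - 1) * (q_ - 1)
va = 17                              # public component (public RSA exponent)
assert math.gcd(va, phi) == 1
s = pow(va, -1, phi)                 # secret key: va * s == 1 (mod phi(n))

def blind(data: int, mask: int) -> int:
    # Blind(data, mask^va) = data * mask^va mod n  (Equation (3))
    return (data * pow(mask, va, n)) % n

def sign(blinded: int) -> int:
    # Sign_s(x) = x^s mod n, computed by the signer on blinded data (Eq. (4))
    return pow(blinded, s, n)

def unblind(blind_sig: int, mask: int) -> int:
    # Multiplying by mask^{-1} leaves Sign_s(data)  (Equation (5))
    return (blind_sig * pow(mask, -1, n)) % n

def verify(sig: int, data: int) -> bool:
    # Verification needs only the public component: sig^va == data (mod n)
    return pow(sig, va, n) == data % n

data, mask = 424242, 12345           # mask must be invertible modulo n
assert math.gcd(mask, n) == 1
sig = unblind(sign(blind(data, mask)), mask)
assert verify(sig, data)             # valid signature on the original data
assert sign(blind(data, mask)) != sig  # signer never sees data or signature
```

The signer only ever handles the blinded value, which is the untraceability property used during ballot certification.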
3 Related work
Electronic voting protocols can be classified according to the cryptographic technique they use to achieve anonymity and verifiability, the most popular being mix-nets (1981), homomorphic encryption (1978), and blind signatures (1983).
Mix-nets, introduced by Chaum (1981), involve the shuffling and re-encryption of ballots to sever the connection between voters and their votes. Jakobsson et al. (2002) introduced the RPC mix-net, a method designed to enhance the robustness of voting systems without requiring full correctness proofs. In general, however, generating and verifying mix-net proofs is computationally intensive. Although later work by Furukawa et al. (2010) improved the efficiency of this approach, the time required to produce the proofs remains a significant limitation.
Homomorphic encryption, introduced by Rivest et al. (1978), is a technique used to preserve anonymity in voting systems. Existing e-voting systems primarily adopt Paillier encryption (2008) and ElGamal decryption, as defined by Kiayias and Yung (2002), Lee and Kim (2003). Cramer et al. (1996, 1997) proposed a new cryptographic protocol for multi-authority environments, enabling the distribution of authority among multiple entities. However, because individual ballots are not decrypted, each ballot must be verified using zero-knowledge proofs. As a result, applying this approach to multi-choice voting systems is challenging due to the significant computational overhead. Nevertheless, it can be combined with other techniques to enhance functionality. For example, Sebé et al. (2010) demonstrated its integration with mix-nets to improve the efficiency of voting systems, and Yang et al. (2017, 2018) proposed a homomorphic-based voting protocol for ranked choice.
Blind signatures (1983) are used in voting protocols during the voter registration phase to authorize the voter's ballot without revealing its content to the authority. Their use was first introduced by Fujioka et al. (1993), and later Okamoto (1998) proposed the first practical receipt-free voting scheme suitable for large-scale elections. However, blind signature schemes require the signer to be fully trusted: if the signer is compromised, an attacker could issue and cast arbitrary numbers of ballots, undermining the integrity of the election. Given these limitations, protocols have combined blind signatures with other techniques to limit the power of the authority: Li et al. (2009) proposed a verifiable process to identify different attacks on the protocol, and Nguyen Thi and Dang (2013) combined blind signatures with a dynamic ballot to let voters update their ballot without the authority's intervention. However, it is the combination of blind signatures with threshold cryptographic techniques (Desmedt and Frankel, 1990) that most effectively limits the authority's ability to stuff ballots. Juang et al. (2002) introduced a scheme supporting distributed authorities by applying threshold methods. The work by Gong et al. (2019) uses threshold blind signatures to protect the privacy of the voter and guarantee eligibility in a blockchain-based implementation. More recently, Larriba et al. (2020), Larriba and López (2022) presented a lightweight threshold-based voting system that also achieves public verifiability. Other authors have proposed similar decentralized solutions using ring signatures (Chen et al., 2008; Tornos et al., 2014). Blind signatures have also been discussed in other voting modalities, with a recent work by Willemson (2023) discussing the benefits of using blind signatures to achieve eligibility verifiability in postal voting.
Although Blockchain technology is not a cryptographic primitive, it has been proposed as a supporting technology in the implementation of voting protocols. Such protocols still combine the blockchain with other cryptographic primitives. Yang et al. (2020) makes use of group signatures to protect the privacy of the voter, and homomorphic encryption to compute the final tally. Gao et al. (2019) makes use of ring signatures, and includes an audit mode at the end of the election, implemented with blockchain, to detect mismatches in the tally. Larriba et al. (2021) also uses ring signatures to protect the privacy of voters, and implements a multi-party voting protocol to increase the trust in the system. Other protocols, such as Adida (2008), Larriba and López (2022), do not mention blockchain in their design, but they still require a public bulletin board, which can be implemented as a blockchain.
Regarding security guarantees, few voting protocols go beyond verifiability and offer dispute resolution or individual accountability, given the complexity of introducing these properties while preserving the privacy and coercion resistance of the voter (Pankova and Willemson, 2022).
A principal work on dispute resolution was published by Basin et al. (2020). The authors introduce three types of disputes and propose a generic protocol (MixNet) that addresses two of them. This is one of the few protocols that incorporate dispute resolution in their design, although it leaves the problem of eligibility unresolved. Another recent work including dispute resolution is Themis by Bougon et al. (2022), which also includes a formal verification of its properties. However, since Themis is a hybrid protocol, the dispute resolution requires the collaboration of a voting officer. Themis achieves partial accountability, which means that for some of the disputes it can detect the issue but not identify the specific adversary. A detailed comparison is included in the next section.
In this work, we extend the blind signature scheme SUVS due to its low computational overhead, eligibility verifiability, and flexibility. Inspired by recent work on dispute resolution, we extend the protocol and address disputes for cases not considered, such as when voters are blocked from participating in the election. In addition, the protocol improves the timeliness of the resolution of arising disputes. The protocol makes intensive use of the public bulletin board, which can be provided by blockchain implementations such as Gong et al. (2019).
3.1 Dispute Resolution
The concept of a dispute is stated by Basin et al. (2020) as arising “when a voter claims that the voting authority is dishonest and did not correctly process his ballot while the authority claims to have followed the protocol”. Basin et al. distinguish between universal and individual verifiability to identify relevant disputes in electronic voting. Universal verifiability properties can be checked with access only to public data (e.g. the PBB). They claim that universally verifiable properties do not lead to disputes, because any voter or third party with access to the PBB can audit the property. Individual verifiability, in contrast, describes properties that only the voter can check, because she is the only one who knows which ballot has been cast, as well as some other private data. This characteristic leads to the occurrence of disputes. On the one hand, voters are able to detect problems with their ballots but may not be able to prove them to others. On the other hand, voters can accuse an honest authority of misbehaving, and the authority may not be able to prove its honesty. Basin et al. (see Table 1) also identify three problems related to individual verifiability:
-
1. The voter is blocked from casting the ballot. The reasons can be technical or social. This is connected with voter suppression and disenfranchisement in traditional elections. Although defined, this dispute was left out of scope by the authors.
-
2. The voter is prevented from verifying her ballot after casting.
-
3. The voter cannot check whether the ballot was recorded. The authors describe two types of dispute: either the cast ballot was not recorded (D1); or an uncast ballot was recorded for the voter (D2).
In this regard, the protocol Themis by Bougon et al. (2022) considers a casting dispute where the voter can verify that her physical ballot was considered, as long as the server or a set of observers are trustworthy.
Although related to accountability, dispute resolution is a stronger requirement, since it requires unambiguity. In other words, dispute resolution eliminates the need for a trusted authority acting as a judge: any honest third party can act as such using the protocol execution and the data in the PBB.
However, different protocols consider different notions of accountability. In Basin et al. (2020), the definition of dispute resolution only includes partial accountability: for a protocol with multiple parties, it is enough to detect that a party misbehaved, but not which one. Themis improves the accountability guarantees by trying to identify which exact party misbehaved; however, in some cases it can only narrow down the blame to a set of parties, achieving partial accountability. Our proposed protocol, DiReCT, is designed to provide individual accountability, as shown in the security analysis in Section 6.
4 Voting Model
DiReCT’s voting model is structured around a secret sharing scheme and blind signatures (Sections 2.1 and 2.2), a design proposed and implemented in SUVS by Larriba and López (2022). In this voting model, the secret (ballot) is divided into shares and a commitment is created and blinded with a mask. The voting authority authenticates voters and signs the blinded commitments. The voters send the unblinded signed commitments to the tallying authorities. These authorities pool the shares during the tally to obtain the original votes.
4.1 Entities and Their Roles
The entities involved in the proposed voting protocol are: the voters; the voting authority (VA); the k parties; and, a Public Bulletin Board (PBB).
-
• The voters are the members of the census.
-
• The voting authority VA initially sets up the protocol and is in charge of signing the ballots of valid voters.
-
• Party ${p_{j}}$ is a candidate in the election. It also plays the role of a tallying authority.
-
• A PBB is used to communicate public information. The PBB behaves as an append-only bulletin board (Heather and Lundin, 2009), providing a consistent view of the information posted, without the possibility of deleting any post. The PBB does not require any special assumptions compared to its use in other protocols (Doan et al., 2025; Mosaheb et al., 2025; Cuvelier et al., 2013). For the sake of clarity, we assume the existence of topics in the PBB, in the form of PBB:topic. Every entry in the PBB includes the issuing entity and a timestamp.
4.2 Security Assumptions
Based on the previous definition of entities and their roles, we compile here the security assumptions of the protocol. We present them as a list for clarity, so they can be referenced throughout the protocol description:
${\operatorname{Sign}_{v}}$ denotes a signature issued by v with its certificate. The census is not an active entity in the protocol and, as other voting protocols (Adida et al., 2009), its implementation is outside the scope of this paper.
Covert adversaries are a type of semi-trusted entity that can misbehave as long as they do so surreptitiously. In other words, we assume that a covert adversary may perform an attack unless it can be attributed to them with public, undeniable evidence (individual accountability). Covert adversaries represent a lower trust assumption compared with trusted or Honest-but-Curious (HbC) authorities. HbC entities represent a privacy threat to voters, although they are trusted to follow the protocol.
Aumann and Lindell (2010) formalized covert adversaries as a type of adversary which “faithfully models the adversarial behaviour in many commercial, political, and social settings”. In terms of the adversary capability assumptions, covert adversaries are between an HbC, who follows the protocol and only targets the privacy, and a fully malicious adversary, with unlimited resources and no restraints.
The concept of conflict of interest in e-voting was introduced by Moran and Naor (2010). We use the extended definition of Zou et al. (2017), where parties are not assumed to be trustworthy, but are assumed not to collude with each other. In practice, it is enough if just one party does not collude with the rest. In other words, the protocol works as expected as long as not all the parties in the election collude with each other.
By definition, covert adversaries are not obliged to respond or forward messages unless it leads to individual accountability. SA4 is the only exception during the protocol: we assume that, among all the candidates in the election, there is at least one party interested in completing the process. Such a party can still behave as a covert adversary in the rest of the protocol. This is still a weaker assumption compared with other works, where authorities are trusted, or are assumed to always reply or forward messages even when they misbehave (Basin et al., 2020).
Its implementation is not part of the protocol design; we note that it can be implemented with anonymous networks such as Tor (2016) or STORK Consortium (2017). This is a weaker assumption compared with mix-nets, which are needed to compute the tally anonymously, whereas DiReCT only requires a channel that guarantees the anonymity of the transmission.
4.3 Security Properties and Definitions
This subsection describes a compact list of the desired properties of the protocol and common definitions.
Verifiability in e-voting has different approaches depending on the author. In this paper, we follow the strict definitions of verifiability from Moran and Naor (2010), Bougon et al. (2022), but divide it into individual, universal and eligibility verifiability, as defined by Kremer et al. (2010).
Individual verifiability is defined as the voters’ ability to check that their ballot is recorded as cast (2010) and that the ballot content did not change, known as cast-as-intended or CaI (2010).
The universal verifiability property ensures that anyone (voters and external auditors) can verify that the results of the elections align with the cast ballots (counted-as-cast) (Kremer et al., 2010; Moran and Naor, 2010; Bougon et al., 2022). Eligibility verifiability ensures that only ballots from registered voters are counted, with at most one ballot per voter (Kremer et al., 2010; Bougon et al., 2022).
Attacks performed by covert adversaries are also known as covert attacks. However, this term is also used to refer to attacks that cannot be detected. In our protocol, we use surreptitious attacks to distinguish attacks where the origin cannot be proved.
Vote suppression happens when the voting authority prevents some eligible voters from participating in the election. Denial of Casting (DoC) is an equivalent attack where the voter’s ballot casting is blocked by an authority.
Timeliness, as defined by Basin et al. (2020), guarantees that a voter possesses the evidence to resolve disputes no later than the election’s end. We increase the granularity of the timeliness definition by applying it to three specific steps in the protocol: ${T_{\text{Cert}}}$ for the VA to respond to a ballot certification request; ${T_{\text{Cast}}}$ for the parties to respond to a ballot casting; and ${T_{\text{Tally}}}$ for the parties to make public their shares during the tally. After the certification or the cast is performed, or after the tally starts, the election defines a specific window of time the process can last. Setting those timeouts allows the voter to detect the issue and resolve the dispute. Time constraints, such as the end of the voting period or how long the voter needs to wait for a response, are implicit parameters in the protocol specification. Still, in protocols such as Bougon et al. (2022), it is expected that the voter waits for a response and can raise an issue otherwise. Note that the protocol does not rely on the synchronization of different entities to ensure its security guarantees. Timeouts serve only as upper limits on the expected processing time for various requests and can be set generously to accommodate potential timing differences. For instance, if the processing time for a request typically ranges from less than a minute to 5 minutes, setting a timeout of 15 minutes ensures that a missing response is clearly detectable while also accounting for synchronization differences of several minutes.
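As an illustration of how a voter client might apply these timeouts, the following sketch (our own naming, not part of the protocol specification) flags a request as blocked once the agreed window elapses without a response:

```python
# Illustrative sketch: using the public timeouts T_Cert, T_Cast and T_Tally
# (Section 4.3) to decide that a request was blocked. Names are ours.
from dataclasses import dataclass

@dataclass
class PendingRequest:
    topic: str        # e.g. "certify" or "cast"
    sent_at: float    # local timestamp when the request was posted (seconds)
    timeout: float    # agreed public timeout for this step (seconds)

def is_blocked(req: PendingRequest, now: float, responded: bool) -> bool:
    """A missing response after the agreed window is evidence of blocking."""
    return (not responded) and (now - req.sent_at > req.timeout)

# Timeouts are upper bounds, set generously (e.g. 15 min for a ~5 min step),
# so clock skew of a few minutes between entities cannot cause false alarms.
req = PendingRequest(topic="certify", sent_at=1000.0, timeout=15 * 60)
assert not is_blocked(req, now=1000.0 + 5 * 60, responded=False)  # waiting
assert is_blocked(req, now=1000.0 + 16 * 60, responded=False)     # raise DCert
assert not is_blocked(req, now=1000.0 + 16 * 60, responded=True)
```

Because the timeouts are public and generous, a timeout-based claim is verifiable by any third party watching the PBB, without synchronized clocks.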
5 Protocol
This section describes DiReCT and the steps involved in the three phases of the election: the preparation phase (Section 5.1), the voting phase (Section 5.2) and the tally phase (Section 5.3).
5.1 Preparation Phase
The preparation phase is divided into: the identity provision, where voters get credentials to participate in the election; and the system setup, where the election parameters are decided.
5.1.1 Identity Provision
The starting point of the identity provision in DiReCT is the existence of an honest census that can provide voters with a certificate (SA1).
The census owns the list of valid voters and a key pair, publishing the public key PK to PBB:prepare. For every voter v in the list of valid voters, the census issues a certificate ${\text{C}\text{ERT}_{v}}$, signed with its secret key and verifiable with the public key PK.
5.1.2 System Setup
The system setup involves the publication of keys and parameters needed for the secret sharing scheme, blind signatures and hashing. The relationship of parameters is displayed in Table 2.
The parameters associated with the secret sharing scheme (Section 2.1) are: the degree d of the polynomials; the prime p used in the modulo operations; and the maximum number of points the voter can generate l.
To set up the blind signatures (Section 2.2), the VA selects two prime numbers that are kept secret. The product of those primes, n, is published, and it is used in the modular arithmetic of the blind signature messages. The VA generates a secret key s and a public component $va$ such that $va\cdot s\equiv 1\hspace{2.5pt}(\mathrm{mod}\hspace{2.5pt}\phi (n))$.
The hash function $\textit{Hash}$ and a freshness value fresh used in the election are to be decided before the election. The freshness value prevents reusing messages from other elections or precomputation attacks (Oechslin, 2003). It should be a value not previously used and unknown before the election starts. Both fresh and $\textit{Hash}$ are public.
The protocol makes use of the time variables ${T_{\text{Cert}}}$, ${T_{\text{Cast}}}$ and ${T_{\text{Tally}}}$. They are timeouts for the voter to get the confirmation of the ballot certification (${T_{\text{Cert}}}$), ballot casting (${T_{\text{Cast}}}$), and the time the parties have to publish the shares at the beginning of the tally (${T_{\text{Tally}}}$). As explained in Section 4.3, they are not needed in elections with honest authorities, but, in the presence of corrupt authorities, they help determine whether a request has been blocked. The variables should be agreed upon by the VA and the parties, and be public before the election starts.
Table 2
Setup Parameters. Public parameters are shared in the PBB with the topic prepare. Private parameters are kept secret by the entities that create them.
| Type | Public | Private |
| Census | Public key PK, used to validate certificates | Secret key, used to sign certificates |
| Secret sharing | Degree d, prime p and max number of points l | |
| Blind signature | Modulus n, public component $va$ | Secret key s, and two prime factors of n |
| Hashing | $\textit{Hash}$ function, fresh value | |
| Waiting time | ${T_{\text{Cert}}}$, ${T_{\text{Cast}}}$ and ${T_{\text{Tally}}}$ |
5.2 Voting Phase
In the voting phase voters participate in the election. It starts with the voter crafting the ballot, encoding their vote direction and producing the ballot shares. Next comes the ballot certification, where the VA blindly signs the ballots and shares of valid voters in the census. The phase finishes with the casting, where the voters send their shares to the parties to be counted during the tally. Figure 1 depicts a simplified version of the process.
5.2.1 Ballot Crafting
The voter starts by encoding the vote direction as an integer C. The encoding is left as an implementation detail, but it can accommodate different types of voting. The encoded vote direction C is used as the independent term of a d-degree polynomial $q(x)$, as shown in Equation (1). According to the secret sharing scheme explained in Section 2.1, the voter can sample a set of points P from the polynomial (1), which can be used to form the ballot. Any single point from the set does not provide any information about the vote direction; only $d+1$ points allow the interpolation of $q(x)$ to obtain C. Each one of the k parties in the election receives a subset (SP) of P.1
Every time the shares are collected into P they are sorted to produce a consistent output. The sorting operation is omitted in the notation for the sake of clarity.
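The encoding of the vote direction and the recovery of the secret can be sketched as follows; the prime, degree and number of points are toy choices for illustration, not values prescribed by the protocol:

```python
# Sketch of the (j, d)-secret sharing used for ballot encoding
# (Sections 2.1 and 5.2.1). The vote direction C is the polynomial's
# independent term; any d+1 points recover it by Lagrange interpolation.
import random

P_MOD = 2**61 - 1   # a prime p for the modular arithmetic (toy choice)

def share(C: int, d: int, num_points: int, rng: random.Random):
    """Encode secret C as q(0) of a random degree-d polynomial; sample points."""
    coeffs = [C] + [rng.randrange(1, P_MOD) for _ in range(d)]
    def q(x: int) -> int:
        return sum(a * pow(x, i, P_MOD) for i, a in enumerate(coeffs)) % P_MOD
    xs = rng.sample(range(1, 10 * num_points), num_points)  # distinct x values
    return [(x, q(x)) for x in xs]

def reconstruct(points) -> int:
    """Lagrange interpolation at x = 0 over GF(p): recovers C from d+1 points."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        li = 1  # Lagrange basis polynomial l_i evaluated at 0
        for j, (xj, _) in enumerate(points):
            if i != j:
                li = li * (-xj) * pow(xi - xj, -1, P_MOD) % P_MOD
        secret = (secret + yi * li) % P_MOD
    return secret

rng = random.Random(7)
C, d = 3, 2                              # encoded vote direction and degree
points = share(C, d, num_points=6, rng=rng)
assert reconstruct(points[:d + 1]) == C  # any d+1 points recover C
assert reconstruct(points[1:d + 2]) == C
```

Distributing disjoint subsets of the points among the k parties means no single party (holding fewer than d+1 points) learns anything about C before the tally.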
To craft the ballot commitment, the ballot is digested using the agreed hash function $\textit{Hash}$ and the freshness value fresh (Equation (7)).
With the addition of the fresh value to the vote commitment, an adversary cannot reuse commitments from previous instances of the protocol. The commitment of the share ${\text{H-SP}_{i}}$ allows linking each share with the ballot, which will play a role in the casting verification.
(7)
\[ \begin{aligned}{}& \text{H-P}:=\operatorname{Hash}(P\hspace{0.1667em}|\hspace{0.1667em}\textit{fresh}),\\ {} & {\text{H-SP}_{i}}:=\operatorname{Hash}({\text{SP}_{i}}\hspace{0.1667em}|\hspace{0.1667em}\text{H-P}).\end{aligned}\]
5.2.2 Ballot Certification
During the ballot certification, the voter certifies the commitments with the VA. This guarantees that a voter can only vote once. The identity of the voter is used in this step. To prevent linking the identity with the vote direction during the certification, blind signatures are used as described in Section 2.2.
First, the voter blinds the commitments using the VA public component $va$ (Equation (8)).
Each mask needs to be invertible modulo n, so the signature can be unblinded, and the masks must be different from each other. The voter uses the credentials obtained in the identity provision (Section 5.1.1) to sign BBallot. This step ensures that every voter can only obtain one ballot (unicity) and that only voters with a valid certificate from the census can vote (democracy), and it prevents the VA from creating extra ballots (ballot stuffing), since the VA does not possess a voter certificate.
(8)
\[ \begin{aligned}{}& \operatorname{Blind}(\text{H-P}):=\operatorname{Blind}\big(\text{H-P},{\textit{mask}^{va}}\big),\\ {} & \operatorname{Blind}{(\text{H-SP})_{i}}:=\operatorname{Blind}\big({\text{H-SP}_{i}},{\textit{mask}_{i}^{va}}\big),\\ {} & \textit{BBallot}:=\big(\operatorname{Blind}(\text{H-P}),{\big[\operatorname{Blind}({\text{H-SP}_{i}})\big]_{i=1}^{k}}\big).\end{aligned}\]
Then, the voter v creates a secure communication channel with the PBB:certify to pose a request for certification (Equation (9)).
The VA retrieves the requests from PBB:certify and processes them: it verifies that the certificate ${\text{CERT}_{v}}$ is valid and that the voter v has not requested a ballot before. If the checks are satisfied, the VA signs the concealed ballot ${\operatorname{Sign}_{\text{VA}}}(\textit{BBallot})$. The response is sent to PBB:certified:
The reason for publishing this information is three-fold: the voter can confirm that the ballot has been received as intended; it prevents the VA from covertly blocking a voter by not sending the response to the certification; and every party can check that only voters with valid credentials are certified by the VA.
The voter can easily retrieve the signed data from PBB:certified by unblinding it (Equation (11)), using the original masks. The process ends with the voter obtaining the signed ballot SBallot (Equation (12)). Hence, the certification of the commitments is performed without revealing the vote.
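The whole certification round trip (Equations (8)-(12)) follows textbook RSA blind signatures (Section 2.2). A sketch with toy parameters; the tiny primes, the fixed message, and the mask-sampling loop are illustrative assumptions, not the protocol's real key sizes:

```python
from math import gcd
import secrets

# Toy RSA key for the VA (illustration only; real deployments use e.g. RSA-2048).
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))        # VA's private exponent

def blind(m, r):   return (m * pow(r, e, n)) % n   # voter blinds commitment m
def sign(b):       return pow(b, d, n)             # VA signs the blinded value
def unblind(s, r): return (s * pow(r, -1, n)) % n  # voter removes the mask
def verify(s):     return pow(s, e, n)             # anyone recovers m

m = 424242 % n                            # a commitment, reduced into the group
r = secrets.randbelow(n - 2) + 2
while gcd(r, n) != 1:                     # mask must be invertible modulo n
    r = secrets.randbelow(n - 2) + 2

sig = unblind(sign(blind(m, r)), r)
assert verify(sig) == m                   # valid signature on the unblinded m
```

The VA only ever sees `blind(m, r)`, so it certifies the commitment without learning it; the voter ends up with $m^{d} \bmod n$, verifiable by any third party.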
5.2.3 Ballot Casting
To cast her vote, the voter v sends to each party the signed ballot, a share SP and the share commitment at time ${T_{2}}$, as described in Equation (13).
The party j receiving the share verifies both the signed commitment of the share and the signed ballot (Equation (14)).
If the checks are successful, the party publishes a receipt in the PBB:cast. The receipt is the content of the cast message without the share. With the receipt the voter can verify that the party received the correct information.
During the tally, any entity can detect if a party sends a different ${\text{SP}_{i}}$ than the one signed by checking the information in the PBB:cast.
(14)
\[ \begin{aligned}{}{\text{Verify}_{\text{VA}}}({\operatorname{Sign}_{\text{VA}}}\left(\text{H-P}\right))& \stackrel{?}{=}\hspace{2.5pt}\text{H-P},\\ {} \big[{\text{Verify}_{\text{VA}}}\big({\operatorname{Sign}_{\text{VA}}}({\text{H-SP}_{i}})\big)& \stackrel{?}{=}\hspace{2.5pt}{\text{H-SP}_{i}}\big]{_{i=1}^{k}},\\ {} \operatorname{Hash}({\text{SP}_{j}}|\text{H-P})& \stackrel{?}{=}\hspace{2.5pt}{\text{H-SP}_{j}}.\end{aligned}\]
(15)
\[ \text{Casting}({\text{party}_{j}}\to \text{PBB}):={\text{party}_{j}},\langle {\text{H-SP}_{j}},\textit{SBallot}\rangle ,{T_{3}}.\]

The distribution of the shares among the candidates follows Shamir's secret sharing scheme (Section 2.1), where at least $d+1$ points are needed to interpolate a degree-$d$ polynomial. These are the possible situations where the shares can be collected:
• During the tally, when all the parties post the share and ballot in the PBB.
• If all parties collude and exchange their shares before the tally. This is not possible in our model, since the parties have conflicting interests (SA3).
• If the voter sends all the shares to a single party. Although such a vote can be detected and discarded, doing so conflicts with the recovery mechanism against a denial of casting. We study this situation in Section 6.3.
5.3 Tally Phase
Once the voting finishes, the tally phase starts, with the parties sending the shares to the PBB:tally:
To reconstruct the ballot, the parties first need to collect the shares with an identical $\textit{SBallot}$. With all the shares, each party can sort them and reconstruct the original P, as seen in Equation (6). With P it is straightforward to obtain the original polynomial $q(x)$, for example using Lagrange polynomials (see Equation (2)). The direction of the vote can then be obtained from $q(x)$ as the independent term $C=q(0)\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}p$.
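The reconstruction step can be sketched with Lagrange interpolation at $x=0$ modulo a prime; the prime and the toy polynomial below are illustrative:

```python
# Sketch of the tally reconstruction: Lagrange interpolation at x = 0 (mod p)
# recovers the independent term C = q(0), i.e. the vote direction.
p = 2**31 - 1  # illustrative prime

def reconstruct_vote(points):
    """points: list of (x, y) shares; a degree-d polynomial needs d+1 of them."""
    c = 0
    for i, (xi, yi) in enumerate(points):
        li = 1  # Lagrange basis polynomial l_i evaluated at x = 0
        for j, (xj, _) in enumerate(points):
            if i != j:
                li = li * (-xj) * pow(xi - xj, -1, p) % p
        c = (c + yi * li) % p
    return c

# Example: q(x) = 7 + 3x + 5x^2, so the recovered vote direction is C = 7.
q = lambda x: (7 + 3 * x + 5 * x * x) % p
shares = [(x, q(x)) for x in (1, 2, 3)]
assert reconstruct_vote(shares) == 7
```

Any three of the shares recover the same $C$, which is why withholding fewer than the threshold reveals nothing while the full set makes the tally deterministic.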
6 Security Analysis
The analysis of the protocol has been carried out according to the Universal Composability (UC) framework (Canetti, 2001), which compares the real-world behaviour of the protocol with an ideal one to prove the validity of certain statements. The actions of a real-world adversary are applied by simulation. Security holds if the ideal and real behaviours are indistinguishable, which implies that the adversary's actions do not affect the functionality being analysed. When studying the security of a functionality, the initial state needs to be defined, which allows breaking down the protocol into components. Similarly, it is possible to use a dummy party that acts as a placeholder for an ideal functionality, modelling the case of parties that are not corrupted by the adversary. This combination of the real world and the adversary with calls to an ideal functionality is known as a hybrid protocol.
Following the UC framework, this section defines the ideal functionalities that model honest entities and properties, with the covert adversary behaviour included in the form of attacks. Theorems and proofs specify how properties are achieved, how attacks are rendered impossible in DiReCT, or how the covert adversary can be identified if it performs the attack (individual accountability). Disputes are characterized as the combination of two attacks: one where the authority misbehaves, and another where a voter makes a false claim against an honest authority. A subsection covers each of the phases in the protocol, grouping the corresponding ideal functionalities, attacks, disputes and proofs.
6.1 Disputes in DiReCT
Dispute resolution (Section 3.1) is a key contribution of this paper. In the work that introduced the notion, Basin et al. (2020) identify two disputes regarding individual verifiability: claims for unrecorded ballots (D1), and claims for uncast recorded ballots (D2).
We adapt these definitions to DiReCT without affecting their meaning. D1 is included as DTally, although in DiReCT the recorded ballots can be universally verified (recorded as cast). D2 does not apply as is to DiReCT, instead we consider a more general attack where an authority falsifies ballots (ballot stuffing), and prove that the dispute is not possible in DiReCT. Furthermore, this paper introduces two new disputes. The first one, DCert, is a dispute related to vote suppression, where the voter is blocked from the authentication and ballot certification. The second one, DCast, is a dispute related to denial of casting (DoC), where the voter is blocked from verifying the ballot casting.
6.2 Ballot Certification
The behaviour of a trusted VA during the ballot certification is modelled by the ideal functionality ${\mathcal{F}_{\textit{Cert}}}$. It ensures that any voter with valid credentials obtains a signed ballot within a given time ${T_{\text{Cert}}}$. It uses the public bulletin board PBB to log public message exchanges. The adversary can read the information but cannot block access to it, and can only post by corrupting an entity with access. The PBB provides integrity and non-repudiation: the posted content cannot be altered by the adversary and its origin is unambiguous. Posted messages include the entity, the content and a timestamp (Section 4.1).
The ideal functionality or honest VA only knows that BBallot is a list of $k+1$ bit strings, since it has been blinded. In the case of an honest voter, BBallot corresponds to the ballot and shares commitments (Equation (8)).
We define a second ideal functionality ${\mathcal{F}_{\textit{CertObs}}}$, which models the universal verification of the Ballot Certification within a time frame ${T_{\text{Cert}}}$ (by any external auditor, party or voter). ${\mathcal{F}_{\textit{CertObs}}}$ outputs a 1 if the certification is performed on time, 0 otherwise.
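The check performed by ${\mathcal{F}_{\textit{CertObs}}}$ can be sketched as a pure function over the public log; the entry field names (`T`, `sig`, `bballot`) and the `verify_va` callback are illustrative assumptions:

```python
def cert_observed(pbb_certified, cert_req, t_cert, verify_va, now):
    """Sketch of the F_CertObs check: return 1 iff a VA response whose
    signature opens to the requested BBallot appears on the public log
    within t_cert of the request's timestamp; 0 once the window expires."""
    deadline = cert_req["T"] + t_cert
    for entry in pbb_certified:
        if entry["T"] <= deadline and verify_va(entry["sig"]) == cert_req["bballot"]:
            return 1
    return 0 if now >= deadline else None  # None: still within the window
```

Because it only reads the PBB, any auditor can run the same check, which is what makes the certification universally verifiable.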
Vote Suppression Dispute (DCert)
The DCert vote suppression dispute is linked with two attacks: a malicious VA blocking honest voters (vote suppression); and a malicious voter claiming a vote suppression from an honest VA (false suppression claim).
In the vote suppression attack, the real world adversary $\mathcal{M}$ corrupts the VA. Since voters reveal their identity to the VA, the adversary can selectively block honest voters from ballot certification.
To accomplish the dispute resolution in case of vote suppression, any third party should be able to detect the attack. Framed in terms of a covert adversary, it must not be possible for the adversary to perform the vote suppression surreptitiously. This is analogous to proving that, in the hybrid protocol where the vote suppression is performed, the adversary cannot prevent the ideal functionality ${\mathcal{F}_{\textit{CertObs}}}$ from outputting 0.
Proof.
A valid certification request is suppressed if ${\mathcal{F}_{\textit{CertObs}}}$ outputs 0. Note that ${\mathcal{F}_{\textit{CertObs}}}$ only aborts at step (a). However, since CertRq is valid, ${\mathcal{F}_{\textit{CertObs}}}$ does not abort, no matter what action S performs.
To succeed in the covert attack, the adversary needs to perform the vote suppression while ${\mathcal{F}_{\textit{CertObs}}}$ outputs 1. The options for S are:
• If S omits the answer (i), after some time ${T_{\text{Cert}}}$, ${\mathcal{F}_{\textit{CertObs}}}$ outputs a 0 (step (d)).
• If S signs a different content (ii), in step (b) ${\mathcal{F}_{\textit{CertObs}}}$ detects the mismatch of the signed content ${\text{Verify}_{\text{VA}}}({\operatorname{Sign}_{\text{VA}}}(X))\ne \textit{BBallot}$. After ${T_{\text{Cert}}}$, ${\mathcal{F}_{\textit{CertObs}}}$ outputs a 0 (step (d)).
• If S sends a different signature (iii), similarly to the previous case, ${\mathcal{F}_{\textit{CertObs}}}$ detects the mismatch ${\text{Verify}_{\text{VA}}}({\operatorname{Sign}_{{\text{VA}^{\prime }}}}(\textit{BBallot}))\ne \textit{BBallot}$ and outputs a 0.
Consequently, the adversary cannot avoid ${\mathcal{F}_{\textit{CertObs}}}$ outputting 0 at time $T={T_{0}}+{T_{\text{Cert}}}$. □
The ideal functionality ${\mathcal{F}_{\textit{CertObs}}}$ can be translated to the real world behaviour by any third party with access to the PBB. This is what allows us to translate the hybrid protocol proof from Theorem 1 to the real world.
A voter corrupted by $\mathcal{M}$ can pose a false suppression claim against an honest VA. The adversary's goal is for ${\mathcal{F}_{\textit{CertObs}}}$ to output 0, as universally verifiable evidence of the suppression of the ballot certification request. Theorem 2 proves that this false claim is not possible.
Proof.
A false suppression claim succeeds if ${\mathcal{F}_{\textit{CertObs}}}$ outputs 0. Let us review each of the actions of S in the false suppression claim hybrid protocol and the resulting output of ${\mathcal{F}_{\textit{CertObs}}}$:
• Step (i): CertReq is the input of ${\mathcal{F}_{\textit{CertObs}}}$. Without it, ${\mathcal{F}_{\textit{CertObs}}}$ cannot output 0.
• Steps (ii) and (iii): both cases are checked in step (a), which results in an abort.
• Step (iv): ${\mathcal{F}_{\textit{CertObs}}}$ verifies whether the voter has received a previous CertRes, in which case it aborts (step (a)).
Thus the adversary cannot change the behaviour of ${\mathcal{F}_{\textit{CertObs}}}$ with its actions. □
6.3 Ballot Cast
The ideal functionality ${\mathcal{F}_{\textit{Cast}}}$ models an honest party in the casting phase. It validates a ballot share of a voter and posts a receipt in a public log within time ${T_{\text{Cast}}}$.
It is assumed that ${T_{\text{Cast}}}$ is chosen based on the infrastructure and election’s size. Also, in the worst case scenario, the posting will happen right before the tally starts.
We define a second ideal functionality ${\mathcal{F}_{\textit{CastObs}}}$, which models how an honest voter can verify that the casting has been performed within a time frame ${T_{\text{Cast}}}$. ${\mathcal{F}_{\textit{CastObs}}}$ outputs 1 if the casting is performed on time, 0 otherwise.
Cast verification dispute (DCast). The second dispute we consider affects the voter's verification of her ballot casting (DCast). Given that the communication between the voter and the parties is not public, the adversary may attempt an attack without being identified (covert adversary). Two attacks can cause the dispute: a malicious party blocking some honest voters (Denial of Cast or DoC); and a malicious voter claiming to have been denied by an honest party (false DoC claim).
The DoC hybrid protocol summarizes how the adversary can avoid confirming the casting to the user. Theorem 3 proves that DoC is not possible in DiReCT.
Theorem 3.
A corrupt party cannot block the ballot casting of a voter without the voter noticing it.
Proof.
A valid ballot casting is blocked if ${\mathcal{F}_{\textit{CastObs}}}$ outputs 0. First, let us analyse whether ${\mathcal{F}_{\textit{CastObs}}}$ could abort. The ideal functionality only aborts in step 1, when the casting message is invalid; but the premise of the attack is that the adversary is blocking a valid message, thus ${\mathcal{F}_{\textit{CastObs}}}$ does not abort.
The simulator can then perform two actions. If S does not publish the receipt, ${\mathcal{F}_{\textit{CastObs}}}$ will not find it and, after waiting ${T_{\text{Cast}}}$, will output 0 (step 4). Similarly, if S publishes a receipt with the wrong share commitment, ${\mathcal{F}_{\textit{CastObs}}}$ will ignore it and output 0.
The simulator cannot perform the denial of casting without the voter noticing after a time ${T_{\text{Cast}}}$, since it cannot modify the behaviour of ${\mathcal{F}_{\textit{CastObs}}}$. □
We prove in Theorem 4 that every user can verify her casting before the tally.
Proof.
In the protocol, ${T_{\text{Cast}}}$ is the reasonable maximum time to receive the casting confirmation. If the voter casts her ballot to a party j within time, according to ${\mathcal{F}_{\textit{CastObs}}}$, either j posts the receipt of the casting or the voter can detect the blocking after waiting ${T_{\text{Cast}}}$. In both cases, the voter can verify whether the casting is correct before the tally. □
The recovery process ${\mathcal{F}_{\textit{Rec}}}$ is defined based on the result of Theorem 4, and proved in Theorem 5.
Proof.
Theorem 4 proves that the voter can verify the ballot casting before the tally phase. Furthermore, ${\mathcal{F}_{\textit{Rec}}}$ models an honest voter during the casting phase. We define a hybrid model where ${\mathcal{F}_{\textit{Rec}}}$ interacts with parties corrupted by the adversary. According to ${\mathcal{F}_{\textit{Rec}}}$, every time the voter tries to cast with a corrupted party, ${\mathcal{F}_{\textit{CastObs}}}$ outputs a 0 and ${\mathcal{F}_{\textit{Rec}}}$ tries with a different authority. In a proof by contradiction, let us assume that the adversary has prevented the voter from performing the recovery. This means that the adversary has been able to modify the behaviour of the ideal functionality ${\mathcal{F}_{\textit{Rec}}}$ to output a 0. By its definition, this only happens if all the casting authorities are corrupted and performing the DoC. However, this contradicts assumption SA4, where at least one party participates in the casting. □
Recovery depends on at least one party participating in the casting (SA4). This is included as a security assumption for the sake of completeness. However, it is reasonable to assume that, among all the candidates in the elections, there is at least one party willing to complete the process. Note that this party can still behave as a covert adversary. In addition, a DoC is performed without knowing the identity of the voter or the content of the ballot. Thus there is little incentive for all the parties to perform the DoC other than to disrupt the elections.
Note that the recovery mechanism, where voters may send their shares again to a different party, does not imply any linkability risk. The transmission is private (SA7), and the content received by the party cannot be linked to the voter, independently of the amount of shares a party receives.
The dispute resolution of DCast relies on honest voters performing the recovery. In other words, in case of blocking, the voter does not have evidence to prove the block to a third party. This is the reason why DiReCT dismisses all claims in the denial of casting. The dispute resolution is achieved by combining this dismissal with the fact that voters can always detect and recover from the blocking (Theorem 5).
6.4 Tally
We define two ideal functionalities related to the tally. ${\mathcal{F}_{\textit{TallyObs}}}$ represents the recorded-as-cast universal verifiability. Given a ballot that has been correctly cast with the corresponding public receipt (${\mathcal{F}_{\textit{CastObs}}}({\text{Casting}_{j}})$ with output 1), any entity with access to the log alone can verify the tally. The functionality aborts if not enough receipts were issued to reconstruct the vote. It outputs $(0,\text{NotSharing},[{\text{party}_{x}}])$ listing the parties that did not publish a share despite having a cast receipt; $(0,\text{ShareNotFromReceipt},[{\text{party}_{x}}])$ if a published share does not match its receipt; $(0,\text{ShareNotFromP})$ if the shares pass all the checks but do not account for the ballot; and $(1,{\textit{SBallot}_{j}},{\textit{vote}_{j}})$ if the tally is successful and it is possible to retrieve the vote.
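The output logic of ${\mathcal{F}_{\textit{TallyObs}}}$ for a single ballot can be sketched as follows. The share commitment is simplified here to a plain SHA-256 of the share (the protocol chains it with H-P), and all names are illustrative:

```python
import hashlib

def tally_obs(receipts, published_shares, reconstruct):
    """Sketch of F_TallyObs for one ballot. `receipts` maps party -> share
    commitment from PBB:cast; `published_shares` maps party -> share posted
    in PBB:tally; `reconstruct` rebuilds P from the shares or returns None."""
    missing = [pt for pt in receipts if pt not in published_shares]
    if missing:
        return (0, "NotSharing", missing)
    bad = [pt for pt, sp in published_shares.items()
           if hashlib.sha256(sp).digest() != receipts[pt]]
    if bad:
        return (0, "ShareNotFromReceipt", bad)
    ballot = reconstruct(list(published_shares.values()))
    if ballot is None:
        return (0, "ShareNotFromP")
    return (1, ballot)
```

Every failing branch names the misbehaving parties, which is the individual accountability that the tally proofs rely on.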
${\mathcal{F}_{\textit{Tally}}}$ models the behaviour of an honest party during the tally. Only shares with the corresponding receipts (${\mathcal{F}_{\textit{CastObs}}}$ output 1) are considered. It builds on ${\mathcal{F}_{\textit{TallyObs}}}$ and outputs 1 when the tally is correct, 0 if there is an intentional problem, or aborts if not enough shares have been cast.
Recorded as Cast (DTally)
The DTally dispute arises when a ballot is not recorded. Theorem 5 proved that the user can recover from cast-blocking. This subsection builds on that result, leaving the adversary only the option of blocking the recording of the ballot during the tally. The dispute resolution is composed of two attacks: the interception of the ballot by the adversary (blind and selective tally interception); and a false claim from a corrupt voter (false tally interception claim).
In the blind tally interception, the adversary's goal is to randomly block some ballots from being counted with plausible deniability.
In the selective tally interception, the adversary’s goal is to learn the vote direction of a share in order to block it, instead of blindly blocking some ballots from the tally.
We prove that DiReCT is robust against blind (Theorem 6) and selective (Theorem 7) tally interception.
Proof.
The adversary $\mathcal{M}$ can perform the blind tally interception through two actions:
• $\mathcal{M}$ does not send its share: any party or external entity can execute ${\mathcal{F}_{\textit{TallyObs}}}$, which only requires access to the public log and the receipt from the casting. The receipt is in the public log as defined in the attack (${\mathcal{F}_{\textit{CastObs}}}({\text{Casting}_{j}})=1$). The result of ${\mathcal{F}_{\textit{TallyObs}}}$ is $(0,\text{NotSharing},[{\text{party}_{x}}])$, with $\mathcal{M}\in [{\text{party}_{x}}]$.
• $\mathcal{M}$ sends a different share: similarly, any entity can call ${\mathcal{F}_{\textit{TallyObs}}}$, which outputs $(0,\text{ShareNotFromReceipt},[{\text{party}_{x}}])$.
In both cases, ${\mathcal{F}_{\textit{TallyObs}}}$ detects the attack and identifies which party performed the interception (individual accountability). □
Proof.
The selective tally interception is an extended version of the blind tally interception. By universal composability, if DiReCT offers individual accountability against the blind tally interception, it also offers it against any attack that builds on it. □
By combining the previous results, Corollary 8 demonstrates the record-as-cast property in the protocol.
Proof.
Record-as-cast states that any ballot correctly cast ends up in the final tally. The definition of ${\mathcal{F}_{\textit{Tally}}}$ summarizes the process: if a ballot has enough receipts, either the parties exchange the shares and perform the tally, or the non-collaborating party is identified (Theorems 6 and 7). Under a covert adversary threat model (SA2), all cast ballots are then recorded successfully. No other attack is possible, since all that is needed for the final tally is access to the PBB. □
The following ideal functionality ${\mathcal{F}_{\textit{Vote}}}$ models an honest voter performing all the steps to vote in DiReCT. It interacts with the public log, the k casting authorities ${[{A_{\textit{i}}}]_{i=1}^{k}}$, ${\mathcal{F}_{\textit{CertObs}}}$ and ${\mathcal{F}_{\textit{Rec}}}$. The definition of ${\mathcal{F}_{\textit{Vote}}}$ is needed to describe, in simpler terms, attacks on the results of the elections.
To perform a false claim, the adversary still needs to cast a message that will pass ${\mathcal{F}_{\textit{CastObs}}}$. For example, if the adversary tries to send a fake share that does not match the commitment, the parties will not publish the receipt and the share will be discarded from the tally.
To complete the resolution of the DTally dispute, Theorem 9 demonstrates that the false interception claim cannot be performed:
Theorem 9.
A ${\textit{party}_{j}}$ being subject to a false tally interception claim has universally verifiable evidence of its honesty.
Proof.
In the attack, the adversary calls ${\mathcal{F}_{\textit{Vote}}}$ with output 1. Thus, ${\text{party}_{j}}$ can call ${\mathcal{F}_{\textit{Tally}}}$ with the adversary's fake material. The output will be $(0,\text{ShareNotFromP})$, which means that all the submitted material matches the signed commitments, but the shares do not reconstruct P. After all parties have shared their votes, any third party can perform the same steps as ${\mathcal{F}_{\textit{Tally}}}$ and arrive at the same conclusion, which is universally verifiable evidence of the party's honesty. □
6.5 Election Security
${\mathcal{F}_{\textit{Eleg}}}$ models the universal verification of the eligibility in the elections, which can be performed after the tally by any external auditor, party or voter. ${\mathcal{F}_{\textit{Eleg}}}$ outputs 1 when only members of the census voted, 0 otherwise.
The ballot stuffing hybrid protocol describes how an adversary may attempt to generate ballots to alter the elections, and Theorem 10 proves how the covert adversary is identified in DiReCT.
Theorem 10.
An adversary cannot perform ballot stuffing surreptitiously. If the adversary performs the attack, it is detected and attributed correctly.
Proof.
${\mathcal{F}_{\textit{Eleg}}}$ detects a mismatch between the number of ballot certification responses and the number of votes. To perform the attack without being detected, the adversary needs to modify the behaviour of ${\mathcal{F}_{\textit{Eleg}}}$ and obtain an output of 1.
${\mathcal{F}_{\textit{Eleg}}}$ compares the number of CertRes with the votes in the tally. By assumption SA5, all the voters that start the certification complete the cast process. In addition, Corollary 8 proved that all cast ballots are recorded (record-as-cast), thus ${\mathcal{F}_{\textit{Eleg}}}$ outputs 1 under normal conditions. The adversary cannot corrupt the census (SA2) or obtain extra certificates from it, nor can it fake CertRes.
If the adversary performs ballot stuffing, there will be more ballots counted than ballots certified: ${[{\text{CertRes}_{i}}]_{i=1}^{x}},{[{\text{vote}_{j}}]_{j=1}^{y}}\longrightarrow x\ne y$.
The adversary can only perform the ballot stuffing unnoticed if it blocks ballots from honest voters. However, as has been proved, the adversary cannot block ballots during the casting (Theorem 5) or during the tally (Corollary 8), nor can it manipulate CertRes (Theorem 3).
Since the only entity able to sign the ballot is the VA, any mismatch on the tally can be individually accounted to a corrupt VA. □
Proof.
${\mathcal{F}_{\textit{Eleg}}}$ explains how to universally verify that only voters in the census voted, that they only voted once, and that the amount of ballot certification responses matches the amount of votes. Theorem 10 proves that ballot stuffing cannot be performed surreptitiously. With the assumption that $\mathcal{M}$ is a covert adversary (SA2) the attack cannot be performed. This implies that eligibility verifiability is achieved in DiReCT. □
7 Complexity and Scalability Analysis
We analyse the scalability of the protocol in two parts. First, we conduct a complexity analysis, measuring the number of messages sent by each voter as well as the total number of messages exchanged, and compare these figures with other voting protocols. Second, we evaluate the storage requirements in the PBB, using some example values for reference.
The voter carries out the following steps: she sends a certification request to the PBB (Equation (9)); she fetches the certification result from the PBB (Equation (10)); she casts (distributes) the shares to the k tally authorities (Equation (13)); and she retrieves the casting receipts from the PBB (Equation (15)). These steps result in a total of $3+k$ messages per voter.
The overall number of messages in the election consists of the messages exchanged by voters and the voting authority during ballot casting, as well as those exchanged among the tally authorities. During ballot casting, the system must handle $v(k+8)$ messages, where v is the number of voters. During the tally, $3k$ additional messages are required: first to send the collected shares; and, then, to post the final count results. In total, the election involves $v(k+8)+3k$ messages.
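The message counts above can be checked mechanically; the example values for v and k are illustrative:

```python
def voter_messages(k: int) -> int:
    """Messages per voter: Equations (9), (10), (13) (one per authority), (15)."""
    return 3 + k

def total_messages(v: int, k: int) -> int:
    """Casting traffic plus the 3k tally messages exchanged by the authorities."""
    return v * (k + 8) + 3 * k

# Example election: one million voters and ten tally authorities.
assert voter_messages(10) == 13
assert total_messages(10**6, 10) == 18_000_030
```

The total stays linear in v for a fixed number of authorities, which is what makes the protocol scale with the electorate.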
As a comparison, Table 3 presents a summary of voter’s messages and total amount of messages of different voting protocols. Note that differences in the amount of messages account for the different entities and primitives involved in each protocol. For example, protocols such as Cramer et al. (1997), Yang et al. (2018), Chen et al. (2008), Yang et al. (2020), Gao et al. (2019), Larriba et al. (2021), Larriba and López (2022) and our solution involve multiple parties in the voting process, which improves the security guarantees against an adversary colluding with a voting authority. As discussed in the related work, depending on the underlying technology, each multi-party protocol may have additional tradeoffs apart from the total amount of messages: a homomorphic solution may require additional zero knowledge proofs to guarantee the verifiability of the results, and it can be difficult to protect the privacy of the voters in a system implemented on a blockchain.
However, when comparing our solution with the rest of the electronic voting protocols, we want to emphasize that only a subset of the existing protocols address the resolution of disputes during the elections (Basin et al., 2020; Bougon et al., 2022; Larriba and López, 2022), as covered by Table 1, and that our protocol is the only one that considers the resolution of disputes related to vote suppression while providing individual accountability through the whole election process. Therefore, our protocol effectively improves the security of elections against covert adversaries colluding with voters or authorities.
Table 3
Summary of messages sent by the voter and the total of messages for voting protocols and the primitives used. k represents the number of authorities in multi-party protocols, and v the number of processed votes. The * symbol represents systems deployed using blockchain. The symbol † represents systems that combine e-voting and physical booths. The system Themis does not define the details of the encryption used or the tally method, represented by ‡.
| Protocol | Voter's messages | Total messages | Cryptographic primitives |
| --- | --- | --- | --- |
| Chaum (1981) | 2 | $2v$ | Blind signatures |
| Li et al. (2009) | 5 | $3v$ | Blind signatures |
| Nguyen Thi and Dang (2013) | 4 | $7v$ | Blind signatures |
| Larriba et al. (2020) | 3 | $2v+1$ | Blind signatures |
| Cramer et al. (1997) | 1 | $v+k$ | Homomorphic prop. |
| Yang et al. (2017, 2018) | 2 | $2(v+k)$ | Homomorphic prop. |
| Tornos et al. (2014) | 3 | $3v+1$ | Ring signatures |
| Chen et al. (2008) | 2 | $2+v+2k$ | Ring signatures |
| Yang et al. (2020)* | 2 | $2k$ | Homomorphic prop. |
| Gao et al. (2019)* | 2 | $v$ | Ring signatures |
| Larriba et al. (2021)* | 2 | $2+v$ | Ring signatures |
| Basin et al. (2020) | 2 | $4v+1$ | Homomorphic prop. |
| Bougon et al. (2022)† | 16 | $26v+{\text{Tally}^{\mathrm{\ddagger }}}$ | – |
| Larriba and López (2022) | $1+k$ | $2k\cdot v$ | Blind signatures |
| This work | $3+k$ | $v(k+8)+3k$ | Blind signatures |
Table 4
Example storage requirements for parameters of DiReCT.
| Name | Instance | Example value |
| --- | --- | --- |
| Tally authorities | k | 10 |
| Hash | SHA-256 | 256B |
| Signature | RSA-2048 | 256B |
| Certificate | X.509 | 3KB |
| Timestamp | 64 bits | 8B |
| ID voter | $\log (v)$ | 20B |
| ID PBB | $2\ast \log (v)$ | 21B |
| ${\text{SP}_{j}}$ | $(x,y)$ | 8B |
| $\text{H-P}$ | Hash | 256B |
| ${\text{H-SP}_{j}}$ | Hash | 256B |
| $\textit{BBallot}$ | $(k+1)\cdot \text{Hash}$ | 2816B |
| $\textit{SBallot}$ | $(k+1)\cdot (\text{Signature}+\text{Hash})$ | 5632B |
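The derived entries in Table 4 follow directly from the primitive sizes; a quick check, using the table's convention of counting a SHA-256 digest as 256B:

```python
HASH, SIG = 256, 256   # bytes per hash and per RSA-2048 signature (Table 4)
k = 10                 # tally authorities

bballot = (k + 1) * HASH            # BBallot: ballot commitment plus k share commitments
sballot = (k + 1) * (SIG + HASH)    # SBallot: each commitment with its signature

assert bballot == 2816 and sballot == 5632   # matches Table 4
```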
Regarding storage, DiReCT relies on the PBB to maintain the election state. Each entry in the PBB corresponds to a distinct phase of the protocol: certification request, certification response, casting receipt, tally shares, and final count. The estimation of the storage is collected in Table 5. The storage requirements of each entry are determined by the cryptographic primitives and identifiers involved, as detailed in Table 4.
For instance, a certification request (CertRq) contains the voter identity, a blind ballot signature, the voter's certificate, and a timestamp. Using RSA-2048 signatures (256B), SHA-256 hashes (256B), and X.509 certificates (∼3KB), this results in an entry of approximately 3.7KB. Certification responses (CertRes) are much smaller (around 305B each), since they only include the signed blind ballot, a reference to the original request, and a timestamp. Casting receipts ($\text{Casting}$), which aggregate multiple signed shares, take around 3.2KB each, while tally shares ($\text{Tally}$) and final count entries are compact (57B and 220B, respectively).
Using these figures, we evaluate the storage requirements of a full election with $v={10^{6}}$ voters and $k=10$ tally authorities. The total storage is given by
\[ v\cdot (\text{CerRq}+\text{CerRes}+k\cdot \text{Casting})+k\cdot v\cdot \text{Tally}+\text{Count},\]
which corresponds to approximately 24.5 GB of PBB data, a size well within the capacity of modern servers and, with hash-based storage, compatible with distributed ledger architectures.
Table 5
PBB storage evaluation, per type of entry and in total. Three size values are shown depending on the number of voters in the election. The size is calculated with the example values displayed in Table 4, with ten tally authorities $(k=10)$.
| PBB Entry | Content | ${10^{3}}$ voters | ${10^{6}}$ voters | $5\ast {10^{6}}$ voters |
| --- | --- | --- | --- | --- |
| $\text{CerRq}$ | v, $\langle {\operatorname{Sign}_{v}}(\textit{BBallot}),{\text{CERT}_{v}}\rangle $, T | 3.7KB | 3.7KB | 3.7KB |
| $\text{CerRes}$ | $\text{VA}$, $\langle {\operatorname{Sign}_{\text{VA}}}(\textit{BBallot}),{\text{CerRq}_{v}}\rangle $, T | 285B | 305B | 311B |
| $\text{Casting}$ receipt | ${\text{party}_{i}}$, $\langle {\text{H-SP}_{j}},{\textit{SBallot}_{j}}\rangle $, T | 3.2KB | 3.2KB | 3.2KB |
| $\text{Tally}$ | ${\text{party}_{i}}$, $\langle {\text{SP}_{i}},{\text{H-SP}_{i}},\textit{SBallot}\rangle $, ${T_{4}}$ | 37B | 57B | 63B |
| Count | $[{\text{party}_{i}},\text{count}]$ | 110B | 220B | 253B |
| Total | $v\ast (\text{CerRq}+\text{CerRes}+k\ast \text{Casting})+k\cdot v\cdot \text{Tally}+\text{Count}$ | 34.6MB | 24.5 GB | 170.7 GB |
8 Conclusions
This work presented DiReCT, an electronic voting protocol that improves the current state of the art in terms of accountability. In previous definitions of dispute resolution for elections conducted by multiple parties, the requirement was to detect when any of the parties misbehaves (Basin et al., 2020), or to partially account for them (Bougon et al., 2022). We extended this requirement by providing individual accountability, which allows the identification of the misbehaving party. Without individual accountability, a malicious party could abuse the dispute resolution with impunity, diluting the blame among the rest of the honest parties in the election.
In our protocol, the timeliness guarantee described by Basin et al. (2020) is effective during the casting, not at the end of the election. It is our intention to emphasize the importance of the verification timeliness in electronic voting. Ensuring that certain attacks can be detected during the elections is crucial for voters, who may be able to recover and continue voting.
Furthermore, DiReCT is the first protocol that addresses dispute resolution in the case of vote suppression. Although this attack is often overlooked in the literature, its consequences are as effective as those of other election manipulation attacks.
With this work we highlight the benefits of including dispute resolution as part of voting protocols and showcase that it is possible to achieve it in a multi-party protocol. The threat model includes covert adversaries, which we believe is a realistic assumption for voting protocols and one that combines nicely with the verification and accountability focus of DiReCT. Regarding the complexity and scalability analysis, although the number of messages needed by DiReCT grows with the number of tally authorities, it remains linear in the number of processed votes. Our analysis shows a reasonable estimated storage requirement in the PBB (Table 5), needing only 24.5 GB for an election with one million voters. As future work, we will address the formal verification of our results. We will also explore the use of zero-knowledge proofs as in Iovino et al. (2020), in particular regarding the benefits and trade-offs in terms of accountability.