Signatures are primarily used as a mark of authenticity, to demonstrate that the sender of a message is who they claim to be. In the current digital age, signatures underpin trust in the vast majority of information that we exchange, particularly on public networks such as the internet. However, schemes for signing digital information, which are based on assumptions of computational complexity, are facing challenges from advances in mathematics, the capability of computers, and the advent of the quantum era. Here, the authors present a review of digital signature schemes, looking at their origins and where they are under threat. Next, the authors introduce post-quantum digital schemes, which are being developed with the specific intent of mitigating threats from quantum algorithms while still relying on digital processes and infrastructure. Finally, the authors review schemes for signing information carried on quantum channels, which promise provable security metrics. Signatures were invented as a practical means of authenticating communications, and it is important that the practicality of novel signature schemes is considered carefully; this practicality is a common theme throughout this review.

Physical signatures are marks made to identify or authenticate the creator of a message or artifact. Their precise origins are lost to history, but they are associated with some of the earliest records of pictographic scripts, dating back at least 5 millennia.1 The information and telecommunication revolution in the second half of the 20th century would not have happened without a practical means to authenticate messages, which led to the invention of digital signature schemes.

Schemes for signing digital information are a direct, albeit stronger, analogue to physical signatures; they seek to ensure (i) authenticity of any claim regarding a message sender's identity, (ii) that the message has not been altered by any parties since the signing, and (iii) that the sender cannot refute that it was indeed them who signed the message. In Sec. II, we review the origins of digital signatures and explore some of the important bases for signatures in the modern world. From here, we review their applications and vulnerabilities associated with assumptions made in their foundations, while discussing a select few ways in which signatures have evolved for standardization and specific use cases.

Post-quantum cryptography focuses on building classical algorithms whose security is resistant to known capabilities of quantum algorithms. Post-quantum signature schemes build upon early work in digital signatures. In Sec. III, we review progress made in this field and look closely at the resources required to implement these emerging algorithms.

It has been shown that algorithms using information encoded on quantum states can be used for secure communication protocols that are not dependent upon unproven assumptions, but instead are provably secure within the laws of physics themselves. Section IV discusses the application of quantum information and communications to signature schemes.

The pursuit of secure digital signature schemes was of great importance in 20th-century cryptographic research. Digital signatures are considered to be a cryptographic primitive with widespread application and use, with legal standing in some jurisdictions. Like their physical counterparts, digital signatures are indeed used to authenticate the sending of a message, but their strength as a primitive protocol does not end here. Since their introduction in 1976 by Diffie and Hellman,2 further applications have been found in the building of secure distribution schemes, digitally processed financial transactions, cryptocurrencies,3 and more. It is known that a primitive analogue to digital signatures was developed decades before Diffie and Hellman made their public contributions, with the earliest known notion of authentication by some form of digital signature being a challenge-response mechanism used by the US Air Force to identify friendly aircraft, as far back as 1952.4 Remarkably, even national identity and national government systems can be built with digital signatures at their core, as witnessed in Estonia's use of blockchain-style security for their Identity Card system and eResidency scheme offered to international visitors and investors.5 

In Sec. II, we define digital signature schemes and their relevant security notions, as well as providing formal schematics of simple implementations of such schemes from the literature.

Definition 1 (Digital Signature Scheme). A digital signature scheme is a cryptographic protocol consisting of two distinct algorithms:

  • A signing algorithm, in which the signing party (Alice), given a message (and typically a private key), produces a signature

  • A verification algorithm, in which, given the message and signature, the verifier (Bob) either accepts or rejects Alice's claim of authenticity of the message.

Digital signatures can fall into one of the following two categories, based on parties involved:

  • True signatures: Requiring only two parties, Alice (the signer) and Bob (the receiver), true signature schemes involve the transmission of information directly from Alice to Bob, typically in the form of a message-signature pair and most often using asymmetric key cryptosystems (public key cryptography).

  • Arbitrated signatures: Requiring a trusted third party, Charlie (the arbiter), this type of scheme involves two distinct rounds of communications: Alice's communication to Charlie and Charlie's communication to Bob. In this setup, Charlie provides verification to Bob, and the landscape is opened up for the use of symmetric key cryptosystems (private key cryptography).

Digital signature schemes are typically preceded by some form of key generation (and distribution if necessary), allowing us to express all signature schemes in terms of the following three steps:

  • GEN: A key generation algorithm producing a private key (or a set of private keys) and, if necessary, public keys.

  • SIGN: Signature generated with a signing algorithm and sent to Bob.

  • VER: Bob receives the signature and follows a verification algorithm before deciding whether or not to trust Alice's claim.
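The three steps above can be made concrete with a minimal hash-based one-time scheme in the style of Lamport (a construction not detailed in this section; all names and parameters here are illustrative, and the scheme is only a sketch, secure for a single use at most):

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()
N = 256  # we sign the 256-bit SHA-256 digest of the message

def gen():
    # GEN: the private key is 2*N random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(N)]
    pk = [[H(s) for s in pair] for pair in sk]
    return sk, pk

def bits(msg):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(N)]

def sign(sk, msg):
    # SIGN: for each digest bit, reveal the matching secret.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def ver(pk, msg, sig):
    # VER: each revealed secret must hash to the published value.
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = gen()
sig = sign(sk, b"transfer 100 coins")
assert ver(pk, b"transfer 100 coins", sig)
assert not ver(pk, b"transfer 999 coins", sig)
```

Note that each key pair must be used only once: signing two different messages reveals secrets from both halves of the key, enabling forgery.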

For any signature scheme to be considered secure and trustworthy for use, we require the scheme to provide the following under any and all conditions:

  1. Authenticity: The receiver, Bob, when accepting a signature from Alice, is convinced that the author of the message was indeed Alice.

  2. Integrity: The receiver, Bob, can have faith that the message has not been altered since it left Alice.

  3. Nonrepudiation: Once a genuine signed message has left Alice, she has no way to convince Bob that she was in fact not the author.

One more property often sought in signature schemes, but not strictly required for security, is for the signature to be transferable. A signature scheme satisfying the above three conditions will convince Bob that Alice is indeed the author of the message, but transferability provides the ability for Bob to convince a third party, Charlie, that the message is indeed from Alice, without compromising the security of the system.

Attacks on signature schemes are known to typically fall into one of the four following categories:

  1. Key-only attack: An adversary, Eve, knows only the public key of Alice (the signer).

  2. Known-signature attack: Eve has access to Alice's public key and message-signature pairs produced by Alice.

  3. Chosen-message attack: Eve may choose a list of messages (m1,m2,...,ml), which she will ask Alice to sign.

  4. Adaptively chosen-message attack: As above, but Eve may adaptively choose each message based on the message-signature pairs returned for previously signed messages, allowing her to perform cryptanalysis with greater precision.

We may describe the level of success achieved by Eve, from the greatest success to least success, as follows:

  1. Secret key knowledge: Eve discovers all the secret information (typically Alice's secret key).

  2. Universal forgery: Eve is able to forge the signature of any message, but lacks the secret key itself.

  3. Selective forgery: Eve can forge the signature for some messages of her choosing, but cannot do this arbitrarily.

  4. Existential forgery: Eve may forge the signature for at least one message, but lacks the ability to choose this message from the set of all possible messages.

  5. Failure: Eve finds out nothing about the secret information and fails to forge a convincing signature for any message.

Clearly, Eve achieving universal forgery or above would render the signature scheme completely invalid, as she could go on to convince Bob (and other parties) that Alice has signed any message (or at least fail to be rejected with utmost confidence). When discussing full security for a signature scheme, it is typical to demand that it does not allow any form of success, i.e., not even existential forgery, under any computing assumptions (or none).

That existential forgery is considered not permissible may seem a somewhat “strong” requirement; we could easily suppose that, given Eve's inability to choose a message, we could simply require a very large message space and propose that a message containing “gibberish” would not be accepted by Bob. In the case of sending email communications, this may, at first, seem suitable. It seems reasonable for Bob to expect Alice's message to make sense in their chosen language, and given Eve has no control over the message contents, we might expect her to have difficulty randomly selecting a perfectly coherent message. However, given a scenario in which Alice is simply sending a number, related to an amount in currency she is requesting Bob to send her, an existential forgery would carry great threat! Eve might not be able to choose a precise amount, but it would be hard for Bob to label a string of integers as nonsensical.

With research in digital signatures growing alongside research in public key cryptography,4 the majority of well-known and well-studied signature schemes arise from public key cryptosystems. These typically rely upon certain mathematical assumptions about the hardness of problems (signature schemes based on symmetric encryption are generally reserved for arbitrated setups). An often-seen method of building a signature scheme is as follows: find some public key cryptosystem based on one-way functions or trap-door functions, generate a signature using Alice's private key in the system, and allow any party to verify that Alice indeed sent the message using the publicly shared encryption key. Well-known cryptosystems used in such a way include RSA,6 ElGamal,7 Rabin,8 and Fiat–Shamir.9 We remark that (in a simplified manner) the main property distinguishing a one-way function from a trap-door function is the existence of trap-door knowledge, some secret that allows the usually hard-to-invert function to become easily invertible.

1. Modular exponentiation and the RSA cryptosystem

A variety of trap-door functions can be built by performing exponentiation modulo n, depending on how we choose n. The simple act of squaring modulo n, where n = pq for some primes p, q, forms a trap-door function in which the trap-door knowledge is the prime factors (p and q). We can build further trap-door functions by carefully choosing the exponent. Again working modulo n such that n = pq for large primes p, q, if we choose some e such that e is coprime with ϕ(n) = (p − 1)(q − 1) (the Euler totient function of n), we find that for any given x,

f(x) = x^e mod n

is a trapdoor function, where the trapdoor is once again the prime factors p, q. This forms the basis of the RSA cryptosystem. While the work of Diffie and Hellman in 1976 may have built the theoretical bench on which research could seek to implement digital signatures, it was a later paper by Rivest, Shamir, and Adleman that first exemplified a proof-of-concept on top of this workbench. The celebrated RSA paper6 published in 1978 marked an early showcasing of asymmetric cryptosystems, establishing the idea of public-key cryptography in a format that is in widespread use today. Relying on exponentiation under some n = pq for large primes p and q, the RSA cryptosystem can be used to send encrypted data securely (under assumptions), and the same methodology can be used to implement a signing algorithm securely. For the basis of an RSA-implemented cryptosystem, private keys and public keys must be created for use in the trap-door function, all of which is formalized as follows:

  • Trap-door function: In the case of RSA, we take the trap-door function on some bit-string m to be

    RSA_E(m) = m^e mod n.

    Call RSA_E the RSA encryption function using key e and RSA_D the RSA decryption function using key d, defined below. We require that n = pq for some large primes p and q and choose e such that for ϕ(n) = ϕ(pq) = (p − 1)(q − 1), we have gcd(e, ϕ(n)) = 1, where gcd means the greatest common divisor and 1 < e < ϕ(n).

  • Public key: RSA takes as its encryption key the chosen value e above; e is part of the public information in the cryptosystem, along with our chosen n for modular arithmetic.

  • Decryption key: One calculates the decryption key d by determining the multiplicative inverse of e mod ϕ(n), i.e., determining d ≡ e^(−1) (mod ϕ(n)). The decryption exponent, along with the prime factors p, q of n, is kept secret. d can be easily calculated when p and q are known.

We denote a message as m, with C being the resulting ciphertext following encryption on m and S a signature (generated from a message m). The leading motivation for the RSA cryptosystem is its use for easy encryption of a message, as shown in Fig. 1. Anyone with knowledge of the publicly shared information (e and n) can easily encrypt a message m by computing C = m^e mod n. The intended recipient of the secret message, Alice, can easily recover m = C^d mod n, and anyone lacking knowledge of d who intercepts C will struggle to find m from just the public knowledge. Loosely, the “hardness” of recovering m without d relies upon the hardness of discovering d without knowledge of p and q. We then see that, really, the security of the RSA cryptosystem reduces down to the intractability of factoring n into its prime factors p, q. This form of problem reduction is seen throughout digital signatures and indeed all of cryptography.
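The key setup and encryption round trip described above can be traced end to end with deliberately tiny (and therefore completely insecure) parameters; a sketch:

```python
# Toy RSA with artificially small primes; real deployments use
# moduli of 2048 bits or more.
p, q = 61, 53
n = p * q                # 3233, the public modulus
phi = (p - 1) * (q - 1)  # 3120, kept secret
e = 17                   # public exponent, with gcd(e, phi) = 1
d = pow(e, -1, phi)      # private exponent: d = e^(-1) mod phi(n); here 2753

m = 65                    # message, encoded as an integer < n
C = pow(m, e, n)          # encrypt: C = m^e mod n
assert pow(C, d, n) == m  # decrypt: m = C^d mod n
```

Anyone holding only (e, n) can encrypt; recovering d without p and q is what the factoring assumption makes hard.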

Fig. 1.

Schematic demonstrating communications secured using the RSA protocol. Bob encrypts (E) a (private) message m, using the RSA_E encryption function and the public key, e. The encrypted message, C, can now be sent publicly to Alice, who uses the RSA_D decryption function and the private key d, to retrieve m. Circles represent information that can be presented publicly, while diamonds must remain private.


While the above demonstrates RSA's use as a tool to allow any party to transmit secret messages to a recipient Alice, it is easy to use the same tools to allow Alice to sign a message that can be verified by any party (Fig. 2). Suppose Alice seeks to send a message, m, to some party Bob who wishes to verify that this message was not sent by some third party. This can be achieved by both parties carrying out the following:

  1. Utilizing her secret decryption key, Alice can now compute RSA_D(m) = m^d mod n = S_m.

  2. Alice sends the message m to Bob, along with the associated signature, S_m.

  3. Bob simply calculates RSA_E(S_m) = S_m^e mod n = (m^d)^e mod n = m^(d·e) mod n = m.

From this, it is clear that Bob can be convinced that only Alice (or someone with Alice's secret decryption key d) could have sent this message. As e and n are public knowledge, any other party may also be convinced of this, allowing transferability of the signature.
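The three signing steps can be run with the same style of toy parameters (again far too small for real security):

```python
# Toy RSA signing; parameters are illustrative only.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                       # public verification exponent
d = pow(e, -1, phi)          # Alice's secret decryption key

m = 1234                     # message, encoded as an integer < n
S_m = pow(m, d, n)           # 1. Alice signs: S_m = m^d mod n
# 2. Alice sends (m, S_m) over a public channel.
assert pow(S_m, e, n) == m   # 3. Bob verifies: S_m^e mod n == m

# A tampered message no longer matches the signature.
assert pow(S_m, e, n) != m + 1
```

Because verification uses only the public pair (e, n), any third party can repeat Bob's check, which is the transferability noted above.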

Fig. 2.

Schematic demonstrating message (m) signing using the RSA protocol. Alice computes a signature S using the RSA_D decryption function, her private key d, and a (public) message m. Both m and S can now be sent to Bob via a public channel. Bob can now compute m′, to be stored privately, using the RSA_E encryption function, the retrieved signature S, and the public key e. If m′ matches m closely (according to some predetermined error rate), the signature is accepted as valid. Otherwise, Alice is not accepted as the author of m. Circles represent the information that can be presented publicly, while diamonds must remain private. Note the public and private variants of the message on Bob's end of communications. The private m′ is calculated from the signature, and its value is checked against the publicly sent m.


This (simplified) view of RSA demonstrates many fundamentals of digital signatures within a classical framework: The need for a cryptosystem whose security we have good reason to believe in, even if it is not provable (the assumption of intractability presents issues for provable security, see Sec. II F), the ability to use this cryptosystem for signing (or at least modifying the cryptosystem for signing), and the need to ensure the three pillars of security for digital signatures: authenticity, integrity, and nonrepudiation, while also hoping for the (at times less essential) property of transferability.

2. A note on symmetric digital signatures

While much of this section, along with the literature, focuses on asymmetric signature schemes whose roots lie in public key cryptosystems, this does not mean symmetric signature schemes arising from private key cryptosystems have no value in research or application. Given the context (Alice signing a message, Bob verifying), it is easy to see why a private-key cryptosystem utilizing the same (symmetric) key for both encryption and decryption is considered a weak arrangement for digital signatures: that both Alice and Bob use the same encryption key means either party can imitate the other. As long as Bob knows the encryption transformation used by Alice, he can always use the private key to generate a signature to deceive an unwitting third party, Charlie. The sought-after property of transferability is clearly lost. The potential applications for (secure) symmetric signatures are much smaller than those of asymmetric signatures. Both parties must trust each other not to be deceitful, which is far from practical for most settings (especially given that the parties may have no knowledge of each other prior to the signing and verification). However, such systems are in use: for financial institutions, they can be very beneficial, as they are (often) less computationally taxing than their public key counterparts, and in a scenario where neither party has any reason to doubt the other's intentions [such as an automated teller machine (ATM) communicating with its parent financial institution], they can be used to great effect.
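The loss of nonrepudiation in the symmetric setting is easy to demonstrate. With a shared-key message authentication code (here the standard HMAC construction, used purely as an illustration of symmetric authentication), Bob can "sign" just as convincingly as Alice:

```python
import hashlib
import hmac

shared_key = b"known to both Alice and Bob"

def tag(msg: bytes) -> bytes:
    return hmac.new(shared_key, msg, hashlib.sha256).digest()

# Alice authenticates a message to Bob...
alice_msg = b"from Alice"
alice_tag = tag(alice_msg)
assert hmac.compare_digest(alice_tag, tag(alice_msg))

# ...but Bob, holding the same key, can forge a valid tag on any message
# and attribute it to Alice; a third party cannot tell who produced it.
forged = tag(b"Alice owes Bob 1000")
assert hmac.compare_digest(forged, tag(b"Alice owes Bob 1000"))
```

Within a trusting pair (or via an arbiter holding the key), this is cheap and effective; as a transferable signature, it proves nothing about authorship.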

1. Modular squaring

The function x ↦ x² mod n for some x and n = pq, given p, q are prime, forms a trap-door function in which the trap-door information, as in RSA, is knowledge of the prime factors p, q of n. Rabin8 introduced a cryptosystem whose core relies on squaring under modular arithmetic as a trap-door function. The use of Rabin's system, along with some hashing function, can produce a signature scheme with certain advantages over RSA. It is unknown whether breaking RSA is actually as difficult as factoring: the conjecture that it is, known as the RSA assumption, remains unproven. The possibility that breaking RSA reduces to some problem easier than factoring is of concern. However, Rabin proved that breaking his cryptosystem is as difficult as the factoring problem. Thus, unlike with RSA, cryptographers can trust in its security for as long as factoring remains intractable.
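The trap-door nature of modular squaring can be seen directly. With p, q ≡ 3 (mod 4) (a common convenience in Rabin-style systems), square roots modulo p and q each come from a single exponentiation, and the Chinese Remainder Theorem recombines them; without the factors, no comparably efficient route is known. A toy sketch (parameters illustrative only):

```python
p, q = 7, 11            # toy primes with p, q = 3 (mod 4)
n = p * q

x = 9
c = pow(x, 2, n)        # easy direction: c = x^2 mod n

# Hard direction without p, q; easy with them:
rp = pow(c, (p + 1) // 4, p)   # a square root of c mod p
rq = pow(c, (q + 1) // 4, q)   # a square root of c mod q

# Chinese Remainder Theorem: combine (+/-rp, +/-rq) into the four roots mod n.
yp, yq = pow(q, -1, p), pow(p, -1, q)
roots = {(sp * rp * q * yp + sq * rq * p * yq) % n
         for sp in (1, -1) for sq in (1, -1)}
assert x in roots and len(roots) == 4
```

The fact that a ciphertext has four candidate roots is the well-known ambiguity of the Rabin system, usually resolved with redundancy or hashing.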

2. The discrete logarithm problem

Throughout this section, we have treated the RSA cryptosystem as a tool with which to lay out the general case for, and elucidate general points within, digital signature schemes. Principally, we have opted for this approach as the RSA cryptosystem is a well-studied example of a cryptosystem built upon the notion of a trap-door function. As we have previously covered, this constitutes a popular method of construction, allowing us to perform both encryption and signing. Yet not all public-key cryptosystems, or all cryptosystems used to deploy signature schemes, must rely upon trap-door functions. The discrete logarithm problem is one such example of a one-way function with no trap-door information that is successfully implemented in widely used signature schemes. Working within finite fields, and letting p be a prime and g some primitive root in ℤ_p^*, the function

dExp(x) = g^x mod p

is a one-way function with no trap-door knowledge. dExp (the discrete exponential function) is easy to compute over a finite field, but its inverse, dLog, is believed to be intractable (the discrete analogue of the logarithm is hard to compute, i.e., it is hard to find x from dExp(x) = g^x mod p), and there exists no trap-door information that makes this inverse easily computable. The assumption about the intractability of computing x is known as the discrete logarithm assumption; it can form the basis of a cryptosystem, most notably one devised by ElGamal,7 and has applications in the signing of information.
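The asymmetry is easy to observe even at toy scale: the forward direction is one fast built-in call, while inversion, here done by brute force (the best known classical algorithms are subexponential rather than linear, but still intractable at real key sizes), must search the group:

```python
p, g = 2039, 7      # toy prime field; real use needs p of thousands of bits
x = 337             # the secret exponent
y = pow(g, x, p)    # dExp: one fast modular exponentiation

# dLog by exhaustive search -- the generic attack when no trap door exists.
recovered = next(k for k in range(p - 1) if pow(g, k, p) == y)
assert recovered == x
```

At a 2048-bit p, the forward call remains essentially instant while the search side becomes astronomically expensive, which is precisely the one-wayness the discrete logarithm assumption asserts.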

1. Signature scheme resistant to adaptive chosen message attacks

Adaptively chosen-message attacks, as defined in Sec. II A, utilize cryptanalysis of message-signature pairs (signed by Alice) as a powerful tool in a malicious party's (Eve's) pursuit of existential forgery against a given digital signature scheme. Many well-studied cryptosystems, including RSA, are known to be susceptible to such an attack. In 1988, Goldwasser, Micali, and Rivest (henceforth GMR) gave a thorough treatment of their signature scheme,10 proving its security against adaptively chosen-message attacks. Like many of the schemes preceding it, GMR's is reliant on trap-doors. However, GMR introduced the notion of claw-free permutation pairs,11 and claimed that signature schemes utilizing trap-doors and claw-free permutations could offer an additional degree of security against adaptively chosen-message attacks compared with the then-traditional method of “simple” trap-door schemes. While the scheme itself is not as simple and easily presentable as RSA, the basic notion behind claw-free permutations is simple to see:

Definition 2. Given a triple of numbers (x, y, z), we call it a claw of two permutations f0 and f1 if

f0(x) = f1(y) = z.

Further, we define a pair of permutations f0, f1 to be claw-free if there exists no efficient algorithm for computing a claw, given the two permutations.

GMR proved that the existence of such permutations implies the existence of a signature scheme ε-secure against adaptively chosen-message attacks, i.e., one in which Eve achieves existential forgery with probability < ε. Additionally, they presented a method of constructing practical claw-free permutations, utilizing mathematical theory relevant to quadratic residues (an extensively studied tool in number theory, cryptosystems, and cryptanalysis12) to find piecewise functions f0 and f1, built from squaring modulo an appropriately chosen n and piecewise constant functions g0, g1; functions of this form can be shown to yield claw-free permutations.13 GMR show, via contradiction, that Eve's attempts at cryptanalysis to achieve existential forgery can be reduced to finding a claw for the pair of permutations, and thus fail, even if the trap-door functions used independently remain vulnerable to adaptively chosen-message attacks.14 

2. Hashing

Typically, when performing RSA with the RSA encryption and decryption functions RSA_E(m) = m^e mod n and RSA_D(m) = m^d mod n, with message m, encryption and decryption exponents e and d, respectively, and modulus n, we take n to be some 1024-bit number. We bear in mind that, if sending a message in some text-based language, this leaves us with (at most) 128 ASCII characters. Assuming the language chosen is well defined with a set of rules, we can expect most documents that need signing to exceed this stringent limit. In order to allow the signing of messages and documents of arbitrary length, cryptographers typically turn to hash functions.

Definition 3 (Hash). Simply, a hash function, H, is a function taking as its input some data of arbitrary length and outputting a hash digest (or, simply, hash or digest) of a fixed length.

For use in cryptography, we generally seek the following three properties from a hash function:

  • Pre-image resistance: Given a hash digest h, finding any message m with h=H(m) should be a difficult task. (We can consider the similarity between this property and that of the one-way function.)

  • Collision resistance: The essence behind collision resistance is that there should be a very low probability of finding two messages outputting the same digest. Collision resistance is typically categorized into one of two groups: weak collision resistance, in which, for a given message m1, it should be difficult to find a message m2 ≠ m1 with H(m1) = H(m2), and strong collision resistance, in which it should be difficult to find any two messages m1 ≠ m2 such that H(m1) = H(m2).

Together, these properties ensure that a malicious adversary cannot modify the input data without changing the digest. Further, we desire a good distribution of digests: given two n-bit strings m1 and m2 with a small Hamming distance, we seek very different outputs, i.e., a (relatively much higher) Hamming distance between H(m1) and H(m2). Clearly, the overarching goal of creating a good hash function is that an adversary should find it very hard to determine the input of a hash, and cryptanalysis by attacks involving similar messages should be unable to find a weakness here.
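Both the fixed output length and the large digest Hamming distance are easy to observe with a standard library hash: changing a single input character scrambles roughly half of the 256 digest bits.

```python
import hashlib

def digest_bits(msg: bytes) -> int:
    # SHA-256 always yields a 256-bit digest, whatever the input length.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big")

m1 = b"pay Bob 100"
m2 = b"pay Bob 101"          # a tiny change in the input...

diff = digest_bits(m1) ^ digest_bits(m2)
hamming = bin(diff).count("1")
assert 64 < hamming < 192    # ...flips roughly 128 of the 256 digest bits
```

This "avalanche" behavior is what frustrates cryptanalysis based on signing families of similar messages.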

Full security in the random oracle model can be achieved using a full domain hash function, in which the image of the hash function is equal to the domain of the RSA function. However, most types of RSA widely used do not implement full-domain hash functions, instead opting for hash functions such as SHA, MD5, and RIPEMD.15–17 

3. Probabilistic signatures

In 1996, Bellare and Rogaway introduced the notion of the probabilistic signature scheme (PSS),18 in which the signature generated is dependent upon the message and a randomly chosen input. This results in a signature scheme whose output for a given message does not remain consistent over multiple implementations. Utilizing a trap-door function (typically one well-used in nonprobabilistic schemes, such as the RSA function), a hash function and some element of randomness (typically a pseudorandom bit generator), a signature scheme that is probabilistic in nature may be implemented. Such schemes can be used to sign messages of arbitrary length and to ensure that the message m is not recoverable from just the signature of m. RSA–PSS is a common probabilistic interpretation of the RSA signing scheme that forms part of the Public-Key Cryptography Standards (PKCS) published by RSA laboratories.19 
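The probabilistic character can be illustrated by salting the message hash before a toy RSA signing operation. This is only a loose sketch of the idea, not the RSA-PSS construction itself, which specifies exact padding and encoding; the parameters and helper names here are illustrative:

```python
import hashlib
import secrets

# Toy RSA parameters, far too small for real use.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

def h(msg: bytes, salt: bytes) -> int:
    return int.from_bytes(hashlib.sha256(salt + msg).digest(), "big") % n

def sign(msg: bytes):
    salt = secrets.token_bytes(8)          # fresh randomness on every call
    return salt, pow(h(msg, salt), d, n)

def verify(msg: bytes, salt: bytes, sig: int) -> bool:
    return pow(sig, e, n) == h(msg, salt)

s1, s2 = sign(b"hello"), sign(b"hello")
assert s1 != s2                            # same message, different signatures
assert verify(b"hello", *s1) and verify(b"hello", *s2)
```

The salt travels with the signature, so any verifier can still check it, but repeated signatures of one message leak no repeated structure.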

We have already discussed how hashing may be used before signing a message (along with padding) to ensure that all messages signed are of an appropriate size. However, the use of hashing in digital signatures extends beyond the “hash and sign” idea used for signing protocols such as RSA. A protocol introduced by Fiat and Shamir9 has led to the creation of the Fiat–Shamir paradigm. The Fiat–Shamir paradigm takes an interactive proof-of-knowledge protocol20 and replaces the interactive steps with some random oracle, typically a publicly known collision-resistant hash function. A thorough treatment of the paradigm can be found in Delfs and Knebl's textbook on cryptography.21 In addition to their use in creating signature schemes that are secure against adaptively chosen-message attacks, it has been shown by Damgård13 that claw-free permutations can play a role in creating collision-resistant hash functions (this should not seem too surprising, as their definitions are recognizably similar: collision-resistant hashing can almost be seen as a single-function analogue of claw-free permutations).
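The paradigm can be illustrated with a Schnorr-style identification protocol made noninteractive: the verifier's random challenge is replaced by a hash of the prover's commitment and the message. Schnorr's scheme is a standard instance of the Fiat–Shamir transform, though not the protocol of Fiat and Shamir's original paper, and the group parameters below are toy values chosen purely for illustration:

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with p, q prime; g = 4 generates the order-q
# subgroup. Real deployments use far larger parameters.
p, q, g = 2039, 1019, 4

def H(*parts) -> int:
    # The Fiat-Shamir step: a public hash stands in for the verifier's challenge.
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big")

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public key

def sign(m: str):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)               # commitment (the "interactive" first move)
    e = H(r, m)                    # challenge derived by hashing, not by Bob
    s = (k + x * e) % q            # response
    return e, s

def verify(m: str, e: int, s: int) -> bool:
    r = (pow(g, s, p) * pow(y, -e, p)) % p   # recompute g^k as g^s * y^(-e)
    return e == H(r, m)

e_sig, s_sig = sign("pay 100")
assert verify("pay 100", e_sig, s_sig)
assert not verify("pay 999", e_sig, s_sig)
```

Because the challenge is bound to the message through the hash, the interactive proof of knowledge of x becomes a transferable, noninteractive signature.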

The work of ElGamal on cryptosystems making use of the one-way nature of the discrete logarithm forms the basis of the Digital Signature Algorithm (DSA),22 a cryptographic standard popular since its proposal as a National Institute of Standards and Technology (NIST) submission for the Digital Signature Standard, DSS.

Recent years have seen increased interest in electronic voting, a concept heavily reliant on signature schemes. Electronic voting typically requires a cryptosystem that is both probabilistic and homomorphic; ElGamal is a good example of an applicable cryptosystem. Electronic voting has been used in a variety of countries, including the US (the 2000 Democratic Primary election in Arizona23 is often cited as a landmark event in internet voting); Scotland, in Scottish Parliament and local elections since 2007 (although the 2007 elections can be considered good proof of why great care must go into researching the implementation of these systems before use24); Brazil25 (whose 2010 presidential election results were announced just 75 min after polls closed thanks to electronic voting); and India, with the state of Gujarat being the first Indian state to enable online voting in 2011.26 In Europe, Estonia also utilizes electronic voting,5 with the Estonian digital ID-card, which provides a digital signature, being pivotal in how government and society are run in the Baltic country.

Another subfield of cryptographic research that has garnered increased interest in recent times is Elliptic Curve Cryptography. Schemes based on the discrete logarithm problem (such as ElGamal/DSA) can be implemented similarly on the mathematical framework of elliptic curves27 instead of finite fields. A key benefit of deploying a cryptosystem in such a way is the ability to perform computations at shorter binary lengths than traditionally used, without conceding security. This makes such schemes good candidates for when resources are limited, and Elliptic Curve DSA (ECDSA)28 is an example of such a scheme that forms a cryptographic standard and is included in the Transport Layer Security (TLS) protocol.29 

In 1979, Ralph Merkle patented the concept of the hash tree, now commonly known as the Merkle tree. Merkle trees can be paired with one-time signature schemes (within a symmetric cryptographic framework) to form a Merkle-tree-based signature scheme.30 Each underlying one-time key pair may still be used only once, although the work of Naor and Yung explores an extension of these types of schemes to complete multiuse signature schemes. It is believed that such signature schemes may be resistant to quantum attacks, which are mentioned below and discussed further in Sec. III.
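The tree itself is simple to construct: leaves are hashes of the one-time public keys, and each internal node is the hash of its two children, so a single root authenticates many keys. A minimal sketch (the key values are placeholders):

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [H(x) for x in leaves]
    while len(level) > 1:
        # Pair up nodes; hash each pair into its parent.
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Four one-time public keys; one 32-byte root commits to all of them.
keys = [b"otp-key-0", b"otp-key-1", b"otp-key-2", b"otp-key-3"]
root = merkle_root(keys)

# Verifying key 0 against the root needs only its sibling hash and one
# internal node (the authentication path), not the other keys themselves.
path = [H(keys[1]), H(H(keys[2]) + H(keys[3]))]
node = H(keys[0])
node = H(node + path[0])
node = H(node + path[1])
assert node == root
```

This logarithmic-size authentication path is what lets a signer publish one root yet later prove membership of any of its one-time keys.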

As we have seen, the vast majority of widely implemented cryptographic algorithms (especially those arising from public-key cryptosystems) rely upon unproven mathematical assumptions about the hardness of certain problems in order to provide security. This review is by no means exhaustive on the workings of different signature schemes under various cryptosystems; a reader seeking a thorough treatment of the field may turn to Simmons31 (for an exploration of early public-key cryptosystems and signatures) and Delfs–Knebl21 (for a treatise on modern cryptography, with extensive sections on signatures). With the growth of research into applications of quantum theory to modern technology, these previously held assumptions are beginning to fall apart. Since Deutsch's introduction of the Universal Quantum Computer,32 research into harnessing the power of quantum theory for computing has yielded many strong theoretical results, with early work including the development of algorithms for a quantum computer that can perform certain tasks faster than a classical computer is believed to be able to. Included among these is Shor's algorithm,33 which can perform prime factorization at a speed that would allow currently implemented cryptosystems to be broken. While practical implementations of such algorithms have yet to yield results strong enough to cause immediate worry, research continues to look ahead to ensure that security is not compromised as quantum computers grow more powerful.

As previously mentioned, advances in quantum computing have raised concerns for the field of classical cryptography. Here, we give a brief overview of why, followed by a discussion of the responses from the cryptographic community.

1. Quantum cryptanalysis of classical cryptography

In an era when popular thinking held that problems based on factoring would be unbreakable, the introduction of Shor's algorithm33 in 1994 cast doubt on the security of cryptosystems that were previously assumed to be secure. Here, we give a brief overview of the techniques used.

Factoring a composite number N can be reduced to the problem of finding the period of a function. This is done by picking a random number a < N, checking that gcd(a, N) = 1 (if gcd(a, N) ≠ 1, then we have found a factor of N and we are done), and then looking for the period r of the function

f(x) = a^x mod N.   (4)

Up to this stage, this can all be achieved classically. The quantum Fourier transform is used to find the period, resulting in Shor's algorithm being extremely efficient and appearing in the complexity class BQP (bounded-error quantum polynomial-time).34 This is almost exponentially faster than the fastest known classical factoring algorithm, the general number field sieve.35 
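The classical skeleton of this reduction can be made concrete. In the sketch below, only find_period would be replaced by the quantum Fourier transform step; everything else is the classical pre- and post-processing described above (toy parameters, brute-force period finding, names ours).

```python
from math import gcd

# Classical illustration of the reduction in Eq. (4): factoring N reduces to
# finding the period r of f(x) = a^x mod N. Here the period is found by
# brute force; Shor's algorithm replaces only this step with the quantum
# Fourier transform.
def find_period(a, N):
    x, val = 1, a % N
    while val != 1:
        x, val = x + 1, (val * a) % N
    return x

def shor_classical(N, a):
    if gcd(a, N) != 1:                 # lucky: a already shares a factor
        return gcd(a, N), N // gcd(a, N)
    r = find_period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                    # bad choice of a; pick another
    f = gcd(pow(a, r // 2) - 1, N)
    return f, N // f

print(shor_classical(15, 7))           # period of 7 mod 15 is 4 -> (3, 5)
```

The brute-force loop takes time exponential in the bit length of N, which is precisely the step where the quantum speedup enters.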

This period solving algorithm can also be used to solve the discrete logarithm problem,36 breaking the hardness assumption of that problem as well. From this, Shor's algorithm can be extended to a more general problem: the Hidden Subgroup Problem (HSP).37–39 The HSP states that given a group G, a finite set X, and a function f: G → X that hides a subgroup H ≤ G, we seek to determine a generating set for H given only evaluations of f. We say that a function f: G → X hides H if, for all g1, g2 ∈ G, f(g1) = f(g2) if and only if g1H = g2H. Within this framework, Shor's algorithm can be seen as solving the HSP for finite abelian groups. Other problems can similarly be generalized to this framework. For instance, if a quantum algorithm could solve the HSP for the symmetric group, then the shortest vector problem, one of the key hard problems for Lattice Cryptography (see Sec. III C), would be broken.40

Whilst Shor's algorithm has received the most attention for the problems it poses to cryptography, it is far from the only quantum algorithm that attacks current schemes. Grover's search algorithm41,42 can be used against certain schemes, and other factorization algorithms such as the quantum elliptic-curve factorization method43 have had some success. We point the reader in the direction of Bernstein et al.44 and Jordan and Liu45 for more complete surveys on quantum cryptanalysis of classical cryptography.

2. What is being done?

While current estimates place the development of practical quantum computers capable of posing a security threat many years in the future (at the time of writing, the record for using Shor's algorithm to factor a "large number" into two constituent primes stands at 21 = 3 · 7,46 though much larger factorizations have been achieved in the adiabatic case47), it is pertinent to replace our current systems well in advance of that. In light of this, the National Institute of Standards and Technology (NIST) put out a call for submissions in 201648 to attempt to set a new quantum-secure standard. This ongoing project aims to find new standards for both public key encryption and digital signatures.

The NIST evaluation criteria49 for these new schemes set out both required security levels and computational cost. For the security levels, it is assumed that the attacker has access to signatures for no more than 2^64 chosen messages obtained using a classical oracle. The security levels are grouped into broad categories defined by easy-to-analyze reference primitives, in this case the Secure Hash Algorithm 2 (SHA2)15 and the Advanced Encryption Standard (AES).50 The rationale behind the seemingly vague categories is that it is hard to predict advances in quantum computing and quantum algorithms, and so rather than using precise estimates of the number of "bits of security," a comparison will suffice. See Table I for the exact security levels.

Table I.

NIST security levels.

Level   Reference primitive   Security equivalence
1       AES 128               Exhaustive key search
2       SHA 256               Collision search
3       AES 192               Exhaustive key search
4       SHA 384               Collision search
5       AES 256               Exhaustive key search

For quantum attacks, restrictions on the circuit depth are given, motivated by the difficulty of running extremely long serial quantum computations. Proposed schemes are also judged on the size of the public keys and signatures they produce as well as the computational efficiency of the key generation.

As of July 22, 2020, the NIST project to set a new quantum-secure cryptographic standard has entered the third round of submissions,51 with only six proposals for digital signatures remaining. These are split into three finalists and three alternative candidates. The finalists are the algorithms that have shown the most promise and are, for the most part, general purpose. Those in the alternative-candidate track are considered either tailored to more specific applications or in need of more time to mature. Round three is predicted to take around eighteen months, though as of October 22, 2020, the schedule is much looser due to the ongoing global pandemic. Following the third round, NIST aims to continue the review process, allowing for some alternative candidates to be standardized at a later date, as well as giving consideration to ideas that were developed too recently to be included in the initial round in 2016.

From these, two front runners have emerged for the mathematical basis that will replace our current systems: Multivariate and Lattice cryptography. See Table II for how the submissions fall into the different categories.

Table II.

NIST digital signature submissions by the underlying mathematical structure type.

Track         Multivariate   Lattice       Other
Finalist      Rainbow52      Dilithium53
                             FALCON54
Alternative   MQDSS55                      Picnic56
                                           SPHINCS+57

Here, we will give a brief overview of both Multivariate- and Lattice-based cryptography, followed by a note on the other schemes.

Multivariate cryptography was developed in the late 1980s with the work of Imai and Matsumoto.58 Originally named C* cryptography after the first protocol, the name Multivariate Cryptography was adopted when the work of Patarin broke and then generalized the C* protocol.59 After Shor's now famous algorithm was developed, it was realized that the structure of Multivariate Cryptography could be used as a direct response. For a more in-depth treatment of the subject, the reader may consult Bernstein et al.44 or Wolf.60

All the schemes are based on a hard problem that is relatively straightforward to understand (though several of the constructions extend the basic problem to more complex settings): the problem of solving multivariate quadratic equations over a finite field. That is, given a system of m polynomials in n variables,

p_1(x_1, …, x_n), …, p_m(x_1, …, x_n),   (5)

and the vector y = (y_1, …, y_m) ∈ F^m, find a solution x ∈ F^n that satisfies p_i(x) = y_i for all 1 ≤ i ≤ m. Formally, we say that for a finite field F of size q := |F|, an instance of an MQ(F^n, F^m)-problem is a system of polynomial equations of the form

p_i(x_1, …, x_n) = Σ_{j,k} γ_{ijk} x_j x_k + Σ_j β_{ij} x_j + α_i,   (6)

where 1 ≤ i ≤ m and γ_{ijk}, β_{ij}, α_i ∈ F. These are collected in the polynomial vector P := (p_1, …, p_m). Here, MQ(F^n, F^m) denotes the family of vectorial functions P: F^n → F^m of degree 2 over F,

MQ(F^n, F^m) := {P = (p_1, …, p_m) | p_i as in Eq. (6), 1 ≤ i ≤ m}.   (7)

While theoretically the polynomials could be of any degree, there is a trade-off between security and efficiency. Higher degrees give larger parameter spaces but are more costly to evaluate, whereas degree-one systems are easy to solve and would therefore be insecure. Quadratics are chosen as a compromise between the two.

The final two pieces required for signatures are two invertible affine maps, S ∈ Aff^{-1}(F^m) and T ∈ Aff^{-1}(F^n). Both of these can be represented in the usual way:

S(x) = M_S x + v_S,   (8)
T(x) = M_T x + v_T,   (9)

where M_S ∈ F^{m×m}, M_T ∈ F^{n×n} are invertible matrices and v_S ∈ F^m, v_T ∈ F^n are vectors.

For most multivariate signature schemes, the secret key is the triple (S^{-1}, P'^{-1}, T^{-1}), where S and T are the affine transforms and P' is a polynomial vector [defined similarly to Eq. (5)], known as the central equation. The choice of the shape of this equation is largely what distinguishes the different constructions in multivariate cryptography. The public key is then the following composition:

P̄ := S ∘ P' ∘ T.   (10)

To forge a signature, one would have to solve the following problem: for a given P̄ ∈ MQ(F^n, F^m) and r ∈ F^m, find, if any, s ∈ F^n such that P̄(s) = r. It was shown in the study by Lewis et al.61 that the decisional form of this problem is NP-hard, and it is believed to be intractable in the average case.62

We now formally outline the general scheme for signatures based on the Multivariate Quadratic problem:

  1. Alice generates a key pair (sk, pk), where sk = (S^{-1}, P'^{-1}, T^{-1}) and pk = P̄ = S ∘ P' ∘ T, and then distributes pk.

  2. Alice then hashes the message, m, to some c ∈ F^m using a known hash function and then computes the signature

    s = T^{-1}(P'^{-1}(S^{-1}(c))),

    sending the pair (m, s) to Bob.

  3. Bob then needs to check that P̄(s) = c = H(m) for the known hash function H.

See Fig. 3 for a diagram of how the signature schemes work. This forms the backbone of most multivariate schemes. Subsections III B 1–III B 3 examine several of the adaptations of this framework employed in various NIST submissions.

Fig. 3.

A general schematic for Multivariate Quadratic signature schemes. Alice hashes the message m to some vector c ∈ F^m, which is transformed under the affine map c' = S^{-1}(c). The inverted central equation is then applied, s' = P'^{-1}(c'), followed by a second affine transformation to create the signature s = T^{-1}(s'). To check the signature, Bob only has to recover ĉ = P̄(s) [= S(P'(T(s)))] and confirm that it matches the hash value H(m) = c.


1. Unbalanced oil and vinegar

The oil and vinegar scheme was first introduced by Patarin,63 but was broken by Kipnis et al.64 and generalized to the now common Unbalanced Oil and Vinegar (UOV) protocol.

Definition 11 (Unbalanced Oil and Vinegar). Let F be a finite field and o, v ∈ ℕ such that o + v = n, with coefficients α_i, β_ij, γ_ijk ∈ F for 1 ≤ j ≤ v and 1 ≤ k ≤ n. Polynomials of the following form are central equations in the UOV shape:

p_i(x_1, …, x_n) = Σ_{j≤v, k≤n} γ_{ijk} x_j x_k + Σ_{k≤n} β_{ik} x_k + α_i.   (12)

The first variables x_1, …, x_v are known as the vinegar variables, and the remaining o = n − v variables x_{v+1}, …, x_n are called the oil variables. If o ≠ v, the scheme is called unbalanced.

In these equations, the vinegar variables are combined quadratically with themselves and with the oil variables, whereas the oil variables are never mixed with themselves. For a secure construction, the required discrepancy between the number of oil and vinegar variables is v ≈ 2o. Unbalanced Oil and Vinegar has become one of the most common constructions for Multivariate Cryptography, and it has itself become a way of varying other constructions by putting them in a UOV shape. The NIST submission Rainbow is largely based on the unbalanced oil and vinegar scheme. One of the major problems with UOV is the length of the signatures and the key sizes; such submissions get around this by introducing an additional structure on top of the UOV shape. For a comparison of the signature and key sizes as well as other efficiency markers, see Sec. III F.
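The oil-and-vinegar signing trick, fixing random vinegar values so that the central equations become linear in the oil variables, can be sketched directly. The following toy signer is our own construction, with the affine masking maps S and T omitted for brevity, so it is not a secure scheme; it works over GF(31) with v = 4, o = 2.

```python
import random

# Toy unbalanced-oil-and-vinegar signer over GF(q), central map only.
# Illustrative parameters; NOT secure (the masking maps S, T are omitted).
q, v, o = 31, 4, 2
n = v + o
rng = random.Random(1)

# Central polynomials: vinegar*vinegar and vinegar*oil quadratic terms only
# (never oil*oil), plus linear terms -- the UOV shape of Eq. (12).
Q = [[[rng.randrange(q) for _ in range(n)] for _ in range(v)] for _ in range(o)]
L = [[rng.randrange(q) for _ in range(n)] for _ in range(o)]

def evaluate(x):
    out = []
    for l in range(o):
        acc = sum(Q[l][i][j] * x[i] * x[j] for i in range(v) for j in range(n))
        acc += sum(L[l][j] * x[j] for j in range(n))
        out.append(acc % q)
    return out

def solve_linear(M, t):
    # Gaussian elimination on the o x o system M y = t over GF(q).
    A = [row[:] + [ti] for row, ti in zip(M, t)]
    for col in range(o):
        piv = next((r for r in range(col, o) if A[r][col]), None)
        if piv is None:
            return None                      # singular: retry with new vinegars
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, q)
        A[col] = [a * inv % q for a in A[col]]
        for r in range(o):
            if r != col and A[r][col]:
                A[r] = [(a - A[r][col] * b) % q for a, b in zip(A[r], A[col])]
    return [A[r][o] for r in range(o)]

def sign(target):
    while True:
        xv = [rng.randrange(q) for _ in range(v)]   # random vinegar values
        # With vinegars fixed, each equation is linear in the oil variables.
        M = [[(sum(Q[l][i][v + k] * xv[i] for i in range(v)) + L[l][v + k]) % q
              for k in range(o)] for l in range(o)]
        const = [(sum(Q[l][i][j] * xv[i] * xv[j]
                      for i in range(v) for j in range(v))
                  + sum(L[l][j] * xv[j] for j in range(v))) % q
                 for l in range(o)]
        y = solve_linear(M, [(t - c) % q for t, c in zip(target, const)])
        if y is not None:
            return xv + y

target = [17, 5]                                    # stand-in for a hashed message
s = sign(target)
print(evaluate(s) == target)                        # -> True
```

The absence of oil-times-oil terms is what makes the linear solve possible; the affine maps S and T exist precisely to hide this structure from the public key.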

2. Hidden field equations

The Hidden Field Equation (HFE) protocol59 is a generalization of one of the original multivariate systems, the Matsumoto–Imai scheme.58 As with oil and vinegar, the original scheme was broken before its underlying trapdoor was generalized. However, unlike UOV, this scheme uses more than one field: the ground field F and its nth-degree field extension E, that is, E := F[t]/f(t), where f(t) is an irreducible polynomial over F of degree n.

Definition 13 [Hidden Field Equations (HFEs)]. Let F be a finite field with q := |F| elements, E its nth-degree extension field, and φ: E → F^n the canonical, coordinate-wise bijection between the extension field and the vector space. Let P(X) be a univariate polynomial over E with

P(X) = Σ_{i,j} c_{ij} X^{q^i + q^j} + Σ_i b_i X^{q^i} + a,   (14)

where c_{ij}, b_i, a ∈ E and the exponents satisfy q^i + q^j, q^i ≤ d for i, j ∈ ℕ and a degree bound d. We say that central equations of the form P' := φ ∘ P ∘ φ^{-1} are in HFE shape.

The GeMSS submission to the NIST proceedings uses hidden field equations although it adapts the form using “minus and vinegar modifiers.”65 This has allowed the design to become more flexible in its choice of security parameters while improving efficiency.

3. Attacks

The cryptanalysis of multivariate schemes comes in two forms:

a. Structural

These focus on taking advantage of specific structural faults in the design of different protocols. Included among these are MinRank-based attacks66 and attacks on the hidden field equations.67

b. General

General attacks directly try to break the underlying hardness assumption of solving multivariate equations. These include techniques such as utilizing Gröbner bases to make solving the multivariate systems easier. For a good overview of the area, see the study by Billet and Ding.68

Lattice cryptography was first introduced in the work of Ajtai,69 who suggested that it would be possible to base the security of cryptographic systems on the hardness of well-studied lattice problems. The familiarity of these problems made them an attractive candidate for post-quantum cryptography (PQC). This led to the development of the first lattice-based public-key encryption scheme, NTRU.70 However, this was shown to be insecure, and it would take the work of Regev to establish the first scheme whose security was proven under worst-case hardness assumptions.71 For an overview of the field of lattice cryptography, we direct the reader to Peikert.72

There is a whole suite of lattice problems on which cryptographic schemes are based. Here—following some basic definitions—we will introduce the key ideas that form the foundation of contemporary Lattice Cryptography.

Definition 15. A lattice Λ ⊂ ℝ^n is a discrete additive subgroup of ℝ^n. That is, 0 ∈ Λ; if x, y ∈ Λ, then −x, x + y ∈ Λ; and any x ∈ Λ has a neighborhood in ℝ^n containing no other lattice points.

We note here that lattices can be defined more generally as a discrete additive subgroup of some general vector space V, but are most commonly restricted to ℝ^n. Any nontrivial lattice is countably infinite; however, each lattice can be finitely generated as the integer combinations of some set of vectors in ℝ^n, B = {b_1, …, b_k} for some k ≤ n. Typically, k = n, in which case we call Λ a full-rank lattice. We call B the basis for a lattice Λ and express it as a matrix of row vectors. Often, we describe the lattice as a linear sum of integer-multiplied basis vectors, writing

Λ(B) = {Σ_{i=1}^{k} z_i b_i | z_i ∈ ℤ}.   (16)

A basis B is not unique, as any two bases B_1, B_2 for a lattice Λ are related by a unimodular matrix U such that B_1 = U B_2. Another crucial lattice definition is the dual lattice.

Definition 17. Given a lattice Λ in V, where V is endowed with some inner product ⟨·,·⟩, the dual lattice Λ* is defined as Λ* = {v ∈ V | ⟨x, v⟩ ∈ ℤ for all x ∈ Λ}.
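The basis non-uniqueness noted above is easy to check computationally. In this sketch (a toy 2D example of ours), B2 = U·B1 for a unimodular U, and every lattice point generated from B1 within a small coefficient range also appears among the points generated from B2.

```python
# Two bases related by a unimodular matrix generate the same lattice.
# B2 = U * B1 with det(U) = 1; we check that points generated from B1
# reappear among those generated from B2 (rows are basis vectors).
B1 = [(2, 1), (1, 3)]
U = [(1, 1), (1, 2)]                  # unimodular: det = 1*2 - 1*1 = 1
B2 = [tuple(sum(U[i][k] * B1[k][j] for k in range(2)) for j in range(2))
      for i in range(2)]

def points(B, bound):
    return {(z1 * B[0][0] + z2 * B[1][0], z1 * B[0][1] + z2 * B[1][1])
            for z1 in range(-bound, bound + 1)
            for z2 in range(-bound, bound + 1)}

# Enumerate B2 with a generous range so boundary effects do not matter.
inner = points(B1, 3)
print(inner <= points(B2, 12))        # -> True: every B1-point is a B2-point
```

Since the coefficient change of basis is z ↦ zU^{-1}, enumerating B2 over a larger coefficient box guarantees the comparison is fair.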

Here, we give a quick note about the additional structure that can be imbued in lattices. A common technique is to construct lattices embedding algebraic structures, such as rings and modules. It is beyond the scope of this paper to go into details on how one constructs these, and so we point the readers in the direction of Lyubashevsky et al.73 and Grover74 for a more complete understanding of the structure. The justification for these will become clear once we introduce lattice hard problems.

Another important concept is that of Discrete Lattice Gaussians:

Definition 18 (Discrete Lattice Gaussian). Given a basis B for a lattice Λ(B), mean μ ∈ ℝ^n, and standard deviation σ > 0, the discrete Gaussian over the lattice is defined as

D_{Λ,σ,μ}(Bx) = ρ_{σ,μ}(Bx) / ρ_{σ,μ}(Λ),   (19)

where ρ_{σ,μ}(y) := exp(−||y − μ||² / (2σ²)) and ρ_{σ,μ}(Λ) = Σ_{x ∈ ℤ^n} ρ_{σ,μ}(Bx).

Discrete lattice Gaussian sampling is one of the core features of Lattice Cryptography, being employed in some manner in most schemes. However, this form of sampling comes with a whole host of issues. For one, it is computationally hard to sample directly from such distributions, leading to algorithms that sample from statistically close distributions.75 Unfortunately, these approximate distributions are not necessarily spherical Gaussians and, therefore, have the potential to leak information about the secret.76 Other inherent problems include finding the upper and lower bounds on the choice of variance—too low a variance also leaks information, but too high a variance will produce signatures that are insecure. See the study by Prest77 for a more comprehensive discussion on discrete Gaussian sampling (DGS).
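A naive version of such a sampler, for the one-dimensional discrete Gaussian over the integers, can be sketched with simple rejection. This is illustrative only; as discussed above, production samplers must be high-precision and constant-time precisely because of these leakage issues. Names and the tail-cut factor are ours.

```python
import math, random

# A naive rejection sampler for the one-dimensional discrete Gaussian over
# the integers, D_{Z,sigma,mu}. Illustration of Definition 18 only; real
# schemes need constant-time, high-precision samplers.
def rho(x, sigma, mu):
    # Unnormalized Gaussian weight, as in Eq. (19) restricted to Z.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def sample_dgauss(sigma, mu, tail=12):
    # Tail-cut support [mu - tail*sigma, mu + tail*sigma]; the mass outside
    # is negligible for moderate tail factors.
    lo, hi = math.floor(mu - tail * sigma), math.ceil(mu + tail * sigma)
    while True:
        x = random.randint(lo, hi)
        if random.random() < rho(x, sigma, mu):   # accept with prob rho(x)
            return x

# The unnormalized weights are symmetric about the mean, as the definition demands.
print(rho(3, 2.0, 0.0) == rho(-3, 2.0, 0.0))      # -> True
print(isinstance(sample_dgauss(2.0, 0.0), int))   # -> True
```

Even this toy version hints at the practical problems: the acceptance loop has secret-dependent running time, which is exactly the kind of side channel discussed in Sec. III E.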

1. Fundamental hard problems

We now move on to the hard lattice problems. First, we give a few of the fundamental hard problems, which form a foundation for the hard problems that contemporary lattice cryptography is built on.

Definition 20 [Shortest Vector Problem (SVP)]. Define λ_1(Λ) to be the length of the shortest nonzero vector in Λ. The Shortest Vector Problem (SVP) is: given a basis B of a lattice Λ, compute some v ∈ Λ such that ||v|| = λ_1(Λ).

Here, λ_i(Λ) are the successive minima of the lattice, where each vector satisfies ||v_i|| = λ_i ≤ λ_j for i < j, with λ_1(Λ) being the length of the shortest vector in the lattice. The norm function ||·|| is left intentionally unspecified, though it is typically the Euclidean norm. This problem is regarded as hard in both classical and quantum settings, but it falters when applied to cryptographic schemes with some probabilistic element. It is more common to use the approximate analogue of the SVP, which is as follows:

Definition 21 [Approximate Shortest Vector Problem (SVP_γ)]. Given a basis B of a lattice Λ(B), find a nonzero vector v ∈ Λ such that ||v|| ≤ γ(n) · λ_1(Λ).

We also make mention of bounded distance decoding (BDD), which asks the user to find the closest lattice vector to a prescribed target point t ∉ Λ that is promised to be "rather close" to the lattice.

Definition 22 [Bounded Distance Decoding (BDD_γ)]. Given a basis B of a full-rank lattice Λ(B) and a target vector t ∈ ℝ^n with the guarantee that dist(t, Λ) < d = λ_1(Λ)/(2γ(n)), find the unique lattice vector v ∈ Λ such that ||t − v|| < d.

The above problems have varying degrees of provable hardness. The exact version of the shortest vector problem is known to be NP-hard (non-deterministic polynomial-time hard) under randomized reductions;78 however, the implementation of hard problems such as bounded distance decoding relies on polynomial encoding, and so it lies in the complexity class NP ∩ co-NP.79 Currently, there are no known quantum algorithms that solve any of the above problems in polynomial time, but there have been various attempts; see Sec. III C 5 for further details.
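In very low dimensions, SVP can be solved by brute force, which makes the role of dimension in its hardness concrete. The sketch below (a toy 2D example of ours) enumerates integer combinations of a deliberately skewed basis of ℤ²; note the coefficient range must be large enough to cover the unimodular change of basis back to a short vector.

```python
from itertools import product

# Brute-force SVP in two dimensions: enumerate integer combinations of the
# basis vectors and keep the shortest nonzero one. Easy for n = 2, but the
# search space grows exponentially with the dimension, which is what the
# hardness of SVP rests on.
def shortest_vector(B, bound=100):
    best, best_len = None, float("inf")
    for z1, z2 in product(range(-bound, bound + 1), repeat=2):
        if z1 == 0 and z2 == 0:
            continue
        v = (z1 * B[0][0] + z2 * B[1][0], z1 * B[0][1] + z2 * B[1][1])
        length = (v[0] ** 2 + v[1] ** 2) ** 0.5
        if length < best_len:
            best, best_len = v, length
    return best, best_len

# A "bad" (highly non-orthogonal) basis: det = -1, so the lattice is all of Z^2
# and the true lambda_1 is 1, but reaching it needs coefficients near 100.
B = [(101, 100), (100, 99)]
v, lam1 = shortest_vector(B)
print(lam1)                           # -> 1.0
```

With a small enumeration bound, say 10, only vectors of length √2 and above are found from this basis, illustrating how a skewed basis hides the short vectors.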

2. Foundations of contemporary lattice crypto

While the previous hard problems are fundamental to lattices, they are not easily implementable in lattice schemes. Here, we introduce the two problems that form the foundation for contemporary lattice cryptography: the short integer solution (SIS)69 and learning with errors (LWE).71 

Definition 23 [Short Integer Solution (SIS_{n,q,β,m})]. Given m uniformly random vectors a_i ∈ ℤ_q^n forming the columns of a matrix A ∈ ℤ_q^{n×m}, find a nonzero integer vector z ∈ ℤ^m of norm ||z|| ≤ β such that

A z = Σ_i a_i z_i = 0 ∈ ℤ_q^n.   (24)
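A short SIS solution is guaranteed to exist by a pigeonhole argument whenever 2^m > q^n, and in toy dimensions it can even be found by direct enumeration. The parameters below are illustrative only; in cryptographic dimensions, this search is infeasible.

```python
import random
from itertools import product

# Finding a short SIS solution by pigeonhole: with q = 11, n = 2, m = 7 we
# have 2^m = 128 > q^n = 121, so two distinct 0/1 vectors z1, z2 must
# collide, A z1 = A z2 (mod q), and z = z1 - z2 is a nonzero solution with
# entries in {-1, 0, 1}. Toy parameters for illustration.
q, n, m = 11, 2, 7
rng = random.Random(0)
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]   # n x m matrix

def apply(A, z):
    return tuple(sum(A[i][j] * z[j] for j in range(m)) % q for i in range(n))

seen = {}
for z in product((0, 1), repeat=m):
    image = apply(A, z)
    if image in seen:                       # collision guaranteed: 128 > 121
        z_short = [a - b for a, b in zip(z, seen[image])]
        break
    seen[image] = z

print(apply(A, z_short))                    # -> (0, 0): a valid SIS solution
print(any(z_short))                         # -> True: z is nonzero
```

The same collision structure is what lets SIS serve as a collision-resistant hash function: two colliding inputs would immediately yield a short solution.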

LWE is an average-case problem introduced by Regev, often referred to as the “encryption enabling” analogue of the SIS problem.

Definition 25 (LWE distribution). For a vector z ∈ ℤ_q^n called the secret, the LWE distribution A_{z,χ} over ℤ_q^n × ℤ_q is sampled by choosing a ∈ ℤ_q^n uniformly at random, choosing e ← χ, and outputting (a, b = ⟨z, a⟩ + e mod q).

The problem comes in two distinct forms: decision and search. Decision requires distinguishing between LWE samples and uniformly random ones, whereas search requires finding a secret given LWE samples.

Definition 26 (Decision LWE_{n,q,χ,m}). Given m independent samples (a_i, b_i) ∈ ℤ_q^n × ℤ_q, where every sample is distributed according to either:

  1. A_{z,χ} for a uniformly random z ∈ ℤ_q^n (fixed for all samples), or

  2. the uniform distribution,

distinguish which is the case.

Definition 27 (Search LWE_{n,q,χ,m}). Given m independent samples (a_i, b_i) ∈ ℤ_q^n × ℤ_q drawn from A_{z,χ} for a uniformly random z ∈ ℤ_q^n, find z.
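The definitions above can be made concrete in a few lines. The sketch below (toy parameters of ours, with a crude bounded error standing in for a proper discrete Gaussian χ) generates LWE samples and checks that b − ⟨z, a⟩ mod q recovers a small centered error.

```python
import random

# Generating LWE samples (Definition 25): each sample is
# (a, b = <z, a> + e mod q). Recovering z from such samples is search LWE;
# without the errors e it would be simple linear algebra.
q, n, m = 97, 4, 8
rng = random.Random(7)
z = [rng.randrange(q) for _ in range(n)]              # the secret

def lwe_sample():
    a = [rng.randrange(q) for _ in range(n)]
    e = rng.choice([-2, -1, 0, 1, 2])                 # crude stand-in for chi
    b = (sum(zi * ai for zi, ai in zip(z, a)) + e) % q
    return a, b

samples = [lwe_sample() for _ in range(m)]

# Sanity check: for every sample, b - <z, a> mod q is a small centered error.
def centered(x):
    return x - q if x > q // 2 else x

errors = [centered((b - sum(zi * ai for zi, ai in zip(z, a))) % q)
          for a, b in samples]
print(all(abs(e) <= 2 for e in errors))               # -> True
```

An attacker sees only the pairs (a, b); the small errors destroy the linear structure that Gaussian elimination would otherwise exploit.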

The learning with errors problem has been shown to be at least as hard as quantumly solving SVP on arbitrary n-dimensional lattices. The following chain of security reductions can be shown following the proof from the study by Regev71:

SVP_γ → DGS → BDD → LWE.

Here, DGS stands for discrete Gaussian sampling. We note that the reduction between discrete Gaussian sampling and the BDD problem is a quantum step.

As previously mentioned, it is possible to construct lattices from specific algebraic structures such as rings or modules;80 when combined with the above problems, these are known as structured LWE or structured SIS. This is largely done for efficiency reasons, as the parameter space needed to implement systems based on these structures is greatly reduced.73 The choice of which structure to use has some nuances, however. For example, ring LWE is generally considered to be more efficient than module LWE; however, the efficiency comes at a cost of flexibility and security. Increasing the security of a scheme requires increasing the dimension of the lattice, which, in the ring case, is often chosen such that N = n, where n = 2^k for some integer k. Thus, going up a security level requires going from dimension 512 to 1024, for instance, whereas a more optimal scheme may lie in between. Module LWE has its dimension parametrized by an integer d such that N = dn, again for n of the form 2^k. Setting d = 3 and n = 256 allows for a total dimension of 768, which may be preferable for the targeted level of security. Even further structures can be imbued, which may yet give greater flexibility: middle-product LWE81 and cyclic LWE.82 The security of all these schemes based on algebraic structure has been questioned, however, as the reduction to standard LWE is not fully understood.83–85

All the lattice-based NIST submissions for signature schemes use some kind of structure: qTESLA and FALCON are based on rings (FALCON, however, uses a distinction between binary and ternary forms to capture the intermediate security levels), whereas Dilithium is based on module LWE.

3. The GPV framework

Introduced in the seminal paper of Gentry et al.,86 the Gentry–Peikert–Vaikuntanathan (GPV) framework gives an overarching structure for taking advantage of "natural" trapdoors in lattices to obtain signatures. It is built on a signature scheme first introduced in the Goldreich–Goldwasser–Halevi (GGH)87 and NTRUsign88 schemes:

  • The public key is a full-rank matrix A ∈ ℤ_q^{n×m} generating a lattice Λ. The private key is a matrix B ∈ ℤ_q^{m×m} generating Λ⊥_q, the dual lattice of Λ mod q.

  • Given a message m, a signature is a short value s ∈ ℤ_q^m such that sA^T = H(m) = c, where H: {0,1}* → ℤ_q^n is a known hash function. Given A, verifying s as a valid signature is straightforward: check that s is short and that sA^T = c.

  • Computing a signature requires more care, however:

    • Compute an arbitrary preimage c_0 ∈ ℤ_q^m such that c_0 A^T = c. Here, c_0 is not required to be short, and so it can be computed with relative ease.

    • Use B to compute a vector v ∈ Λ⊥_q close to c_0. Then, s = c_0 − v is a valid signature,

      s A^T = (c_0 − v) A^T = c_0 A^T − v A^T = c − 0 = c.   (28)

      If c_0 and v are close enough, then s will be short, fulfilling the second requirement of a valid signature.

The GGH and NTRUsign schemes, however, proved to be insecure: the method for computing the vector v ∈ Λ⊥_q leaked information about the secret basis B, and both have since been broken by cryptanalysis.89–91 The GPV framework (see Fig. 4 for a diagrammatic representation of this scheme) differs in that instead of a deterministic algorithm (Babai's round-off algorithm), it uses Klein's algorithm,75 a randomized variant of the nearest-plane algorithm also developed by Babai.92 Whereas both deterministic algorithms would leak information about the geometry of the lattice, Klein's algorithm avoids this by sampling from a spherical Gaussian over the shifted lattice c_0 + Λ⊥_q.
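The difference between a "good" (secret-like) and "bad" (public-like) basis under Babai's round-off algorithm, the deterministic decoder that GPV replaces, can be seen even in two dimensions. This toy example of ours uses two bases for the same lattice ℤ².

```python
# Babai's round-off algorithm: v = round(t * B^-1) * B decodes a target t to
# a nearby lattice point. Its quality depends on the basis: a short, nearly
# orthogonal (secret-like) basis decodes well, while a long, skewed
# (public-like) basis for the SAME lattice often fails badly. Used naively
# for signing, the rounding errors trace out the shape of the basis, which
# is the leak that broke GGH/NTRUsign and that GPV's randomized sampling avoids.
def round_off(B, t):
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    inv = [[B[1][1] / det, -B[0][1] / det],
           [-B[1][0] / det, B[0][0] / det]]
    # Coefficients of t in basis B (rows are basis vectors), then round.
    c = [t[0] * inv[0][0] + t[1] * inv[1][0],
         t[0] * inv[0][1] + t[1] * inv[1][1]]
    z = [round(ci) for ci in c]
    return (z[0] * B[0][0] + z[1] * B[1][0], z[0] * B[0][1] + z[1] * B[1][1])

good = [(1, 0), (0, 1)]                  # short, orthogonal basis for Z^2
bad = [(101, 100), (100, 99)]            # long, skewed basis for the same Z^2
t = (10.4, -3.3)
print(round_off(good, t))                # -> (10, -3), the true nearest point
print(round_off(bad, t))                 # a lattice point far from t
```

Both outputs are valid lattice points, but only the good basis lands near the target; the displacement produced by the bad basis is what an attacker can accumulate over many signatures.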

Fig. 4.

A schematic of the GPV signature framework. Alice hashes the message to a vector c = H(m). Following this, she creates the key pair (pk, sk) = (A ∈ ℤ_q^{n×m}, B ∈ ℤ_q^{m×m}) and sends the public key to Bob. Using elementary techniques, she computes a preimage vector c_0 satisfying c_0 A^T = c, and then, using the secret key, she samples a vector v ∈ Λ⊥_q that is very close to c_0. The signature is s = c_0 − v. Bob has to check that the signature is small enough and, using the public key, that sA^T [= c_0 A^T − v A^T] = H(m). Here, v A^T = 0 since A generates the lattice and v belongs to the dual lattice.


Klein's algorithm was the first of a family of lattice algorithms known as trapdoor samplers. The GPV framework has become a generic framework into which a choice of trapdoor sampler and lattice structure can be inserted. It has also been proven secure under the SIS assumption in the random oracle model.86,93

A prominent example of an instantiation of the GPV framework is the NIST submission FALCON.94 This uses a trapdoor sampler known as the fast Fourier sampler, developed by Prest and Ducas,95 over NTRU lattices, which take advantage of the ring structure.

4. Bai–Galbraith Signatures

The Bai–Galbraith signature scheme96 is an adaptation of earlier work by Lyubashevsky,97 in which he develops a lattice-based signature scheme using a paradigm he calls "Fiat–Shamir with aborts." Informally, it follows the chain of reductions

CRHF → one-time signature → ID scheme → signature scheme.

Here, CRHF stands for collision-resistant hash function. The main idea is that, from a lattice-based CRHF, one can create a one-time signature following Lyubashevsky and Micciancio;98 however, this leaks information about the secret key. This is not a problem for a one-time signature, as the information becomes defunct after a single use. Constructing the ID scheme, however, requires repeated use, which is where the aborting technique comes in. In response to the challenge from the verifier, the prover can decide that sending the usual response would leak information and instead abort the protocol and restart it. The end result is a secure, if somewhat inefficient, ID scheme. When this is adapted into a signature scheme using Fiat–Shamir, the need to restart the protocol every time information would be leaked disappears: the lack of interaction means that the prover can simply rerun the protocol until they find a signature that does not leak information.

As an LWE instantiation, Lyubashevsky's scheme has public key (A, b = Az + e mod q), where the components are picked as described above. The signer picks small-normed vectors y_1, y_2 and computes v = Ay_1 + y_2. With the message m, they compute the hash c := H(v, m) and the following two vectors: s_1 = y_1 + zc and s_2 = y_2 + ec. The signature is then s = (s_1, s_2, c). Here, to ensure that neither s_1 nor s_2 leaks information about the secret, rejection sampling is employed. Developed in Refs. 99–101, this technique allows the vectors to be picked from a distribution independent of the secret. Verification requires checking that ||s_1|| and ||s_2|| are small enough and that H(As_1 + s_2 − bc mod q, m) = c. This can be thought of as a proof of knowledge of (z, e).

Bai and Galbraith adapted this such that it instead becomes a proof of knowledge of only z, using a variation of the verification equation and compression techniques. Once the public key has been created, only one vector y is required, from which v = Ay mod q. The least significant bits of v are then thrown away, and the remainder is hashed with the message m to get a hash value c. From this value, the vector c is created, and the signature is (s = y + zc, c), with rejection sampling again being used to ensure that the distribution of the signature is independent of the secret. Computing w = As − bc = Ay − ec mod q allows for verification by checking that the hash value of the most significant bits of w with m is equal to c. See Fig. 5 for a schematic of the Bai–Galbraith signature scheme.
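The aborting trick at the heart of these schemes can be isolated in one dimension: reject any response that falls outside a window that is safe for every possible secret, so that the accepted responses carry no information about it. Toy parameters and names are ours; real schemes work with high-dimensional vectors and carefully tuned bounds.

```python
import random

# The core of "Fiat-Shamir with aborts" in one dimension: the response
# s = y + c*z would leak the secret z through its distribution, so any s
# outside a safe interval is rejected and the attempt restarted. Accepted
# responses are uniform on [-(B - C), B - C] regardless of z.
B, C = 1000, 30          # mask bound and maximum possible |c*z|
rng = random.Random(5)

def masked_response(z, c):
    while True:
        y = rng.randint(-B, B)            # fresh mask each attempt
        s = y + c * z
        if abs(s) <= B - C:               # abort (reject) otherwise
            return s

# Whatever the secret, accepted s never strays outside the safe interval,
# so its support (and, in fact, its distribution) reveals nothing about z.
samples = [masked_response(z=7, c=3) for _ in range(2000)]
print(all(abs(s) <= B - C for s in samples))   # -> True
```

Since |c·z| ≤ C, the window [−(B − C), B − C] lies inside the support of y + c·z for every secret, so conditioning on acceptance gives the same uniform distribution for all secrets; this is exactly the rejection-sampling argument used by Lyubashevsky and by Bai and Galbraith.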

Fig. 5.

A schematic for Bai–Galbraith signatures. Alice first generates the key pair (pk, sk) = ((A, b = Az + e mod q), z), sending the public key to Bob. After picking a short vector y, she finds the lattice vector v = Ay and removes the least significant bits to compute v' = comp(v). She then hashes this with the message to create the vector c from the hash value c = H(v', m). The signature is s = y + zc. Bob computes the vector w = As − bc and checks that the hash value of the most significant bits of w and the message m is consistent with c.


The NIST submission CRYSTALS-Dilithium is based on Bai–Galbraith signatures over module-LWE.

5. Attacks

As in the multivariate case, there is a breadth of literature on specific attacks, both quantum and classical; suffice it to say that we will not go into them in much depth. Here, we give a brief summary of some of the avenues that have been attempted, with references for readers to investigate. As with attacks on Multivariate Cryptographic schemes, these can largely be broken down into three categories: attacks on the underlying hard problems, attacks on specific schemes, and side-channel attacks (see Sec. III E for details of side-channel attacks).

a. General structure attacks

The security of lattice cryptography can largely be reduced to the shortest vector problem, and so the general motivation of these attacks is to solve that problem. Attacks of this kind tend to come in two forms: algorithmic or sampling. Largely, the question to be answered is the following: given a random basis for a lattice, can one find the shortest vector, or at least a vector small enough to satisfy the approximate SVP? On the algorithmic side, there are lattice reduction algorithms such as those of Schnorr102 or Lyu et al.,103 which attempt to find almost-orthogonal bases from highly nonorthogonal ones. In a similar vein, quantum speed-ups of vector enumeration have been proposed by Aono et al.104 There are also search approaches such as the sieving algorithms105 and their quantum counterparts.106 Newer approaches devised using adiabatic quantum computing, which pose the SVP as an energy minimization problem, as in the studies by Joseph et al.,107,108 have also been developed. Given the use of sampling to generate small signatures, if a truly efficient discrete Gaussian sampler were developed, it could pose a major problem:109,110 simply setting the center of the distribution to the zero vector would allow the shortest vector to be picked with high probability. There have also been suggestions that certain quantum algorithms could be used to pick vectors more efficiently from these distributions.

As mentioned previously, the SVP can be reduced to solving the Hidden Subgroup Problem over dihedral groups, which has also been the focus of much quantum cryptanalytic research.111 

b. Specific attacks

For details on how individual schemes are taking known attacks into account, we refer the reader to the design documents: Dilithium112 and FALCON.113 

Here, we give a brief overview of the remaining two NIST submissions, Picnic and SPHINCS+, both of which are based on symmetric primitives.

1. Picnic

Unlike the previously mentioned schemes, Picnic requires only the hardness provided by symmetric primitives such as hash functions and block ciphers.114 It is a general scheme for adapting a three-move proof-of-knowledge protocol (known as a Σ-protocol) into a signature scheme using a transformation from the study by Unruh.115 Unruh116 argues that the Fiat–Shamir paradigm for transforming proof-of-knowledge schemes into signatures is impractical to prove secure in the quantum random oracle model, and so provides an alternative. The Picnic protocol provides two signature schemes: one via Unruh's transform and another using Fiat–Shamir.

Picnic is built upon a Σ-protocol called ZKB++,114 which itself is built on an earlier scheme called ZKBOO.117 For the sake of brevity, the details of Picnic and the underlying schemes are omitted, and instead, we will explain the underpinning framework by first explicitly defining Σ-protocols followed by Unruh's transformation.

Definition 29 (Σ-protocol). A three-move proof-of-knowledge protocol between a prover (Alice) and a verifier (Bob) is known as a Σ-protocol. Alice wants to prove that she knows x such that f(x) = y for some relation f, where y is commonly known:

  1. Alice commits herself to randomness by picking r, which she sends to Bob.

  2. Bob replies with a random challenge c.

  3. Alice responds to the challenge with a newly computed t.

  4. Bob accepts that Alice has proven the knowledge if ϕ(y,r,c,t)=1 for some efficiently computable and agreed upon ϕ.
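A classic concrete instance of Definition 29 is Schnorr's identification protocol, in which f(x) = g^x in a group where discrete logarithms are hard. The following toy run uses an insecurely small group, purely to make the four moves explicit:

```python
import random

# Toy group: g = 4 generates a subgroup of prime order q = 509 mod p = 1019.
# Illustrative parameters only; real deployments use groups of ~256-bit order.
p, q, g = 1019, 509, 4
random.seed(0)
x = random.randrange(1, q)      # Alice's secret witness
y = pow(g, x, p)                # publicly known: she proves knowledge of x

# 1. Alice commits herself to randomness by picking r = g^k
k = random.randrange(1, q)
r = pow(g, k, p)

# 2. Bob replies with a random challenge c
c = random.randrange(1, q)

# 3. Alice responds with t = k + c*x mod q
t = (k + c * x) % q

# 4. Bob accepts if phi(y, r, c, t) = 1, i.e., g^t == r * y^c mod p
assert pow(g, t, p) == (r * pow(y, c, p)) % p
```

The check works because g^t = g^k · g^(cx) = r · y^c, while learning x from y alone would require a discrete logarithm.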

Unruh's transform takes a given Σ-protocol with challenge space C, an integer N, a message m, and a random permutation G, and requires the following:

  1. Alice runs the first phase of the Σ-protocol N times to produce r1, …, rN.

  2. For each i ∈ {1, …, N} and for each j ∈ C, she computes the response tij for ri and challenge j. She then computes gij = G(tij).

  3. Using a known hash function, she computes H(x, r1, …, rN, g11, …, gN|C|) to obtain indices J1, …, JN.

  4. The signature she outputs is then s = (r1, …, rN, t1J1, …, tNJN, g11, …, gN|C|).

Bob then verifies the hash, verifies that the given tiJi values match the corresponding giJi values, and that the tiJi values are valid responses with respect to the ri values.
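Continuing the toy Schnorr-style relation (knowledge of x with y = g^x; the tiny group, the binary challenge space C = {0, 1}, and the modeling of the random permutation G by a hash are all illustrative assumptions), the four steps above might be sketched as:

```python
import hashlib, random

p, q, g = 1019, 509, 4          # toy group: g has prime order q mod p
random.seed(0)
x = random.randrange(1, q)      # Alice's witness
y = pow(g, x, p)                # public value y = g^x mod p
N = 16                          # parallel repetitions; challenge space C = {0, 1}

def G(t):
    # Unruh requires a random *permutation*; a hash stands in for it here
    return hashlib.sha256(f"G|{t}".encode()).hexdigest()

def indices(m, rs, gs):
    # J_1, ..., J_N derived by hashing the public data, commitments,
    # g values, and the message
    d = hashlib.sha256(f"{y}|{rs}|{gs}|{m}".encode()).digest()
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(N)]

def unruh_sign(m):
    ks = [random.randrange(1, q) for _ in range(N)]
    rs = [pow(g, k, p) for k in ks]                          # step 1
    ts = [[(k + j * x) % q for j in (0, 1)] for k in ks]     # step 2: all t_ij
    gs = [[G(t) for t in row] for row in ts]                 #         and g_ij
    J = indices(m, rs, gs)                                   # step 3
    return rs, [ts[i][J[i]] for i in range(N)], gs           # step 4

def unruh_verify(m, rs, tsel, gs):
    J = indices(m, rs, gs)
    # check each revealed t against its g value and the Sigma-verification
    return all(G(tsel[i]) == gs[i][J[i]] and
               pow(g, tsel[i], p) == (rs[i] * pow(y, J[i], p)) % p
               for i in range(N))
```

Because every gij is committed before the indices are derived, Alice cannot choose which responses to reveal after seeing the hash, which is the property the transform needs in the quantum random oracle model.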

While Picnic is based on the Σ-protocol ZKB++, there is some choice in the symmetric primitives used in the construction. The chosen implementation uses the block cipher family LowMC,118 which is based on a substitution–permutation network. Performance-wise, Picnic is relatively slow and employs large signature sizes, but makes up for this with provable quantum security.

2. SPHINCS+

SPHINCS+119 is based on an earlier protocol called SPHINCS.120 SPHINCS is a hash-based, stateless signature scheme designed to retain the practical elements of other hash-based schemes while adding extra security by removing their stateful nature. A stateful algorithm depends in some way on a quantity called the state, often (though not necessarily) a counter. Stateful schemes can suffer from many insecurities, as they need to keep track of all produced signatures.

SPHINCS expands the idea of using a Merkle tree30 to extend a one-time signature into a many-time signature scheme by creating a hypertree. In this tree of trees, leaves of the initial Merkle tree become the root one-time signatures for further trees, which themselves cascade into further trees. The size of the overall tree becomes a compromise between security and efficiency in the original scheme: the leaves picked to become the next tree are chosen randomly, and so a smaller tree has a chance of repeating the leaf choice. To mitigate this, a few-times signature is used at the bottom of the tree. This randomness is what makes SPHINCS stateless. Whereas Merkle's original design iterates over the signing keys, SPHINCS builds on the theoretical work of Goldreich,121 in which the keys are picked randomly. The size of the hypertree allows the assumption that the new key has not been used before.
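The basic building block here, a Merkle tree whose root authenticates many one-time-signature keys via short authentication paths, can be sketched as follows (the hash-of-concatenation construction and the eight leaves are illustrative choices):

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

# Eight one-time-signature public keys stand in for the leaves
leaves = [h(f"ots-key-{i}".encode()) for i in range(8)]

def build(nodes):
    # bottom-up list of levels; each parent hashes its two children
    levels = [nodes]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

levels = build(leaves)
root = levels[-1][0]             # the many-time public key

def auth_path(idx):
    # the sibling of each node on the path from leaf idx to the root
    path = []
    for lvl in levels[:-1]:
        path.append((idx & 1, lvl[idx ^ 1]))
        idx >>= 1
    return path

def verify_leaf(leaf, path, root):
    node = leaf
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```

A verifier holding only the root can thus authenticate any single leaf from a logarithmic number of hashes; the hypertree applies this recursively, with each tree's leaves signing the roots of the trees below.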

Improving on the previous design, SPHINCS+ uses a more secure few-time signature at the bottom of the tree, known as Forest of Random Subsets (FORS), an improvement on a previous signature called HORST.120 A better selection algorithm for choosing the leaves of the tree is also included. It also introduces the idea of tweakable hash functions.

Definition 30 (Tweakable hash function). Let α, n ∈ ℕ, let P be the public parameter space, and let T be the tweak space. A tweakable hash function is an efficient function,
(31) Th: P × T × {0,1}α → {0,1}n,
mapping an α-bit message m to an n-bit message digest (MD) using a function key called the public parameter P ∈ P and a tweak T ∈ T.
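A minimal sketch of such a function, modeling Th by domain-separated SHA-256 (an illustrative construction, not the one specified by SPHINCS+):

```python
import hashlib

def tweakable_hash(P, T, m):
    # Th: P x T x {0,1}^a -> {0,1}^n with n = 256 here; the public
    # parameter and tweak are prefixed for domain separation
    return hashlib.sha256(P + T + m).digest()
```

Because the tweak is hashed in, the same message produces independent digests at different positions in the hypertree, which is what lets one hash function serve the whole structure.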

This allows the hash functions to be generalized to the whole hypertree as they can adapt to changes in the chosen leaves of the subtrees. In brief, the SPHINCS+ signature scheme can be summarized as follows:

  1. Alice generates p, q ∈ {0,1}n: p is a seed for the root of the top tree in the hypertree, and q is a public seed. The pair (p, q) forms the public key. The secret key is the pair t, u ∈ {0,1}n: respectively, the seeds for the few-time signature FORS and for the protocol's chosen one-time signature, WOTS+.119 

  2. To sign the message, Alice generates the hypertree and the signature is the following collection: a FORS signature on a message digest, a WOTS+ signature on the FORS public key, a series of authentication paths, and WOTS+ signatures to authenticate the WOTS+ public key.

  3. To verify this, Bob iteratively reconstructs the public keys and root nodes until the top of the SPHINCS+ hypertree is reached.

The SPHINCS+ protocol was not designed with the same kind of performance in mind as either the lattice or multivariate schemes. Generally, stateless signatures have much larger key and signature sizes, as well as slower performance. They mainly target applications with relaxed latency requirements but very strong security requirements, such as offline code signing. In the study by Bernstein et al.,119 the reader will find an analysis of the security of this scheme in both a classical and quantum setting that shows it to be very strong in both regards.

3. Attacks

Here, we include references for cryptanalysis efforts of the above schemes. For Picnic, recent attacks include a multi-target attack on the scheme and its underlying zero-knowledge protocols,122 and an attack on the block cipher used in its implementation, LowMC.123 Both of these, as well as some side-channel attack analysis, are addressed in the design document.124 

Currently, the main attack against the SPHINCS framework is that of Castelnovi et al.,125,126 a type of side-channel attack known as a differential fault attack. General protection against this kind of attack, as well as other known general attacks, is addressed in the design document.127 

It would be remiss of this paper to give an overview of the state of contemporary cryptographic signatures, especially with regard to new standards, without a note on side-channel attacks. Side-channel attacks are cryptanalytic attacks that focus on finding flaws in the implementation of a protocol rather than its design. Examples include timing attacks, where an adversary can glean information about a secret by taking advantage of a subroutine running in nonconstant time, and energy attacks, a similar process that instead examines the energy use of the protocol. Such attacks have led to some of the bigger breaks of deployed security systems (Ref. 128, p. 116). Unfortunately, in the case of post-quantum cryptography, this form of attack has often not been considered in as much depth as is potentially necessary, and many of the submissions are missing a large-scale analysis of how they could be affected. As with the attacks previously mentioned, however, it is beyond the scope of this paper to go into great detail, and so instead we provide a list of references for interested readers.
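As a minimal illustration of the timing-attack principle, compare an early-exit byte comparison, whose running time depends on the position of the first mismatch, with a constant-time one (Python's hmac.compare_digest is used here as the constant-time primitive):

```python
import hmac

def naive_equal(a, b):
    # early exit: running time depends on where the first mismatch occurs,
    # which an attacker timing many guesses can exploit to recover a
    # secret tag byte by byte
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a, b):
    # scans the full input regardless of where mismatches occur
    return hmac.compare_digest(a, b)
```

Both functions return the same booleans; only their timing behavior differs, which is exactly the kind of property that a design document cannot capture and an implementation analysis must.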

For a good overview of side-channel attacks in general, we refer readers to the studies by Fan et al.129 or Lo'ai et al.130 Beyond this, we direct readers toward specific analyses of side-channel attacks on certain NIST submissions.

There has been some work on general fault attacks on multivariate public key cryptosystems.131 Side-channel attacks on Rainbow can be seen in the general attacks on unbalanced oil and vinegar schemes.132–134 

Dilithium has been attacked using a side-channel-assisted existential forgery attack.135 This was answered with suggested countermeasures that mask the implementation of the protocol.136 FALCON has recently been attacked using an attack known as BEARZ.137 The attacking party also suggested countermeasures to prevent this fault attack, as well as timing attacks, on FALCON.

For each submission, we also direct readers to the respective design documents.52,112,113,124,127,138 Unfortunately, the level of detail in each is not of an equal standard, with some severely lacking side-channel attack analysis.

When considering the performance of these different protocols, there are various angles from which to analyze them. At a top-level view, one could compare a range of properties, such as key size, signature length, verification time, and signature creation time. NIST makes the point that these algorithms will be employed in a multitude of applications, each with different requirements. For example, if an application can cache public keys or refrain from transmitting them frequently, then the size of the public key is less important. Similarly, in terms of computational efficiency, a high-traffic server spending a significant portion of its resources verifying client signatures will be more sensitive to slow key operations. The call for proposals49 even suggests that it may be necessary to standardize more than one algorithm to meet these differing needs.

While computational efficiency depends on the specific architecture used, the key and signature sizes can be compared theoretically. A comparison of the NIST schemes on these metrics can be found in Table III. Many of the submissions include data on several variants of their respective schemes; we include only a cut-down list here. The variants, as well as the original data, can be found in the submissions themselves.52–57 

Table III.

Comparison of the key and signature sizes of the NIST round 2 signature submissions. All sizes are given to the nearest byte.

Submission              PK size     Signature size   Security level
GeMSS 128               352 188     32               1
GeMSS 192               1 237 964   51               3
GeMSS 256               3 040 700   72               5
Rainbow Ia              148 500     32               1/2
Rainbow IIIc            710 600     156              3/4
Rainbow Vc              1 683 300   204              5/6
Dilithium 1024 × 768    1184        2044             1
Dilithium 1280 × 1024   1472        2701             2
Dilithium 1536 × 1280   1760        3366             3
FALCON-512              897         657              1
FALCON-768              1441        993              2/3
FALCON-1024             1793        1273             4/5
Picnic-L1-UR            32          53 961           1
Picnic-L3-UR            48          121 845          3
Picnic-L5-UR            64          209 506          5
SPHINCS+-128s           32          8080             1
SPHINCS+-192s           48          17 064           3
SPHINCS+-256s           64          29 792           5

The timings, however, do have a certain dependence on the implementation, and as such, NIST has set out its requirements with respect to the NIST PQC reference platform:49 an Intel x64 processor running Windows or Linux and supporting the GCC compiler. A comparison of the timing performance of the schemes can be found in Table IV. Unless noted otherwise, the submissions used the reference architecture.

Table IV.

Comparison of the signature creation and verification times of the NIST round 2 signature submissions. Unless stated otherwise, these are all to the nearest thousand processor cycles.

Submission              Key gen    Signing       Verification
GeMSS 128               38 500     750 000       82
GeMSS 192               175 000    2 320 000     239
GeMSS 256               532 000    3 640 000     566
Rainbow Ia              35 000     402           155
Rainbow IIIc            340 000    1700          1640
Rainbow Vc              757 000    3640          239
Dilithium 1024 × 768    243        1058          273
Dilithium 1280 × 1024   371        1562          376
Dilithium 1536 × 1280   471        1420          511
FALCON 512              6.98a      6081.9b       37175.3c
FALCON 768              12.69a     3547.9b       20637.7c
FALCON 1024             19.64a     3072.5b       17697.4c
Picnic-L1-UR            160        172 560       116 494
Picnic-L3-UR            392        549 036       368 492
Picnic-L5-UR            753        1 234 713     828 446
SPHINCS+ 128s simple    326 805    4 868 849     5304
SPHINCS+ 192s simple    486 773    10 259 965    7971
SPHINCS+ 256s simple    636 421    7 570 079     10 866

a Milliseconds.

b Signatures/second.

c Verifications/second.

While progress in the field of quantum computing does pose a threat to our current digital security models and implementations of digital signature schemes, throughout this section we have given a brief overview of the work being done to combat this direct threat. Working toward NIST's criteria for both security and efficiency ensures that the sought-after solutions are implementable with current technology, well ahead of practical implementations of Shor's algorithm. Where theoretical protocols have struggled to reach a compromise between security and efficiency, research has already yielded results to adapt, as shown by the modifications of the UOV schemes and SPHINCS.

Indeed, this is already the case in practice, with Google having implemented a lattice-based protocol in their Chrome web browser (although this has since been removed in an effort to 'not influence the standardization procedure').139 However, we remain wary of further threats. For example, the possibility that a solution to the HSP could cause issues for lattice cryptography showcases the need for research to remain ahead of the curve.

The National Institute of Standards and Technology's goal of implementing a new standard is predicted to still be at least a year from completion, and so it is far from finalized. It is likely that the recommendation will be several new standards, depending on the application. While there is a notion of simplicity leading to security, some of the above schemes eschew this in favor of ruthless efficiency (albeit with the necessity for careful implementation), with some even claiming to be faster than current protocols. Ultimately, the advances in cryptography in response to quantum computing appear to be leading to altogether more complicated systems, but for the sake of security, this is certainly the right move.

The security of classical digital signatures lies in creating problems that are infeasible to solve. The security of techniques based on quantum physics instead relies upon proven scientific principles.140 The uncertainty inherent in quantum physics has in recent years found a great many uses in the field of security, ranging from random number generators to optical identity tags.141 This well-known phenomenon is what protects a system against attack, as in many cases it is physically impossible for an attacker to breach the system without detection.

As with many classical digital signatures, quantum digital signatures rely upon a one-way function for their security. In this case, however, rather than using a mathematical one-way function, classical data are encoded as quantum information.142 To implement this, each quantum digital signature protocol follows three steps similar to those in classical cryptography:143 

  1. GEN: Alice uses her private key to generate a signature s consisting of quantum information.

  2. SIGN: Alice sends her message m to the recipients (denoted Bob and Charlie) together with the corresponding private key, as the pair (m, pk).

  3. VER: Bob and Charlie verify that the message is authentic and repudiation has not occurred. In the general case, this involves a comparison of (m, s) to the classical description of s.

To delve further into what each step entails, we will discuss a generic QDS (quantum digital signature) model; although schemes vary in their specific implementations of these steps, most follow this general framework.142,143

The generation step begins with a purely classical operation: the random generation of a private key for each possible single-bit message (1 or 0). This key, denoted pki = (pk1i, pk2i, …, pkLi), is purely classical information, where i = 0, 1 denotes the message. Its length L is determined by the level of security required and the QDS scheme used. It is with this string that Alice will later identify herself, and so it is imperative that it is never shared.

The next stage of the generation step begins by defining a set of nonorthogonal quantum states,143 such as the BB84 states.144 Alice then generates four separate strings of quantum information (known as quantum digital signatures) by encoding her pk strings using the defined quantum states. These four signatures consist of a copy of the encoded private key for both possible messages, for both Bob and Charlie. They are denoted qsBi, qsCi, with the subscripts denoting the recipient. Alice then sends qsBi, qsCi to the correct recipient via a secure quantum channel. Bob and Charlie measure their quantum signature pairs to generate classical signatures from them, sBi, sCi.

Before proceeding to the next step, in most cases, Bob and Charlie randomly select approximately half of the elements in their measured signatures (though this can occur before the measurement) and forward them to each other. They do not have to exchange the same elements. To all intents and purposes, from Alice's perspective, both sBi and sCi are then effectively the same, which prevents her from committing repudiation. The exact method of this "symmetrization" depends on the QDS scheme.

To sign a message in most protocols, Alice sends her single-bit message (1 or 0) with the corresponding private key to Bob,145 denoted (mi, pki). As the protocols focus only on the signing of a message, it is assumed that the message is sent along a secure channel, whether quantum or classical in nature.143 To send a multibit message, the process of generation and signing is iterated for each bit.146 

Finally, there is the verification stage. This varies greatly between protocols (see the relevant protocol section for specific details). In the general case, Bob compares his measured signature sBi with the private key pki received from Alice. If the number of mismatches between the two is below the required threshold (see Sec. IV A 1), Bob deems the message authentic. If Bob wishes to forward the message to Charlie, he sends the (mi, pki) he received from Alice. Charlie then performs the same process with a different required threshold.
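The mismatch-counting logic of this generic verification step can be sketched numerically. Here the quantum channel and measurements are abstracted into per-element error rates (all numbers illustrative): an honest signature disagrees with the revealed key only at the noise rate pe, while a forger who must guess nonorthogonal states errs at some higher rate pf:

```python
import random

random.seed(7)
L = 4096                  # signature length
pe, pf = 0.02, 0.25       # honest noise rate vs. a forger's guessing error rate
sa, sv = 0.06, 0.12       # example thresholds satisfying pe < sa < sv < pf

alice_pk = [random.randrange(2) for _ in range(L)]

def measured_signature(error_rate):
    # Bob's classical record of the quantum signature: each element
    # disagrees with Alice's key with the given probability
    return [bit ^ (random.random() < error_rate) for bit in alice_pk]

def accept(sig, revealed_pk, threshold):
    mismatches = sum(s != b for s, b in zip(sig, revealed_pk))
    return mismatches / len(sig) < threshold

honest = measured_signature(pe)   # mismatch fraction near pe
forged = measured_signature(pf)   # mismatch fraction near pf
```

For a signature this long, the mismatch fraction concentrates tightly around the true error rate, so the honest copy passes either threshold while the forged one fails both.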

The security of quantum digital signatures is shown most prominently in the validation step. Each of the signing protocols relies on the same principles to provide security.143 It is key that the chosen states are nonorthogonal: a measurement that distinguishes states in one basis does not commute with a measurement in the other.140 Thus, any measurement performed will probabilistically disturb the state, effectively destroying the information it held.147 As such, without knowing the initial private key, no one can discern the original classical input. Getting the correct result is entirely dependent on chance, and even then, one would have no way to tell that it is the correct result.142 

This is reinforced by the Holevo bound on the information that can be obtained: only a single bit of information may be extracted from a qubit.148 This prevents an attacker from obtaining further information to aid their deductions.142 Anyone attempting to forge a signature would have to correctly guess the private key based on the information they can gather. For short private keys, this is improbable but still possible; as the signature gets longer, however, the chance of guessing successfully falls off exponentially, becoming negligible for a long enough private key.142,143 A simple comparison will then reveal the ruse.149 Therefore, unlike classical digital signatures, QDS does not depend on the difficulty of mathematical problems and as such demonstrates information-theoretic security.140 

1. Quantifying authentication

There are three possible levels of verification that Bob and Charlie can deem the message/signature combination to have fulfilled:142 

  • 1-ACC (accept): Message is valid and can be transferred.

  • 0-ACC: Message is valid and might not be transferable.

  • REJ (reject): Message is invalid.

1-ACC and 0-ACC provide similar levels of security in the standards they uphold, and both require that the message is valid. As such, if Bob performs the validation test on the signature and finds it to be true, then the first condition of these security levels is fulfilled. The difference arises in the transferability of the signature. If Bob has any reason to believe that Charlie would not come to the same conclusion as him, then the message and signature would be deemed 0-ACC (i.e., repudiation has occurred). Only a message with the validation level of 1-ACC is deemed fully secure. Finally, if the signature that Bob receives is invalid, then the message is given the validation level REJ and is rejected.

These criteria are mathematically defined by a series of thresholds proposed by Gottesman and Chuang.142 We first define the failure probability of an attacker, pf: the probability that their measurement will fail to give the correct classical description, given a minimum-error measurement. There is also the probability that an honest measurement will fail due to environmental factors such as noise, denoted pe. Both of these probabilities are calculated from outside factors such as the number of copies of the signature circulated, the method with which measurements are taken, the apparatus used to distribute/measure the signatures, etc. Using these as the boundaries for acceptable error levels, with pf as the upper and pe as the lower, the allowed error thresholds can be set.142 For a signature to be authenticated, it must have a fractional error lower than sv, where pf > sv. Based on this, the upper bound on the probability that a forger can effectively mimic a valid signature is given by exp[−c(pf − sv)2L],143 where c is a constant. This shows that the security against forging depends not only on the environmental factors (as pe and pf define the range in which sv can fall) but also on the length of the signature itself.143 

One threshold is not enough to protect against repudiation, as stated by Gottesman and Chuang.142 If Alice were dishonest and sent different signatures to Bob and Charlie (with the aim of having Bob accept and Charlie reject), she could successfully repudiate with only one threshold. With a single threshold sv for verifying the signature, Alice could tailor each signature to have svL errors. The number of errors follows a distribution with a mean of svL, and so the probability of Bob accepting the message (errors below svL) and Charlie rejecting it (errors above svL) is 0.25.143 By introducing a second, stricter threshold sa, with sa < sv, which Bob must pass instead of sv, the probability of successful repudiation becomes negligible for a long enough signature. This is because Alice would have to generate signatures that give a result both below sa (for Bob) and above sv (for Charlie). In a similar manner to forgery, the upper bound on the probability of successful repudiation is now given by exp[−c(sv − sa)2L], where c is a constant.

This sets out mathematically the necessary criteria for achieving the verification conditions. To summarize the relative sizes of the thresholds: 0 < pe < sa < sv < pf. To achieve a 1-ACC level of verification, a signature's error fraction must be below sa for Bob and below sv for Charlie. Falling below only sv results in a 0-ACC rating, and falling below neither results in a REJ.
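Plugging illustrative numbers into these bounds shows how both failure probabilities fall off exponentially in the signature length L (the constant c and the threshold values here are arbitrary choices for this sketch):

```python
import math

c = 1.0                                   # scheme-dependent constant
pe, sa, sv, pf = 0.02, 0.06, 0.12, 0.25   # satisfying 0 < pe < sa < sv < pf

def forgery_bound(L):
    # upper bound on the probability of mimicking a valid signature
    return math.exp(-c * (pf - sv) ** 2 * L)

def repudiation_bound(L):
    # upper bound on the probability of successful repudiation
    return math.exp(-c * (sv - sa) ** 2 * L)

for L in (1_000, 10_000, 100_000):
    print(L, forgery_bound(L), repudiation_bound(L))
```

Note that the repudiation bound decays more slowly than the forgery bound whenever the gap sv − sa is smaller than pf − sv, so the threshold spacing directly dictates how long a signature must be for a target security level.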

The first QDS protocol was proposed by Gottesman and Chuang in 2001142 and is hence referred to as GC-QDS. As the precursor to all other QDS protocols, GC-QDS was, as expected, only a theoretical proposal; it is nevertheless important to analyze. It sets the standard for the "ideal" quantum digital signature protocol: one in which the signature remains quantum information throughout the process, ensuring information-theoretic security throughout.

As detailed in Sec. IV A, Alice begins by generating her random private keys, pki, for each possible message value. The initial proposal for this scheme suggested each private key element pkni be a two-bit string and qsni the corresponding BB84 state.142 Using this, Alice generates copies of each quantum digital signature for each possible message value by encoding pki as quantum information. As many copies of the public key as there are recipients are generated, and one is distributed to each. The recipients, in this case Bob and Charlie, do not measure the received signature; instead, they store it in stable quantum memory, a key difference from other protocols. This is the "generation" phase for this protocol.142 

For the signing phase, Alice sends out only classical information: she sends (mi, pki) to Bob. As with most QDS protocols, this is assumed to be performed over a secure classical channel, an assumption that can be inexpensively satisfied and so is not outlandish.

Finally, there is the verification phase. Using pk and the known function for encoding classical information as quantum information, Bob generates his own set of quantum states. He then compares the states he has generated with those he has stored from Alice using a SWAP test.142 If the number of mismatches between the two falls below the acceptance threshold sa, Bob accepts the message as authentic. For further verification, he can forward (mi, pki) to Charlie, who will repeat the process with his threshold sv.

The unforgeability of this scheme stems from the strict policy of not measuring the quantum digital signature until authentication is required,143 as demonstrated in Fig. 6. For an attacker to forge a signature, they would, as in classical digital signatures, have to bypass the one-way nature of the classical-to-quantum encoding. Aside from obtaining the private key from Alice, the only way to achieve this is to intercept the quantum digital signature and correctly guess the measurement basis for each qubit. Thus, as previously discussed, for a long enough signature, there is a negligible probability that Bob and Charlie will fail to notice that the signature has been tampered with.149 Owing to the collapse of the quantum wavefunction upon measurement, the very act of attempting to intercept and forge a message will reveal that an attack has occurred, as the signature exists only as quantum information. The only thing interception can achieve is to cause the qs distribution phase to abort.142 

Fig. 6.

Flow diagram breaking down the process of Gottesman and Chuang's first proposal for a quantum digital signature protocol. Alice begins by randomly generating her private key (pk). From this, she generates her quantum signature (qs), which is sent on to Bob. Bob stores this in quantum memory without measuring it. To sign a message (m), Alice sends it alongside its corresponding pk to Bob. Bob uses pk to generate his own copy of the quantum signature qsb. He verifies its authenticity in a SWAP test with qs. Diamonds represent information that must be kept private (at least until sending), and circles represent information that is sent to another individual. In the case of this QDS scheme, the public key is quantum information, and all other information is classical.


The protection against repudiation lies in the comparative SWAP tests and the relevant thresholds that Bob and Charlie apply to their stored public keys.142 As SWAP tests do not measure a quantum state directly, but instead compare two states to determine their similarity, they are the perfect operation with which to enforce nonrepudiation. Coupling the nondestructive nature of the SWAP test with the thresholds detailed in Sec. IV A renders the probability of repudiation negligible for a long enough signature.142 
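The SWAP test's usefulness here comes from a simple identity: the ancilla qubit is measured as 0 with probability (1 + |⟨ψ|φ⟩|2)/2, so identical states always pass while differing states fail with finite probability. A sketch for real-valued single-qubit states:

```python
import math

s = 1 / math.sqrt(2)
ZERO, ONE = (1.0, 0.0), (0.0, 1.0)       # computational basis
PLUS, MINUS = (s, s), (s, -s)            # diagonal (BB84) basis

def swap_test_p0(psi, phi):
    # Probability that the SWAP-test ancilla is measured as 0:
    # P(0) = (1 + |<psi|phi>|^2) / 2
    overlap = sum(a * b for a, b in zip(psi, phi))
    return (1 + overlap ** 2) / 2
```

Identical states pass every run (P(0) = 1), while a differing BB84 state fails a given run with probability between 1/4 and 1/2, so repeating the test across a long signature exposes a mismatch with overwhelming probability without disturbing honest states.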

This scheme is not without faults, the most prominent of which is its reliance on the immature technology of quantum memory. If that technology were perfected, it would allow for the indefinite storage of qs. At the time of writing, however, quantum memory cannot store quantum information for long periods.150,151 In theory, this protocol offers long-term quantum digital signatures, but in practice this is simply not possible. Despite recent advances in the storage times of quantum memories, the protocol remains infeasible.145,152

The SWAP tests themselves are an issue in the same vein as quantum memory: the technology to perform them is not yet available. Each recipient would require a quantum computer in order to perform such a test. As with the quantum memory requirement, this ensures that Gottesman and Chuang's proposal remains theoretical.

The attractive concept of quantum digital signatures, coupled with the initial proposal's reliance on quantum memory and computing, has led to a great deal of interest in QDS from both theoretical and experimental standpoints.140 The reliance on currently immature quantum technologies has prompted new proposals that find ways around this constraint. One of the earliest was the multiport.153 This apparatus consists of a square array of four separate 50:50 beam splitters, as shown in Fig. 7.149 The apparatus and its potential in cryptography were first proposed in 2006,147 but only as a theoretical method of public key distribution. Later, in 2012, it was adapted for use in quantum digital signatures and experimentally demonstrated.

For use in quantum digital signatures, the multiport is effectively split in two: the top two beam splitters belong to Bob and the bottom two to Charlie. The multiport in and of itself primarily affects the generation stage of the quantum digital signature process.

The generation stage begins in the same way as in other QDS schemes. Alice generates a randomized classical private key, which she encodes in her chosen quantum basis. She then sends these states to Bob and Charlie, transmitting the copies of each quantum digital signature at the same time (i.e., qsB1 and qsC1 are sent out together, followed by the other set of copies).

The first set of beam splitters (moving from left to right in Fig. 7) is used by Bob and Charlie to split their copies of the signature into two equal amplitude components. One half is kept by Bob/Charlie, and the other half is sent to the other recipient. This ensures that Alice is unaware of who holds which bit of each of the copies that she initially sent.149,151 In the second set of beam splitters, the half originally kept by each recipient is mixed with the half received from the other in a comparison test. This process is detailed in Fig. 8.

Fig. 7.

Schematic diagram of the multiport setup used in the multiport QDS scheme. Each bisected diamond represents a beam splitter. The thick black lines at the top and bottom of the diagram represent mirrors. One half of the array is in the possession of Bob and the other in the possession of Charlie.
Fig. 8.

Schematic diagram of a single beam splitter. From the left enter two separate photon packets, |α⟩ and |β⟩. These are combined by the splitter to give |(α + β)/√2⟩ and |(α - β)/√2⟩.

In the initial implementation of this protocol, Bob and Charlie would store the received states in quantum memory, ready for the verification stage. In 2014, Dunjko showed that it was indeed possible to remove the quantum memory from this protocol, progressing instead to verification via comparison of classical data;149 this memoryless variant is the focus of this section.

Rather than storing the quantum digital signature, the incoming signature is measured and the outputs of the measurement are stored. This classical string of results then forms an individual's measured signature. Most commonly, the measurements performed are quantum state elimination measurements (covered in further detail in Sec. IV D), although unambiguous state discrimination was initially proposed.151 The key principle of either method is that it does not give a recipient a completely accurate description of Alice's initial private key and thus does not enable them to forge a signature. The measurement and storage of this now classical signature complete the generation stage of the protocol.

To sign a message, Alice simply sends her single bit message alongside the relevant private key to Bob.151 Bob then compares the private key with the signature he measured, applying the sa threshold detailed in Sec. IV A 1 to determine the degree to which he trusts it. He then forwards the key and message to Charlie to validate that Alice indeed sent them both the same signature; similarly, Charlie compares the two to see whether the errors between them fall below sv. From these results, they determine the validity of the message and signature.

The security of this protocol lies with the square array of beam splitters.153 It is the secondary beam splitters that Bob and Charlie use to check the validity of the signature. One output of the beam splitters will be a mixture of |α⟩ and |β⟩. If Alice was honest, then |α⟩ = |β⟩, and as both are identical and coherent, the initial input from Alice is obtained. If, however, Alice sent Bob and Charlie different signatures, the multiport will symmetrize these and prevent repudiation.153 The null ports then act as a safeguard against active forging.151 In this scenario, Bob is the dishonest party and attempts to forward a forged signature and message to Charlie. In this case, Charlie would measure a nonzero (assuming no background counts) reading on his null port, alerting him to the presence of a forged signature.
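As a rough sketch of why the null port exposes cheating, one can model a signature element as a coherent amplitude entering the second beam splitter (a deliberate simplification of the actual optics):

```python
import math

def beam_splitter(alpha, beta):
    # 50:50 splitter acting on coherent amplitudes:
    # outputs (alpha + beta)/sqrt(2) and (alpha - beta)/sqrt(2).
    s = math.sqrt(2)
    return (alpha + beta) / s, (alpha - beta) / s

# Honest Alice: identical copies interfere so the null port stays dark.
_, null_honest = beam_splitter(1.0, 1.0)   # null amplitude is exactly 0.0

# Dishonest Alice: differing copies leak intensity into the null port.
_, null_cheat = beam_splitter(1.0, -1.0)
leak = abs(null_cheat) ** 2                # nonzero, so the mismatch is flagged
```

Any background-free count at the null port therefore signals that the two inputs were not identical, which is exactly the forgery check described above.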

One of the key issues with this scheme is the loss present in the system, measured at 7.5 dB. The signature length L required for a security level of 0.01% was 5.0×1013 for half a bit.151 The count rate with USE was found to be 2.0×105 counts per second, which would require 7.9 years to sign and send: clearly an impractical signature length. This is particularly troublesome given that the measurement was taken at a separation distance of only 5 m, far short of what a practical setting requires.151 Methods of improving the technique were proposed, such as increasing the clock rate of the VCSEL used to generate the pulses. Due to the loss rates and impractical distance requirements, however, the multiport approach serves only as a proof of concept that advanced quantum technologies are not required for QDS.
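The quoted signing time follows directly from these figures; a quick arithmetic check:

```python
# Sanity check of the quoted figures: signature length divided by count
# rate gives the time needed to sign and send half a bit.
L = 5.0e13      # required signature length at the 0.01% security level
rate = 2.0e5    # measured count rate with USE, counts per second

seconds = L / rate
years = seconds / (3600 * 24 * 365.25)   # roughly 7.9 years, as quoted
```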

The development of quantum digital signature schemes shows two common themes: reliance on quantum mechanics to achieve information-theoretic security, and a move away from reliance on immature quantum technologies. Multiports, while flawed, demonstrated that quantum memory and quantum computing are not necessary for QDS. The next clear step, as stated in the paper implementing multiports, is to develop a system that does not use them for security.151 The simplest way to achieve this is random forwarding.

As with all previous schemes, Alice begins by generating a string of classical bits that she keeps as her private key. She then encodes this in quantum states using nonorthogonal bases.154 Once again, two copies of each quantum digital signature are created and the copies of the same signature are sent to Bob and Charlie at the same time (arrows 1 and 2 in Fig. 9).

Fig. 9.

Representation of the communication channels between Alice, Bob, and Charlie in QDS schemes based on random forwarding. Blue arrows represent quantum communication channels and red arrows classical channels. Single-headed arrows represent one-way communication channels, and double-headed arrows represent two-way channels.

Unique to this protocol is that, upon receiving the quantum digital signature, Bob and Charlie randomly choose elements of the signature qsi to forward to the other, usually by a "coin toss" protocol,145,154 arrow 3 in Fig. 9. They record which elements were passed on and which were retained, along with their locations in the string. If either receives fewer than L(12r) or more than L(12+r) elements, where r is a threshold the two of them set, they abort.154 Thus, Bob's and Charlie's final quantum digital signatures will be randomized mixes, whose contents are determined only by the coin tosses dictating whether each element is kept. From Alice's viewpoint, therefore, once she has sent the messages, the reduced density matrices of Bob's and Charlie's quantum digital signature elements are identical, regardless of whether she attempted to commit repudiation.154 
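The forwarding and abort logic can be sketched as follows (a toy simulation in which the coin-toss protocol is abstracted to a fair random choice):

```python
import random

def forward_elements(length, r, seed=7):
    # Each element is kept or forwarded on an independent fair coin toss;
    # the retained count must lie in [L(1/2 - r), L(1/2 + r)] or we abort.
    rng = random.Random(seed)
    kept = [i for i in range(length) if rng.random() < 0.5]
    abort = not (length * (0.5 - r) <= len(kept) <= length * (0.5 + r))
    return kept, abort

kept, abort = forward_elements(10_000, r=0.05)
# For L = 10000 the kept count concentrates near 5000, so honest runs
# essentially never abort, and Alice cannot predict which half Bob retained.
```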

Bob and Charlie then measure their quantum signature copies to obtain their measured classical signatures si. This is known as the premeasurement approach.154 Alternatively, Bob and Charlie can measure the quantum digital signature elements before forwarding, in a postmeasurement approach. In this case, there is no need for a quantum communication channel between them, reducing the system to only the quantum channels between each of them and Alice. In either scheme, the technique of Unambiguous State Elimination (USE) is commonly used to measure qs.145 Whichever approach is applied, the random forwarding scheme is secure against forgery by either Bob or Charlie. In order to convince the other that a forged signature is valid, the forger would need to correctly guess the measurement results for the half of the other's signature that they did not receive. For a long enough signature, the probability of this occurring is negligible.

To sign a message, Alice simply sends her private key concatenated with the message to Bob (arrow 4 in Fig. 9). Bob verifies the authenticity of the signature by comparing how many signature elements he correctly eliminated. If the error rate falls below sa, the message is deemed authentic and he passes it along with the key to Charlie, thus protecting against the possibility of an outside attacker forging a signature. Charlie performs the same comparison with his stored classical signature and his error threshold sv. If it passes this, the message is deemed valid: repudiation cannot have occurred, due to the symmetrized signatures, and Bob cannot have committed forgery, as Charlie's signature has been successfully compared with Alice's.

The benefits of random forwarding are apparent in the reduced dependence on quantum technologies. With QDS protocols stripped back in this way, only Quantum Key Distribution (QKD) hardware is needed to handle quantum information, which is far more practical than quantum memory or a multiport.145,155 The technology is more mature and has undergone rigorous field testing, allowing easier integration of QDS into existing networks. In addition, using QKD-based quantum communication adds, in exactly the same manner as for QKD, protection against message interception. Although this would require sacrificing some bits for Alice and Bob to compare, it would remove the risk of outside interception,156 giving the random forwarding protocol a wide range of applications and great versatility. As such, many other schemes build on the primitive of random forwarding, either by developing more advanced hardware and measurement techniques157 or by using it as the primitive for symmetrization, as in the QKD-based scheme proposed by Collins et al.,155 detailed in Sec. IV E.

Quantum cryptography as a field did not begin with the development of QDS, but with a far more developed technique known as Quantum Key Distribution. This is a process by which, through the exchange of quantum information, Alice and Bob generate a secret key for use in encrypted communication. By observing the error rates in the measurement of the quantum information, it is possible to detect whether an eavesdropper is present,140 and hence to confirm whether the exchanged key is indeed completely secret. If so, by using a one-time pad protocol, Alice and Bob can achieve completely secure encryption. This in itself could be used to generate a digital signature:154,156 as Alice and Bob would know that the signature could be known only to the two of them, it would act as an identifier.

This principle by itself, however, is of limited use. First, it does not allow for more than one recipient, a major drawback for a signing scheme. Second, it was shown that schemes based on partial QKD protocols could be expanded to multiple recipients and were more efficient for signing than secret key exchange via full QKD.156 

The Quantum Key Distribution Key Generation Protocol (QKD KGP) differs from other quantum digital signature schemes in how the quantum information is distributed. It does, however, still follow the usual three-step process of generation, signing, and verification.

In the generation step, rather than Alice sending quantum information to Bob and Charlie, they send it to her.155 This simplifies the security analysis, as it means that Alice cannot send out entangled states. As Alice does in other schemes, Bob and Charlie first generate their own individual random classical private keys for both possible message values (pkB0,pkB1 and pkC0,pkC1), recording the basis in which each element is encoded. They encode these as quantum information using the chosen method (both BB84 states and phase encoding have been used155,156,158), thus forming two sets of separate quantum digital signatures qsB0,qsB1 and qsC0,qsC1.

Bob (Charlie) then performs a partial QKD protocol with Alice.156 He sends his quantum digital signatures to her, and Alice chooses a random basis, based on the encoding method used, in which to measure each element, leaving Alice with two strings of classical elements from Bob (Charlie), sB0 and sB1 (sC0 and sC1). Bob (Charlie) then announces over a classical channel which basis each element of each qs was encoded in, and Alice "sifts" her signatures by discarding any elements that were not measured in the matching basis. Bob (Charlie) discards the corresponding elements of his private key. Two further sections of the measured signatures and corresponding private keys are then sacrificed.156,158 First, Alice and Bob (Charlie) determine the Hamming distance between corresponding sections of their signature and private key. If they are sufficiently correlated (they need not be exactly the same), the process proceeds and those sections are discarded. Second, Alice and Bob (Charlie) compare corresponding sections to check for the errors that an eavesdropper would induce in the measurements. If none are present, these sections are discarded and the process continues. Each test is performed over a classical channel, and if either step fails, the process repeats.
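A minimal sketch of the sifting and sacrifice steps, assuming BB84-style bases labeled 0 and 1 and a noiseless, eavesdropper-free channel:

```python
import random

rng = random.Random(1)
n = 1000

# Bob's private key bits, the bases he encodes them in, and Alice's
# randomly chosen measurement bases (0/1 labels two conjugate bases).
key    = [rng.randint(0, 1) for _ in range(n)]
enc_b  = [rng.randint(0, 1) for _ in range(n)]
meas_b = [rng.randint(0, 1) for _ in range(n)]

# Alice's raw results: deterministic when bases match, random otherwise.
raw = [k if e == m else rng.randint(0, 1)
       for k, e, m in zip(key, enc_b, meas_b)]

# Sifting: keep only elements measured in the announced encoding basis.
sifted = [(k, s) for k, s, e, m in zip(key, raw, enc_b, meas_b) if e == m]

# Sacrifice a section and compute its Hamming distance; on a quiet channel
# it is zero, while an eavesdropper would push it above the agreed threshold.
hamming = sum(k != s for k, s in sifted[:100])
```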

Alice will now have four sets of signatures, sB0,sB1,sC0, and sC1. As the error correction usually present in QKD protocols has not been performed, these measured signatures will not exactly match their private key counterparts. Bob and Charlie then randomly select half of their private keys and forward them to the other over a secret classical channel. From Alice's perspective, each of these keys is, to all intents and purposes, identical, as she does not know what has been forwarded.155 

To send a message, Alice then sends to Bob (mi,sBi,sCi). To verify the authenticity of this, Bob compares sBi with his corresponding private key pkBi and sCi with the half that he received from Charlie. If the fraction of mismatches in both is less than sa, then he deems them authentic. For further verification, he can forward (mi,sBi,sCi) to Charlie who will repeat the process with sv as his threshold.

The security of this scheme relies on the already proven security of QKD, adapting that pre-existing technology for use as a quantum digital signature.156 The key difference from other QDS schemes, the use of two different quantum signatures for each message, improves the efficiency of the scheme over both other QDS schemes and secret key sharing via QKD. Neither Bob nor Charlie can be dishonest and attempt to forge, as neither knows the half of the other's private key that was not sent. This is opposed to other QDS schemes, wherein a forger has access to the whole of the QDS. The different private keys remove the risk of colluding forgers who, in other schemes, would each have had a copy of the QDS from which to try to determine the correct measurement values. The only option for a forger is to eavesdrop, which can be detected from the sacrificed bits. Similarly, Alice cannot commit repudiation, as she does not know who holds which private key elements. Finally, because this scheme does not need the error correction and privacy amplification of a full QKD protocol, it is more tolerant to noise. As such, it can be implemented on QKD-based systems and used in a wider variety of scenarios.

The protocols detailed in this report have all focused on signing a single bit of data. The message is encoded into the quantum digital signature by having a signature for both possible message values. As of now, there is little pressing need to analyze multibit messages, as the focus is on producing a practical single bit protocol. It has been proposed that single bit signing protocols be expanded in a simple manner to sign messages of many bits:159 the signing process is iterated for each bit of the message as a whole, with multiple different signature pairs sent to Bob and Charlie. When Alice sends her multibit message, she sends each bit with a valid private key string. Some papers have, however, raised concerns over this, citing insufficient research in this area.146 Simply iterating the process may therefore weaken the security of the protocol, and may not even be the most efficient way to sign the message.

1. Conflict over protocol iteration

The potential issues surrounding iterating a protocol were first raised by Wang et al.,146 who proposed a multibit signing scheme that "tagged" the ends of a message. This was later returned to and improved upon.160 The work gained attention from other research groups who also saw an issue in single bit iteration, and techniques such as ghost imaging,159 quantum bit commitment,161 and adaptations of Wang's initial technique162 have since been proposed.

The argument for defining how a protocol handles messages longer than a single bit arises because the multibit message as a whole is not encoded anywhere in the signature; only the value of each single bit is encoded. In a classical signature, by contrast, the whole message is encoded, which allows message integrity to be checked. A demonstration of the issues with simply iterating a QDS protocol is the selective attack. Defining a message as a series of iterated single bits with corresponding private key strings, the pair received by Bob (M,PKM) can be broken down as:146 

M = m1 || m2 || ⋯ || mn,(32)
PKM = pk1 || pk2 || ⋯ || pkn,(33)

where m and pk represent the individual bit components of the multibit message and their corresponding private key strings, respectively. The corresponding set of quantum signature strings is, therefore, denoted by

S = qs1 || qs2 || ⋯ || qsn,(34)

where n gives the total bit length of the multibit message and || denotes the concatenation of subsequent components. Wang et al.146 argued that Bob could successfully forge a message by selectively forwarding only certain bit strings. The practical example used in Wang's paper146 was an M sent by Alice consisting of the bits representing the phrase "Don't pay Bob $100." As this message was produced by iterating a protocol for each bit, each bit mi of the message has a corresponding private key string pki, leaving Bob in possession of a signed message with a set of associated correct signatures. Wang purported that Bob may simply forward only the bits (and corresponding private key strings) that represent, for example, the message "Pay Bob $100."146,160 As the private key string accompanying each bit will verify successfully when Charlie checks it against his stored quantum signature measurements, Bob has successfully forged a message from Alice.

This is not the only attack reported by Wang et al.146 If Bob has in his possession two separate message and private key sets, he can separate out the message bits and "stitch" them together in a new order to forge a message. For example, if he receives two messages from Alice, one stating "Pay Bob $100" and another stating "I have $200," Bob can choose to forward only the bits of the first part of the first message together with the latter part of the second to give "Pay Bob $200." As each forwarded bit would have a correct corresponding private key, Charlie would verify this message as correct.
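These attacks are easy to reproduce in a toy model of naive per-bit iteration, in which verification checks each (bit, key) pair individually (all names here are hypothetical placeholders, not constructs from the cited schemes):

```python
def sign_bits(bits, keys):
    # Naive iteration: each message bit is paired with its own key string.
    return list(zip(bits, keys))

def charlie_verifies(pairs, known_valid):
    # Charlie checks each (bit, key) pair individually, with no notion
    # of the message as a whole.
    return all(p in known_valid for p in pairs)

# Two legitimately signed messages (strings stand in for private keys).
msg1 = sign_bits([0, 1, 1, 0], ["k0", "k1", "k2", "k3"])
msg2 = sign_bits([1, 0, 0, 1], ["k4", "k5", "k6", "k7"])
known_valid = set(msg1) | set(msg2)

# Bob stitches the first half of msg1 onto the second half of msg2.
forged = msg1[:2] + msg2[2:]
ok = charlie_verifies(forged, known_valid)   # True: the forgery passes
```

Because every pair is individually valid, the spliced message verifies even though Alice never signed it, which is precisely the weakness the end-tagging proposal below addresses.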

These attacks would clearly breach the security of a quantum digital signature protocol. The unbreakable security derived from the quantum mechanical effects inherent to the system is let down by how the protocol is implemented. Despite the minimal further research to support Wang's claim, the issue it raises remains valid. Although the examples mentioned are simplistic, such attacks could have serious repercussions in real-life applications. For example, if Alice had sent Bob a contract, he could choose not to forward to Charlie the bits relating to sections of the contract he did not approve of. Furthermore, these attacks need not be performed by Bob: if a malicious third party, Eve, were to intercept (M,PKM), she too could commit forgery.

It is clear that work is needed in expanding protocols to consider how to sign multibit messages safely. Although this work is likely not a priority for many research groups until the rapid secure signing of a single bit is developed, it is necessary for the full practical implementation of QDS.

2. “End tagging”

A naive solution would be to record the sequence in which the signatures are sent and to label the message/private key combinations with the corresponding number.162 This would not, however, prevent Bob from omitting information at the end of a message (unless the number of signatures sent exactly matched the number of message bits), nor would it prevent him from "stitching" together messages. The latter would only be made more difficult, as he could take part of one message (say the first five bits of a ten bit message) and "stitch" it together with part of another (the last five bits of a second message).

To solve these issues, Wang et al.146 proposed a protocol to be used as a primitive in an overall multibit signing protocol. Any of the single bit signing schemes detailed in this report can be used to sign the individual bits, and the end tagging process determines how these should be iterated to form a multibit message.

In keeping with the notation for a multibit message given in Sec. IV F 1, the first step of the proposed method is to create a "sufficiently large"160 number of private key strings. Each of these is labeled as corresponding to a 0 or 1 message bit (in the same manner as in single bit signing protocols) and is also sequentially numbered. The method of encoding into quantum information is independent of the rest of the protocol, and, as in single bit QDS protocols, a copy of each signature is generated for each recipient. The distribution of the set of quantum signatures S to the recipients is unaffected by this multibit expansion; any QDS generation and distribution scheme can be used, making this protocol a simple extension of existing schemes. The recipients then measure each signature to create their own sets of measured signature strings.

Where Wang's proposal tackles the issues of multibit signing is in the encoding of the message. The following steps are taken to encode M:146 

  1. Encode any bit with the value of 0 as 00.

  2. Encode any bit with the value of 1 as 01.

  3. Add the codeword 11 to the start and the end of the message.

This results in an encoded message M̂ that has 2n + 4 elements, compared to M's n elements. To sign a message, each bit in the encoded message is assigned a private key string depending on its bit value and its location in the message (e.g., the first bit is assigned the first of the private key strings). Alice then sends Bob the combination of information denoted (M,PKM̂,l), where M is the message before encoding, PKM̂ is the set of private key strings for each bit of the encoded message, and l is the sequence number of the first key.
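The encoding steps can be sketched directly (a minimal illustration of the codeword scheme; the private key assignment is omitted):

```python
def end_tag_encode(bits):
    # Wang-style encoding: 0 -> 00, 1 -> 01, then frame with 11 on both ends.
    body = []
    for b in bits:
        body += [0, b]
    return [1, 1] + body + [1, 1]

encoded = end_tag_encode([1, 0, 1])
# encoded == [1, 1, 0, 1, 0, 0, 0, 1, 1, 1], i.e., length 2n + 4 for n = 3.
# Every body codeword starts with 0, so the pair "11" can never occur
# inside the body and no cut or splice can manufacture a valid framing tag.
```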

Bob then converts M to M̂ in the same manner as Alice did. He then applies the authentication method associated with the encoding used to each measured string si present in the set S. If each string passes authentication, he knows that the message is from Alice and has not been tampered with. For secondary verification, Bob can forward (M,PKM̂,l) on to Charlie for him to authenticate.

The security of the signing of each individual bit is already well established (see Secs. IV A–IV E) and so does not need to be discussed further here. The end tagging and codewords are what enable this protocol to prevent parts of valid message/signature pairs from being used to create forged messages, as described in Sec. IV F 1. In the first attack described there, Bob forged a message by not forwarding the whole message to Charlie, thus changing its meaning; as each message bit had a correct associated private key, Charlie would see it as authentic. The end tagging of each encoded message with the bits 11 prevents this: if Charlie receives a signature set that does not begin and end with two signatures representing 1, he knows that it has been altered. As the message bits are encoded to 00 or 01, there is no place the message/signature can be "cut" in order to produce the 11 needed to mark the beginning and end of a valid message. As the bits are numbered and labeled as to which bit they represent, their order cannot be changed to achieve this either.

3. Quantum temporal ghost imaging

Quantum Temporal Ghost Imaging (QTGI) was developed as a method of speeding up the signing of a multibit message while improving resistance to selective attacks. The initial proposal demonstrated the signing of 10 bits of classical information with a single quantum signature.159 Ghost imaging is a technique in which a single image is created from the outputs of two separate detectors.163 In the classical case, two coherent beams are used in the detection process; in QTGI, two energy-time entangled photons are used instead.

Key to the principle of QTGI are the three layers of encoding within the measurements of the entangled photons.159 The entire measurement run is split into a series of "frames." Each frame is then split into a number of "slots" and each slot into four "bins." The size of each is fixed and determined prior to the measurements, and the bit size of the message that can be encoded is determined by the chosen slot size.
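The three-layer division amounts to simple integer arithmetic on detection times; a sketch with illustrative frame and slot sizes (the actual sizes are fixed by the protocol parameters):

```python
def locate(bin_index, bins_per_slot=4, slots_per_frame=10):
    # Decompose an absolute bin index into (frame, slot, bin) under the
    # fixed three-layer division; slots_per_frame here is illustrative.
    bins_per_frame = bins_per_slot * slots_per_frame
    frame = bin_index // bins_per_frame
    slot = (bin_index % bins_per_frame) // bins_per_slot
    bin_ = bin_index % bins_per_slot
    return frame, slot, bin_

# Alice's low-resolution detector resolves only `frame`; Bob's
# high-resolution detector resolves `bin` within an unknown frame.
frame, slot, bin_ = locate(137)   # 137 = 3*40 + 4*4 + 1 -> (3, 4, 1)
```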

Alice begins by creating an energy-time entangled photon pair, sending one of the photons to Bob (Charlie) and keeping the other. Alice then passes her photon through an intensity modulator, whose period is the same length as a frame, followed by a low-resolution single photon detector (SPD). The bit pattern sent via ghost imaging is the binary pattern of the intensity modulator.

As Alice only has a low-resolution SPD, she can detect only the frame in which her photon was measured. Bob (Charlie), however, possesses a high-resolution SPD and so detects which bin the photons arrive in, but does not know which frames contain photons sent by Alice. Without measuring both photons of the entangled pair, therefore, no one can fully replicate the temporal image of the binary pattern, keeping it secure.

Alice then publicly announces which frames her photons were measured in. Using this, Bob (Charlie) can recreate the binary pattern created by Alice's modulator without that explicit information having ever been sent. By then sacrificing part of their records, Alice and Bob (Charlie) can then determine the presence of an eavesdropper in a manner similar to that performed in QKD protocols. Bob and Charlie then randomly exchange half of their records to prevent repudiation from Alice.

To sign a message of N bit length, Alice then sends to Bob the frame number(s) of the binary string that represents the chosen message.159 From his own records, Bob can then verify the message, forwarding to Charlie if he requires further verification.

Thus, through the recreation of the temporal image of the intensity modulator pattern via QTGI, a multibit message can be signed with a single signature.159 

Although the quantum digital signature schemes discussed so far are in theory completely secure, many of them have limitations. For example, assumptions required to simplify the problem in theoretical models, or even laboratory experiments, are detrimental in practical implementations. Assumptions regarding channel security allow the focus to rest solely on malicious actions by Alice, Bob, or Charlie; in reality, it will not be the case that only one of those three is an attacker. Other considerations include the practical implementation of such schemes and the side channel attacks that can be performed on them. These simplifications are necessary in order to create the theoretical models, but for quantum digital signatures to become a viable technology, the models must be built upon.

1. Insecure channels

With the exception of the QKD-based scheme (see Sec. IV E), each of the schemes so far has assumed that quantum information is sent over an authenticated quantum channel;164 namely, that the information sent is always the same as the information received. This simplifies analysis, as it ensures that there is no "Eve" (a forger intercepting the information). While the same protections against forgery by Bob (Charlie) would still apply to Eve, there is no way for the recipients to detect that an interception has occurred until the verification step, at which point the individual who received the legitimate signature would disagree with the one who did not.

While there are (costly) methods of implementing an authenticated quantum channel,143 this need not always be necessary. First of all, we can take a leaf from the book of QKD. As discussed in Sec. IV E, Alice and Bob (Charlie) can sacrifice a section of the received QDS to ensure that they are sufficiently correlated. If only the expected level of error (based on the authentication threshold) is present, they can be sure that no eavesdropping has occurred; if the error is greater, they know that Eve is present and can restart the process on a different channel.156 

The QKD-based scheme offers other methods that can be used to bypass the issue of insecure channels in any scheme, in particular the sending of two separate quantum digital signatures from Bob and Charlie to Alice.155 Assuming that Eve does not intercept both quantum channels, she does not have access to the whole quantum digital signature. This does not allow Eve to be detected before the verification step, but it ensures that as long as she lacks access to at least one quantum channel, she cannot commit forgery.

Finally, the initial proposal for the QKD-based QDS scheme included the principle of sending decoy states.164 A scheme that includes decoy states proceeds with the generation of the two copies of a potential message's quantum signature, but before the pulses are dispatched, they pass through a separate amplitude modulator. This randomly and independently changes the intensity of each signature element pulse to one of three possible values, μ, ν, or 0, each with its own defined probability. Only the case in which both Bob and Charlie receive μ-intensity states is deemed the signal state; the other intensity combinations are decoy states. Alice then announces the intensity of each pulse, allowing Bob and Charlie to discard any states that were decoys. To an attacker, however, there is no way of determining which states are decoys and which are not, thus circumventing the security concerns surrounding the use of insecure quantum channels.
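The decoy-state modulation and sifting step can be sketched as follows. The intensity values, probability distribution, and labels ("mu", "nu", "vac") are illustrative assumptions, not parameters taken from the proposal.

```python
import random

INTENSITIES = {"mu": 0.6, "nu": 0.1, "vac": 0.0}   # illustrative intensity values
PROBS = {"mu": 0.7, "nu": 0.2, "vac": 0.1}         # illustrative probabilities

def modulate(n, rng):
    """Randomly and independently assign an intensity label to each pulse."""
    labels = list(PROBS)
    weights = list(PROBS.values())
    return rng.choices(labels, weights=weights, k=n)

def sift_signal_states(to_bob, to_charlie):
    """After Alice announces the intensities, keep only positions where BOTH
    copies of the signature element were sent at the signal intensity mu;
    every other combination is a decoy and is discarded."""
    return [i for i, (b, c) in enumerate(zip(to_bob, to_charlie))
            if b == "mu" and c == "mu"]

rng = random.Random(0)
bob_pulses = modulate(20, rng)
charlie_pulses = modulate(20, rng)
signal_positions = sift_signal_states(bob_pulses, charlie_pulses)
```

Before the announcement, an attacker sees only a stream of pulses with randomly varying intensities and cannot tell signal from decoy, which is the point of the technique.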

2. Measurement device independent

As is the case with many concepts in cryptography, a scheme may be secure in theory while its implementation contains exploitable weaknesses. Attacks on such weaknesses are known as side channel attacks. In classical cryptography, an example would be analyzing the power consumption of a CPU during encryption in order to determine further information about the process used. Because theoretical models and laboratory tests often do not consider them, side channel attacks are, in fact, common in the field of quantum cryptography.140,143 As there is no way to breach the encoding of the information itself, an attacker must look for exploits elsewhere.

Measurement Device Independent (MDI) schemes bypass these issues by routing all quantum communications via an untrusted central relay, Eve,165 as shown in Fig. 10. None of the other parties perform any measurements, so they no longer need to be concerned with detector-based side channel attacks, such as detector blinding, as the relay is treated as a "black box."

Fig. 10.

Representation of the communication channels between Alice, Bob, Charlie, and the central relay Eve in MDI QDS schemes. Blue arrows represent the quantum communication channels, and red represent the classical channels. Single-headed arrows represent one-way communication channels, and double-headed arrows represent two-way channels.


To do this, Alice performs the QKD KGP for the QDS technique described in Sec. IV E with Eve. For each state sent, Alice applies random intensity modulation to create a series of decoy states and one signal state (as described in Sec. IV G 1). This is known as a Measurement Device Independent Key Generation Protocol (MDI KGP) and is based on MDI QKD protocols.152,158

Eve announces the results of each measurement and their intensity over a public channel. Alice and Bob(Charlie) then communicate over an authenticated classical channel which intensity state was the signal state as well as which basis was used,165 thus generating a signature without direct quantum communication. Bob and Charlie symmetrize their signature strings, and the scheme proceeds to the messaging and verification stages.
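The announcement-and-sift step of the MDI KGP can be sketched classically as below. The record fields and the exact keep-conditions (relay success, both parties at the signal intensity, matching bases) are illustrative assumptions about the bookkeeping, not the protocol's precise filter.

```python
def sift(rounds, signal_intensity="mu"):
    """Keep only rounds where the untrusted relay reported a successful
    measurement, both parties used the signal intensity, and their
    preparation bases matched."""
    return [r for r in rounds
            if r["relay_success"]
            and r["alice_intensity"] == signal_intensity
            and r["bob_intensity"] == signal_intensity
            and r["alice_basis"] == r["bob_basis"]]

rounds = [
    {"relay_success": True,  "alice_intensity": "mu", "bob_intensity": "mu",
     "alice_basis": "Z", "bob_basis": "Z"},   # kept: signal round
    {"relay_success": True,  "alice_intensity": "nu", "bob_intensity": "mu",
     "alice_basis": "Z", "bob_basis": "Z"},   # discarded: decoy intensity
    {"relay_success": False, "alice_intensity": "mu", "bob_intensity": "mu",
     "alice_basis": "X", "bob_basis": "X"},   # discarded: no relay detection
    {"relay_success": True,  "alice_intensity": "mu", "bob_intensity": "mu",
     "alice_basis": "X", "bob_basis": "Z"},   # discarded: basis mismatch
]
kept = sift(rounds)
```

Only the surviving rounds contribute to the signature strings; everything announced publicly (relay outcomes, intensities, bases) reveals nothing about the retained key bits.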

3. Expanding to multiple parties

Each scheme discussed so far has involved at most three communicating parties: a sender of the message (Alice) and two recipients (Bob and Charlie). In a realistic communication scenario, this will not suffice;152 a mass message authenticated with a quantum digital signature will have more than two recipients for the same signature.

However, this raises two concerns. The first is practical: in the schemes described above, each pair of users in a quantum communication network requires a quantum channel, scaling as N(N − 1)/2 links for N users.152 As N grows, this number quickly becomes impractical to implement. A solution to this issue could be the network architecture discussed in Sec. IV G 2: rather than each pair of users sharing a quantum channel, each user instead shares one quantum channel with an untrusted central relay, reducing the number of required channels to N − 1 for an N-node network.
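The channel-count scaling is easy to check numerically; following the text, the relay is counted among the N nodes, so the relay architecture needs N − 1 links.

```python
def pairwise_links(n):
    """Direct quantum channels between every pair of the n users."""
    return n * (n - 1) // 2

def relay_links(n):
    """One quantum channel per end user to the untrusted central relay,
    the relay itself being counted as one of the n nodes."""
    return n - 1

for n in (3, 10, 100):
    print(n, pairwise_links(n), relay_links(n))
# prints: 3 3 2 / 10 45 9 / 100 4950 99
```

The quadratic-versus-linear gap is what makes the relay architecture attractive as networks grow.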

The second issue is a security concern. If Alice sends the same quantum digital signature to each of N recipients, then up to N − 1 of them can collude to attempt a forgery against the others. Due to the uncertain nature of measurements of the quantum signature, no single recipient holds a fully correct signature; by pooling their results, however, multiple malicious parties can improve their chances of a successful forgery. This issue was first addressed in Ref. 167, where a generic multiparty scheme was first proposed; Ref. 168 further expanded upon this concept.

Using the decoy state KGP scheme as a basis (see Sec. IV G 1), Ref. 167 expands from two recipients to a general case of N recipients. To achieve this, each recipient generates their own private key for each possible message, generates a quantum signature for each, and distributes it to Alice using the process detailed in Sec. IV G 1. Each recipient then randomly chooses half of the bits in their private key and forwards them to each other recipient, so that the final private key held by each recipient is (N − 1)L/2 bits long. From Alice's viewpoint, every private key therefore looks exactly the same, while no recipient can ever know all the private keys, as each of the others kept half of theirs secret. Therefore, even if N − 1 recipients colluded, they could not successfully forge a signature from Alice. The proposal also discusses a system of security levels to quantify how often a signature can be forwarded and remain safe (as this affects the authentication threshold),167 together with a majority voting protocol for resolving disputes, thus fully outlining the protocols required to generalize a QDS protocol to any number of recipients.
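The key symmetrization step can be sketched classically as follows. The data layout, (owner, index, bit) tuples pooled per recipient, is an illustrative assumption; the check at the end confirms the (N − 1)L/2 final key length quoted above.

```python
import random

def symmetrize(keys, rng):
    """Each recipient forwards a randomly chosen half of their private key
    to every other recipient, keeping the remaining half secret.
    Returns, per recipient, the pooled (owner, index, bit) tuples received."""
    n = len(keys)
    received = [[] for _ in range(n)]
    for owner, key in enumerate(keys):
        half = rng.sample(range(len(key)), len(key) // 2)
        for other in range(n):
            if other != owner:
                received[other].extend((owner, i, key[i]) for i in half)
    return received

rng = random.Random(3)
L, N = 8, 4                       # toy key length and number of recipients
keys = [[rng.randint(0, 1) for _ in range(L)] for _ in range(N)]
pooled = symmetrize(keys, rng)

# Each recipient ends up holding (N - 1) * L / 2 forwarded bits, and no
# recipient ever sees the half of a key that its owner kept back.
assert all(len(p) == (N - 1) * L // 2 for p in pooled)
```

Because each owner withholds half of every key, even a coalition of N − 1 recipients is missing bits of the honest party's key, which is what blocks the collusion attack.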

4. Arbitrated quantum signatures

Signature schemes can be split into two varieties, true and arbitrated (as discussed in Sec. II). The QDS schemes discussed so far in this paper fall into the former category. Arbitrated digital signatures, however, require a third party (who need not be trusted) for the verification of any signature. Due to this key difference, arbitrated schemes allow potential conflicts to be resolved via the impartial arbitrator. Through the initial work of Zeng and Keitel, this concept was expanded to create Arbitrated Quantum Signatures (AQS).169 

In the generation stage of the initial AQS scheme, Alice and Bob each generate a private key that they share only with the arbitrator (dubbed K_A and K_B, respectively). The arbitrator then generates a set of GHZ (Greenberger–Horne–Zeilinger) entangled particle triplets,169 keeping one particle of each for themselves and sending the others to Alice and Bob.

In the signing phase, Alice entangles the quantum state representation of the message she wishes to sign, |P⟩, with her GHZ particle. She then measures her particle and records the result, M_A. Alice then encrypts her message |P⟩ using K_A to create |R⟩. To sign her message, Alice sends it alongside |QS⟩ = K_A(M_A, |R⟩) to Bob.170 

In the verification stage, Bob cannot verify the authenticity of the message himself, as he does not possess K_A and so cannot decrypt |QS⟩.169 Instead, he encrypts it with his own key, sending this alongside the measurement result of his own GHZ particle and his copy of the message to the arbitrator. This is denoted y_B = K_B(M_B, |P⟩, |QS⟩).

Possessing both private keys, the arbitrator can decipher both y_B and |QS⟩.170 Using their knowledge of K_A and the copy of |P⟩ received from Bob, the arbitrator creates their own copy of |R⟩, denoted |R′⟩. If |R′⟩ = |R⟩, then the signature is successfully verified by the arbitrator, who then passes this information, alongside the measurement result of their own GHZ particle, M_T, to Bob.

Bob can then provide further verification by using M_A, M_B, and M_T to generate his own copy of |P⟩ to check against the one he received from Alice.170 
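As a purely classical sketch of this message flow, the quantum states and encryptions can be stood in for by bitstrings and XOR one-time pads. This toy code mirrors only the bookkeeping of who can decrypt what (Bob cannot read |QS⟩, the arbitrator can), not the quantum mechanics, and it omits the GHZ measurements entirely; all key and message values are arbitrary.

```python
import random

def otp(key, data):
    """XOR one-time pad; a classical stand-in for the quantum encryption K(.)."""
    return [k ^ d for k, d in zip(key, data)]

rng = random.Random(11)
L = 16
K_A = [rng.randint(0, 1) for _ in range(L)]   # shared: Alice <-> arbitrator
K_B = [rng.randint(0, 1) for _ in range(L)]   # shared: Bob   <-> arbitrator

P = [rng.randint(0, 1) for _ in range(L)]     # message, stand-in for |P>

# Signing: Alice encrypts P under K_A and sends (message, R) to Bob.
R = otp(K_A, P)

# Verification: Bob cannot read R without K_A, so he wraps it under K_B
# and forwards everything to the arbitrator.
y_B = otp(K_B, R)

# Arbitrator: unwrap with K_B, then recompute R' from Bob's copy of P.
R_unwrapped = otp(K_B, y_B)
R_prime = otp(K_A, P)
assert R_unwrapped == R_prime     # signature verifies at the arbitrator
```

The comparison of R_unwrapped against R_prime plays the role of the arbitrator's |R′⟩ = |R⟩ check; a tampered message or forged signature would make the two disagree.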

This scheme prevents forgery because none of the information sent between parties reveals either of the private keys.169 Thus, provided Alice and Bob communicate their keys to the arbitrator securely, e.g., via QKD, no attacker can successfully forge a signature. Message tampering likewise cannot succeed, as there is no way of affecting the entangled particles used.169 

AQS schemes have been further developed since their inception. The concept has been altered to allow variations such as message recovery,170 the entangled states exchanged for Bell states,171 and the addition of publicly declared information.172 

The field of quantum digital signatures is a relatively new area of research, but in the time it has existed, it has developed from a purely theoretical concept into (albeit limited) practical implementations over 100 km of fiber.166 Arguably, the most important advance enabling such feats is random forwarding. While simplistic in design, it reduced the quantum technology required for QDS to that already well tested in QKD. This paved the way for later schemes that improved upon the basic random forwarding scheme detailed in Sec. IV D (Table V), either with better hardware or by using it as a primitive for new concepts such as the QKD KGP scheme, which currently boasts one of the quickest signing rates over the longest distance.

Table V.

Summary of the figures of merit of the signature schemes referenced in this review. Only experimental results, not theoretical estimates of proposed schemes, are included. "Distance" refers to the distance between Alice and Bob or Charlie; "Signature length" refers to the bit length of the signature required to sign a single-bit message.

Author | Scheme summary | Distance (km) | Signature length | Time to sign (s) | Clock rate (Hz) | Security level
Collins et al.151 | Multiport with USE | 0.005 | 5.1 × 10^13 | | 100 × 10^6 | 10^−2
Collins et al.155 | DPS (differential phase shift) QKD | 90 | 2502 | | 10^9 | 10^−4
Donaldson et al.145 | USE based postmeasurement random forwarding | 0.5 | 1.93 × 10^9 | 20 | | 10^−2
Croal et al.157 | Heterodyne measurements | 1.6 | 7 × 10^4 | | 2.2 × 10^6 | 10^−2
Yin et al.166 | Decoy state | 102 | 2.5 × 10^12 | 33 420 | | 10^−5
Roberts et al.152 | MDI-QKD | 25 | 103 336 | 36 | | 
Yin et al.158 | MDI-QDS (MDI-KGP) | | 787 468 | | | 10^−7
Yao et al.159 | Temporal ghost imaging | | 93.9 (signs 10 message bits at a time) | | | 10^−4

Quantum digital signatures, however, are by no means ready for full commercial use. No current scheme solves all of the issues that would allow QDS to move into broader practical use. To be viable, a scheme must not rely on the simplifying assumptions made in many papers: it must work over insecure channels, be immune to side channel attacks, and securely accommodate both multibit messages and multiple users. The MDI KGP scheme detailed in Sec. IV G 3 (with the recipients sending the signatures to Alice) currently comes closest to achieving this. It is secure against forging and repudiation and, if adapted to include the multibit signing techniques detailed in Sec. IV F, would allow many-bit messages to be sent to many recipients. No one has yet, however, attempted to implement such a scheme in practice.

Signatures play a vital role in the security and trustworthiness of communications, and moving forward, there are valid concerns about the long-term reliability of the digital signature schemes that have been widely adopted. Solutions basing their security on quantum mechanics, rather than complex mathematics, are appealing but are far from commercial readiness, and post-quantum digital signatures form an optimal middle ground while quantum technologies mature.

R.J.Y. acknowledges support from the Royal Society through a University Research Fellowship (No. UF160721). This material was supported by the Air Force Office of Scientific Research under Award No. FA9550-16-1-0276. This work was also supported by grants from The Engineering and Physical Sciences Research Council in the UK (Nos. EP/K50421X/1 and EP/L01548X/1).

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

1.
J.
Norman
,
The Earliest Autograph Signatures
, https://www.historyofinformation.com/detail.php?entryid=2614.
2.
W.
Diffie
and
M.
Hellman
,
IEEE Trans. Inf. Theory
22
,
644
(
1976
).
3.
Abelian Foundation,
Anonymous Digital Signatures and Their Application in Cryptocurrency
(
Abelian Foundation
,
2019
), https://medium.com/abelian/anonymous-digital-signatures-and-their-application-in-cryptocurrency-4e20d9625aa7.
4.
W.
Diffie
,
Proc. IEEE
76
,
560
(
1988
).
5.
D.
Clarke
and
T.
Martens
, “
E-voting in Estonia
,”
Real-World Electronic Voting: Design, Analysis and Deployment
(
CRC Press
,
Boca Raton, FL
,
2016
), pp.
129
141
.
6.
R. L.
Rivest
,
A.
Shamir
, and
L.
Adleman
,
Commun. ACM
21
,
120
(
1978
).
7.
T.
ElGamal
,
IEEE Trans. Inf. Theory
31
,
469
(
1985
).
8.
M. O.
Rabin
, Technical Report No. MIL/LCS/TR212 (Massachusetts Institute of Technology, Cambridge Lab for Computer Science,
1979
).
9.
A.
Fiat
and
A.
Shamir
,
Conference on the Theory and Application of Cryptographic Techniques
(
Springer
,
Berlin, Heidelberg
,
1986
), pp.
186
194
.
10.
S.
Goldwasser
,
S.
Micali
, and
R. L.
Rivest
,
SIAM J. Comput.
17
,
281
308
(
1988
).
11.
S.
Goldwasser
,
S.
Micali
, and
R. L.
Rivest
,
Workshop on the Theory and Application of Cryptographic Techniques
(
Springer
,
Berlin
,
1984
), pp.
467
467
.
12.
S.
Wright
,
Quadratic Residues and Non-Residues
(
Springer International Publishing
,
Switzerland
,
2016
).
13.
I. B.
Damgård
,
Workshop on the Theory and Application of Cryptographic Techniques
(
Springer
,
Berlin
,
1987
), pp.
203–216
.
14.
That this seems slightly paradoxical was certainly not lost on them, as seen by the title of their paper.
15.
W.
Penard
and
T.
van Werkhoven
, “
On the secure hash algorithm family
,”
Cryptography in Context
(
Wiley
,
Newyork
,
2008
), pp.
1
18
.
16.
R.
Rivest
and
S.
Dusse
,
The md5 Message-Digest Algorithm
(
MIT Laboratory for Computer Science
,
Cambridge
,
1992
).
17.
H.
Dobbertin
,
A.
Bosselaers
, and
B.
Preneel
,
International Workshop on Fast Software Encryption
(
Springer
,
Berlin
,
1996
), pp.
71
82
.
18.
M.
Bellare
and
P.
Rogaway
,
International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
1996
), pp.
399
416
.
19.
J.
Jonsson
and
B.
Kaliski
,
Public-Key Cryptography Standards (PKCS) #1: RSA Cryptography Specifications Version 2.1
(RFC Editor, Fremont, CA,
2003
).
20.
S.
Goldwasser
,
S.
Micali
, and
C.
Rackoff
,
SIAM J. Comput.
18
,
186
(
1989
).
21.
H.
Delfs
and
H.
Knebl
,
Introduction to Cryptography: Principles and Applications
(
Springer
,
Berlin
,
2015
).
22.
Corporate NIST,
Commun. ACM
35
,
36
(
1992
).
23.
R.
Gibson
,
Polit. Sci. Q.
116
,
561
(
2001
).
24.
J.
Kitcat
and
I.
Brown
,
Parliamentary Aff.
61
,
380
(
2008
).
25.
T.
Fujiwara
,
Econometrica
83
,
423
(
2015
).
26.
S.
Wolchok
,
E.
Wustrow
,
J. A.
Halderman
,
H. K.
Prasad
,
A.
Kankipati
,
S. K.
Sakhamuri
,
V.
Yagati
, and
R.
Gonggrijp
,
Proceedings of the 17th ACM Conference on Computer and Communications Security
(
Association for Computing Machinery
,
New York, NY, USA
,
2010
).
27.
N.
Koblitz
,
Math. Comput.
48
,
203
(
1987
).
28.
D.
Johnson
,
A.
Menezes
, and
S.
Vanstone
,
Int. J. Inf. Secur.
1
,
36
(
2001
).
29.
S.
Blake-Wilson
,
N.
Bolyard
,
V.
Gupta
,
C.
Hawk
,
B.
Moeller
, and
R.-U.
Bochum
,
RFC4492: Elliptic Curve Cryptography (ECC) Cipher Suites for Transport Layer Security (TLS)
(
RFC Editor
,
Fremont, CA
,
2006
).
30.
R. C.
Merkle
,
Conference on the Theory and Application of Cryptology
(
Springer
,
Berlin
,
1989
), pp.
218
238
.
31.
G. J.
Simmons
,
Contemporary Cryptology: The Science of Information Integrity
(
IEEE
,
New York
,
1994
).
32.
D.
Deutsch
,
Proc. R. Soc. London. A
400
,
97
(
1985
).
33.
P. W.
Shor
,
Proceedings 35th Annual Symposium on Foundations of Computer Science
,
Santa Fe, NM, USA
(
1994
), pp.
124
-
134
.
34.
F. X.
Lin
,
Shor's Algorithm and the Quantum Fourier Transform
(
McGill University
,
Montreal
,
2014
).
35.
C.
Pomerance
,
Biscuits Number Theory
85
,
175
(
2008
).
36.
P. W.
Shor
,
SIAM Rev.
41
,
303
(
1999
).
37.
R.
Jozsa
,
Comput. Sci. Eng.
3
,
34
(
2001
).
38.
M.
Grigni
,
L.
Schulman
,
M.
Vazirani
, and
U.
Vazirani
, “,”
Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing
(
2001
), pp.
68
74
.
39.
C.
Lomont
, e-print arXiv:quant-ph/0411037 (
2004
).
40.
O.
Regev
, CoRR, cs.DS/0304005 (
2003
).
41.
L. K.
Grover
,
Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing
(
1996
), pp.
212
219
.
42.
G.
Brassard
,
P.
Høyer
, and
A.
Tapp
,
Latin American Symposium on Theoretical Informatics
(
Springer
,
Berlin
,
1998
), pp.
163
169
.
43.
D. J.
Bernstein
,
N.
Heninger
,
P.
Lou
, and
L.
Valenta
,
International Workshop on Post-Quantum Cryptography
(
Springer
,
Berlin
,
2017
), pp.
311
329
.
44.
D. J.
Bernstein
, “
Introduction to post-quantum cryptography
,”
Post-Quantum Cryptography
(
Springer
,
Berlin
,
2009
), pp.
1
14
.
45.
S. P.
Jordan
and
Y.-K.
Liu
,
IEEE Secur. Privacy
16
,
14
(
2018
).
46.
E.
Martín-López
,
A.
Laing
,
T.
Lawson
,
R.
Alvarez
,
X.-Q.
Zhou
, and
J. L.
O'Brien
,
Nat. Photonics
6
,
773
(
2012
).
47.
Z.
Li
,
N. S.
Dattani
,
X.
Chen
,
X.
Liu
,
H.
Wang
,
R.
Tanburn
,
H.
Chen
,
X.
Peng
, and
J.
Du
, e-print arXiv:1706.08061, [quant-ph] (
2017
).
48.
G.
Alagic
 et al.,
Status Report on the First Round of the NIST Post-Quantum Cryptography Standardization Process
(
US Department of Commerce, National Institute of Standards and Technology
,
Washington, DC
,
2019
).
49.
National Institute of Standards and Technology
, see https://csrc.nist.gov/CSRC/media/Projects/Post-Quantum-Cryptography/documents/call-for-proposals-final-dec-2016.pdf for “Submission Requirements and Evaluation Criteria for the Post-Quantum Cryptography Standardization Process,
2016
.”
51.
National Institute of Standards and Technology
, see https://csrc.nist.gov/projects/post-quantum-cryptography/round-3-submissions for “Round 3 Submissions-Post-Quantum Cryptography,
2020
.”
52.
J.
Ding
,
M.-S.
Chen
,
A.
Petzoldt
,
D.
Schmidt
, and
B.-Y.
Yang
, see https://csrc.nist.gov/CSRC/media/Presentations/Rainbow/images-media/Rainbow-April2018.pdf for “Rainbow: NIST PQC Submission,
2020
.”
53.
V.
Lubashevsky
,
L.
Ducas
,
E.
Kiltz
,
T.
Lepoint
,
P.
Schwabe
,
G.
Siler
, and
D.
Stehle
, see https://pq-crystals.org/dilithium/index.shtml for “Dilithium: NIST PQC Submission,
2020
.”
54.
T.
Prest
,
P.-A.
Fouque
,
J.
Hoffstein
,
P.
Kirchner
,
V.
Lubashevsky
,
T.
Pronin
,
T.
Ricosset
,
G.
Siler
,
W.
Whyte
, and
Z.
Zhang
, see https://falcon-sign.info/ for “Falcon: NIST PQC Submission,
2020
.”
55.
S.
Samardjiska
,
M.-S.
Chen
,
A.
Hülsing
,
J.
Rijneveld
, and
P.
Schwabe
, see http://mqdss.org/index.html for “MQDSS: NIST PQC Submission,
2020
.”
56.
G.
Zaverucha
,
M.
Chase
,
D.
Derler
,
S.
Goldfeder
,
C.
Orlandi
,
S.
Ramacher
,
C.
Rechberger
,
D.
Slamanig
,
J.
Katz
,
X.
Wang
,
V.
Kolesnikov
, and
D.
Kales
, see https://www.microsoft.com/en-us/research/project/picnic/ for “Picnic: NIST PQC Submission,
2020
.”
57.
A.
Hülsing
,
D. J.
Bernstein
,
C.
Dobraunig
,
M.
Eichlseder
,
S.
Fluhrer
,
S.-L.
Gazdag
,
P.
Kampanakis
,
S.
Kolbl
,
T.
Lange
,
M. M.
Laurisdsen
,
F.
Mendel
,
R.
Niederhagen
,
C.
Rechberger
,
J.
Rijneveld
,
P.
Schwabe
, and
J.-P.
Aumasson
, see https://sphincs.org/index.html for “Sphincs+: NIST PQC Submission,
2020
.”
58.
H.
Imai
and
T.
Matsumoto
,
International Conference on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes
(
Springer
,
Berlin
,
1985
), pp.
108
119
.
59.
J.
Patarin
,
Annual International Cryptology Conference
(
Springer
,
Berlin
,
1995
), pp.
248
261
.
60.
C.
Wolf
,
Proceedings of YACC
(
2006
), pp.
44
55
, https://eprint.iacr.org/2015/275.pdf.
61.
H. R.
Lewis
,
J. Symbolic Logic
48
,
498
(
1983
).
62.
T.
Yasuda
,
X.
Dahan
,
Y.-J.
Huang
,
T.
Takagi
, and
K.
Sakurai
,
IACR Cryptol.
2015
,
275
.
63.
J.
Patarin
,
Dagstuhl Workshop on Cryptography
, September (
1997
).
64.
A.
Kipnis
,
J.
Patarin
, and
L.
Goubin
,
International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
1999
), pp.
206
222
.
65.
C.
Wolf
, ““
Hidden field equations” (HFE)-variations and attacks
,” Ph.D. thesis (
Verlag nicht ermittelbar
,
2002
).
66.
J.-C.
Faugere
,
F.
Levy-Dit-Vehel
, and
L.
Perret
,
Annual International Cryptology Conference
(
Springer
,
Berlin
,
2008
), pp.
280
296
.
67.
V.
Dubois
,
L.
Granboulan
, and
J.
Stern
,
International Workshop on Public Key Cryptography
(
Springer
,
Berlin
,
2007
), pp.
249
265
.
68.
O.
Billet
and
J.
Ding
, “
Overview of cryptanalysis techniques in multivariate public key cryptography
,”
Gröbner Bases, Coding, and Cryptography
(
Springer
,
Berlin
,
2009
), pp.
263
283
.
69.
M.
Ajtai
,
Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing
(
1996
), pp.
99
108
.
70.
J.
Hoffstein
,
J.
Pipher
, and
J. H.
Silverman
,
International Algorithmic Number Theory Symposium
(
Springer
,
Berlin
,
1998
), pp.
267
288
.
71.
O.
Regev
,
J. ACM
56
,
1
(
2009
).
72.
C.
Peikert
 et al.,
Found. Trends® Theor. Comput. Sci.
10
,
283
(
2016
).
73.
V.
Lyubashevsky
,
C.
Peikert
, and
O.
Regev
,
Annual International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
2010
), pp.
1
23
.
74.
C.
Grover
, “
LWE over cyclic algebras: A novel structure for lattice cryptography
,” Ph.D. thesis (
Imperial College London
,
2020
).
75.
K.
Philip
,
Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms
(
2000
), pp.
937
941
.
76.
J.
Howe
,
A.
Khalid
,
C.
Rafferty
,
F.
Regazzoni
, and
M.
O'Neill
,
IEEE Trans. Comput.
67
,
322
(
2018
).
77.
T.
Prest
, “
Gaussian sampling in lattice-based cryptography
,”
Ph.D.
thesis (
Ecole Normale Supérieure
,
2015
).
78.
M.
Ajtai
,
Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing
(
1998
), pp.
10
19
.
79.
D.
Aharonov
and
O.
Regev
,
J. ACM
52
,
749
(
2005
).
80.
M. R.
Albrecht
and
A.
Deo
,
International Conference on the Theory and Application of Cryptology and Information Security
(
Springer
,
Berlin
,
2017
), pp.
267
296
.
81.
M.
Roşca
,
A.
Sakzad
,
D.
Stehlé
, and
R.
Steinfeld
,
Annual International Cryptology Conference
(
Springer
,
Berlin
,
2017
), pp.
283
297
.
82.
C.
Grover
,
C.
Ling
, and
R.
Vehkalahti
, e-print arXiv:2008.01834 (
2020
).
83.
C.
Peikert
,
International Conference on Security and Cryptography for Networks
(
Springer
,
Berlin
,
2016
), pp.
411
430
.
84.
A.
Langlois
and
D.
Stehlé
,
Des. Codes Cryptogr.
75
,
565
(
2015
).
85.
A.
Roux-Langlois
, “
Lattice-based cryptography-security foundations and constructions,” Ph.D.
thesis (
Ecole Normale Supérieure de Lyon
,
2014
).
86.
C.
Gentry
,
C.
Peikert
, and
V.
Vaikuntanathan
,
Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing
(
2008
), pp.
197
206
.
87.
O.
Goldreich
,
S.
Goldwasser
, and
S.
Halevi
,
Annual International Cryptology Conference
(
Springer
,
Berlin
,
1997
), pp.
112
131
.
88.
J.
Hoffstein
,
N.
Howgrave-Graham
,
J.
Pipher
,
J. H.
Silverman
, and
W.
Whyte
,
Cryptographers' Track at the RSA Conference
(
Springer
,
Berlin
,
2003
), pp.
122
140
.
89.
N.
Phong
,
Annual International Cryptology Conference
(
Springer
,
Berlin
,
1999
), pp.
288
304
.
90.
C.
Gentry
and
M.
Szydlo
,
International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
2002
), pp.
299
320
.
91.
L.
Ducas
and
P. Q.
Nguyen
,
International Conference on the Theory and Application of Cryptology and Information Security
(
Springer
,
Berlin
,
2012
), pp.
433
450
.
92.
L.
Babai
,
Combinatorica
6
,
1
(
1986
).
93.
S.
Katsumata
,
S.
Yamada
, and
T.
Yamakawa
,
International Conference on the Theory and Application of Cryptology and Information Security
(
Springer
,
Berlin
,
2018
), pp.
253
282
.
94.
P.-A.
Fouque
 et al.,
NIST's Post-Quantum Cryptogr. Stand. Process
(
2018
), https://csrc.nist.gov/projects/post-quantum-cryptography.
95.
L.
Ducas
and
T.
Prest
,
Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation
(
2016
), pp.
191
198
.
96.
S.
Bai
, and and
S. D.
Galbraith
,
Cryptographers' Track at the RSA Conference
(
Springer
,
Berlin
,
2014
), pp.
28
47
.
97.
V.
Lyubashevsky
,
International Conference on the Theory and Application of Cryptology and Information Security
(
Springer
,
Berlin
,
2009
), pp.
598
616
.
98.
V.
Lyubashevsky
and
D.
Micciancio
,
Theory of Cryptography Conference
(
Springer
,
Berlin
,
2008
), pp.
37
54
.
99.
L.
Ducas
,
A.
Durmus
,
T.
Lepoint
, and
V.
Lyubashevsky
,
Annual Cryptology Conference
(
Springer
,
Berlin
,
2013
), pp.
40
56
.
100.
T.
Güneysu
,
V.
Lyubashevsky
, and
T.
Pöppelmann
,
International Workshop on Cryptographic Hardware and Embedded Systems
(
Springer
,
Berlin
,
2012
), pp.
530
547
.
101.
V.
Lyubashevsky
,
Annual International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
2012
), pp.
738
755
.
102.
C. P.
Schnorr
, “
Progress on LLL and lattice reduction
,”
The LLL Algorithm
(
Springer
,
Berlin
,
2009
), pp.
145
178
.
103.
S.
Lyu
,
C.
Porter
, and
C.
Ling
, e-print arXiv:1806.03113 (
2018
).
104.
Y.
Aono
,
P. Q.
Nguyen
, and
Y.
Shen
,
International Conference on the Theory and Application of Cryptology and Information Security,
(
Springer
,
Berlin
,
2018
), pp.
405
434
.
105.
T.
Laarhoven
,
Annual Cryptology Conference
(
Springer
,
Berlin
,
2015
), pp.
3
22
.
106.
T.
Laarhoven
,
M.
Mosca
, and
J.
Van De Pol
,
Des., Codes Cryptogr.
77
,
375
400
(
2015
).
107.
D.
Joseph
,
A.
Ghionis
,
C.
Ling
, and
F.
Mintert
,
Phys. Rev. Res.
2
,
013361
(
2020
).
108.
D.
Joseph
,
A.
Callison
,
C.
Ling
, and
F.
Mintert
, e-print arXiv:2006.14057 (
2020
).
109.
M.
Ajtai
,
R.
Kumar
, and
D.
Sivakumar
,
Proceedings 17th IEEE Annual Conference on Computational Complexity
(
IEEE
,
Montreal
,
2002
), pp.
53
57
.
110.
D.
Aggarwal
,
D.
Dadush
,
O.
Regev
, and
N.
Stephens-Davidowitz
,
Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing
(
2015
), pp.
733
742
.
111.
J.
Suo
,
L.
Wang
,
S.
Yang
,
W.
Zheng
, and
J.
Zhang
,
Quantum Inf. Process.
19
,
178
(
2020
).
112.
L.
Ducas
,
E.
Kiltz
,
T.
Lepoint
,
V.
Lyubashevsky
,
P.
Schwabe
,
G.
Seiler
, and
D.
Stehle
, see https://pq-crystals.org/dilithium/data/dilithium-specification-round2.pdf for “Dilithium Design Document,
2019
.”
113.
P.-A.
Fouque
 et al., see https://falcon-sign.info/falcon.pdf for “Falcon Design Document,
2019
.”
114.
M.
Chase
,
D.
Derler
,
S.
Goldfeder
,
C.
Orlandi
,
S.
Ramacher
,
C.
Rechberger
,
D.
Slamanig
, and
G.
Zaverucha
,
Proceedings of the 2017 ACM Sigsac Conference on Computer and Communications Security
(
2017
), pp.
1825
1842
.
115.
D.
Unruh
,
Annual International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
2012
), pp.
135
152
.
116.
D.
Unruh
,
Annual International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
2015
), pp.
755
784
.
117.
I.
Giacomelli
,
J.
Madsen
, and
C.
Orlandi
,
25th {Usenix} Security Symposium ({Usenix} Security 16)
(
2016
), pp.
1069
1083
.
118.
M. R.
Albrecht
,
C.
Rechberger
,
T.
Schneider
,
T.
Tiessen
, and
M.
Zohner
,
Annual International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
2015
), pp.
430
454
.
119.
D. J.
Bernstein
,
A.
Hülsing
,
S.
Kölbl
,
R.
Niederhagen
,
J.
Rijneveld
, and
P.
Schwabe
,
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
(
2019
), pp.
2129
2146
.
120.
D. J.
Bernstein
,
D.
Hopwood
,
A.
Hülsing
,
T.
Lange
,
R.
Niederhagen
,
L.
Papachristodoulou
,
M.
Schneider
,
P.
Schwabe
, and
Z.
Wilcox-O'Hearn
,
Annual International Conference on the Theory and Applications of Cryptographic Techniques
(
Springer
,
Berlin
,
2015
), pp.
368
397
.
121.
O.
Goldreich
,
Conference on the Theory and Application of Cryptographic Techniques
(
Springer
,
Berlin
,
1986
), pp.
104
110
.
122.
I.
Dinur
and
N.
Nadler
, Cryptology ePrint Archive, Report No. 2018/1212 (
2018
).
123.
C.
Rechberger
,
H.
Soleimany
, and
T.
Tiessen
, Cryptology ePrint Archive, Report No. 2018/859 (
2018
).
124.
G.
Zaverucha
et al., see https://github.com/Microsoft/Picnic/tree/master/spec for “Picnic Design Document,
2020
.”
125.
L.
Castelnovi
,
A.
Martinelli
, and
T.
Prest
,
International Conference on Post-Quantum Cryptography (Springer, Berlin, 2018), pp. 165–184.
126. A. Genêt, M. J. Kannwischer, H. Pelletier, and A. McLauchlan, IACR Cryptol. ePrint Arch. 2018, 674, https://eprint.iacr.org/2018/674.pdf.
127. J.-P. Aumasson et al., see https://sphincs.org/data/sphincs+-round2-specification.pdf for “Sphincs+ Design Document, 2019.”
128. J. Katz and Y. Lindell, Introduction to Modern Cryptography (CRC, Boca Raton, 2014).
129. J. Fan, X. Guo, E. De Mulder, P. Schaumont, B. Preneel, and I. Verbauwhede, 2010 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST) (IEEE, Anaheim, 2010), pp. 76–87.
130. L. A. Tawalbeh, T. F. Somani, and H. Houssain, 2016 11th International Conference for Internet Technology and Secured Transactions (ICITST) (IEEE, Barcelona, 2016), pp. 87–91.
131. Y. Hashimoto, T. Takagi, and K. Sakurai, IEICE Trans. Fundam. Electron., Commun. Comput. Sci. 96, 196 (2013).
132. H. Yi and Z. Nie, Future Gener. Comput. Syst. 86, 704–708 (2018).
133. J. Krämer and M. Loiero, International Workshop on Constructive Side-Channel Analysis and Secure Design (Springer, Berlin, 2019), pp. 193–214.
134. J. Ding, Z. Zhang, J. Deaton, K. Schmidt, and F. Vishakha, the 2nd NIST PQC Standardization Conference (2019).
135. J. Howe, A. Chattopadhyay, P. Ravi, M. P. Jhanwar, and S. Bhasin, Cryptology ePrint Archive, Report No. 2018/821 (2018).
136. V. Migliore, B. Gérard, M. Tibouchi, and P.-A. Fouque, Cryptology ePrint Archive, Report No. 2019/394 (2019).
137. S. McCarthy, J. Howe, N. Smyth, S. Brannigan, and M. O'Neill, SECRYPT (2019), pp. 61–71.
138. A. Casanova, J.-C. Faugere, G. Macario-Rat, J. Patarin, L. Perret, and J. Ryckeghem, see https://www-polsys.lip6.fr/Links/NIST/GeMSS.html for “GEMSS: NIST PQC submission, 2020.”
139. M. Braithwaite, see https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html for “Experimenting with Post-Quantum Cryptography, 2019.”
140. F. Xu, X. Ma, Q. Zhang et al., e-print arXiv:1903.09051 (2019).
141. T. McGrath, I. E. Bagci, Z. M. Wang et al., Appl. Phys. Rev. 6, 011303 (2019).
142. D. Gottesman and I. L. Chuang, e-print arXiv:quant-ph/0105032 (2001).
143. S. Pirandola, U. L. Andersen, L. Banchi et al., e-print arXiv:1906.01645 (2019).
144. H. Singh, D. L. Gupta, and A. K. Singh, IOSR-JCE 16, 1 (2014).
145. R. J. Donaldson, R. J. Collins, K. Kleczkowska et al., Phys. Rev. A 93, 012329 (2016).
146. T.-Y. Wang, X.-Q. Cai, Y.-L. Ren et al., Sci. Rep. 5, 9231 (2015).
147. E. Andersson, M. Curty, and I. Jex, Phys. Rev. A 74, 022304 (2006).
148. A. Holevo, Probl. Peredachi Inf. 9, 31 (1973).
149. V. Dunjko, P. Wallden, and E. Andersson, Phys. Rev. Lett. 112, 040502 (2014).
150. M. Bouillard, G. Boucher, J. F. Ortas et al., Phys. Rev. Lett. 122, 210501 (2019).
151. R. J. Collins, R. J. Donaldson, V. Dunjko et al., Phys. Rev. Lett. 113, 040502 (2014).
152. G. L. Roberts, M. Lucamarini, Z. L. Yuan et al., Nat. Commun. 8, 1 (2017).
153. P. J. Clarke, R. J. Collins, V. Dunjko et al., Nat. Commun. 2, 1 (2012).
154. P. Wallden and V. Dunjko, Phys. Rev. A 91, 042304 (2015).
155. R. J. Collins, R. Amiri, M. Fujiwara et al., Opt. Lett. 41, 4883 (2016).
156. R. Amiri, P. Wallden, A. Kent et al., Phys. Rev. A 93, 032325 (2016).
157. C. Croal, C. Peuntinger, B. Heim et al., Phys. Rev. Lett. 117, 100503 (2016).
158. H.-L. Yin, W.-L. Wang, Y.-L. Tang et al., Phys. Rev. A 95, 042338 (2017).
159. X. Yao, X. Liu, R. Xue et al., e-print arXiv:1901.03004 (2019).
160. T.-Y. Wang, J.-F. Ma, and X.-Q. Cai, Quantum Inf. Process. 16, 19 (2017).
161. M.-Q. Wang, X. Wang, and T. Zhan, Quantum Inf. Process. 17, 275 (2018).
162. H. Zhang, X.-B. An, C.-H. Zhang et al., Quantum Inf. Process. 18, 3 (2019).
163. M. Padgett and R. Boyd, Philos. Trans. R. Soc. A 375, 20160233 (2017).
164. H.-L. Yin, Y. Fu, and Z.-B. Chen, Phys. Rev. A 93, 032316 (2016).
165. I. V. Puthoor, R. Amiri, P. Wallden et al., Phys. Rev. A 94, 022328 (2016).
166. H.-L. Yin, Y. Fu, Q.-J. Tang et al., Phys. Rev. A 95, 032334 (2017).
167. J. M. Arrazola, P. Wallden, and E. Andersson, Quantum Inf. Comput. 16, 0435 (2016).
168. M. Sahin and I. Yilmaz, J. Phys. 766, 012021 (2016).
169. G. Zeng and C. Keitel, Phys. Rev. A 65, 042312 (2002).
170. H. Lee, C. Hong, H. Kim, J. Lim, and H. J. Yang, Phys. Lett. A 321, 295 (2004).
171. Q. Li, W. H. Chan, and D.-Y. Long, Phys. Rev. A 79, 054307 (2009).
172. X. Zou and D. Qiu, Phys. Rev. A 82, 042325 (2010).