We consider the problem of semantic security via classical-quantum and quantum wiretap channels and use explicit constructions to transform a non-secure code into a semantically secure code, achieving capacity by means of biregular irreducible functions. Explicit parameters in finite regimes can be extracted from theorems. We also generalize the semantic security capacity theorem, which shows that a strongly secure code guarantees a semantically secure code with the same secrecy rate, to any quantum channel, including the infinite-dimensional and non-Gaussian ones.

We investigate the transmission of messages from a sending party to a receiving party through a wiretap channel. In this model, there is a third party called an eavesdropper who must not be allowed to know the information sent from the sender to the intended receiver. The wiretap channel was first introduced by Wyner in Ref. 1. A classical-quantum channel with an eavesdropper is called a classical-quantum wiretap channel.

The secrecy capacity of the classical-quantum wiretap channel subject to the strong security criterion was determined in Refs. 2 and 3. Strong security means that given a uniformly distributed message sent through the channel, the eavesdropper shall obtain no information about it. This criterion goes back to Refs. 4 and 5, and it is the most common secrecy criterion in classical and quantum information theory.

In the present paper, however, a stronger security requirement will be applied, called semantic security (and defined in Sec. III). With this, the eavesdropper gains no information regardless of the message distribution. This criterion was introduced to information theory from cryptography6 and also used earlier in Quantum Key Distribution (QKD) in Ref. 7, Theorem 2, motivated by the analogous security criterion of the same name. It is equivalent to message indistinguishability, where the eavesdropper cannot distinguish whether the given cipher text is an encryption of any two messages (which can even be chosen by the eavesdropper). Aside from being the minimum security requirement in practical applications, semantic security is also necessary in the security of identification codes.8 Because in identification pairs of messages are compared to each other, to make the code secure, any two messages must be indistinguishable at the eavesdropper.9 Message indistinguishability is, thus, necessary to construct secure identification codes, and the semantic security achieved in this paper can, thus, be used to construct secure identification codes via classical-quantum channels.10 At the same time but on a different note, bounds on identification find application on transmission via the wiretap channel.11 Further results and references can be found in Ref. 12.

In Sec. IV, we prove the well-known semantic-secrecy capacity formula, which is equal to the capacity formula under the strong security criterion, for the most general case of arbitrary, including the infinite-dimensional channel, under the most stringent semantic security criterion (in terms of mutual information). This can be easily shown by slightly modifying the expurgation method,12 whereby a code that has a small leakage with respect to the strong security criterion is converted into a code that has a small leakage with respect to the semantic security leakage, making the strong and semantic secrecy capacities equal. This statement was at first proven for classical wiretap channels in Ref. 14 and for finite-dimensional classical-quantum channels in Ref. 15 and with security measured in the trace norm in Ref. 16. Note that that the expurgation technique given in Ref. 12 works for any channels, including the infinite-dimensional channel and even non-Gaussian ones. The results for the classical-quantum channel extend to quantum channels where the environment, which is completely under the control of a constant eavesdropper, can be entangled with the quantum system.

FIG. 1.

The modular BRI scheme. C={Em,Dm:mM} is a code for W, and f is a biregular irreducible (BRI) function. In practice, C will be a transmission code, namely, a code with low error. fs1 denotes the random choice of an element of fs1(m) for the given message m and seed s. The seed s has to be known to the sender and receiver beforehand, or it is generated by the sender and transmitted to the intended receiver through the channel.

FIG. 1.

The modular BRI scheme. C={Em,Dm:mM} is a code for W, and f is a biregular irreducible (BRI) function. In practice, C will be a transmission code, namely, a code with low error. fs1 denotes the random choice of an element of fs1(m) for the given message m and seed s. The seed s has to be known to the sender and receiver beforehand, or it is generated by the sender and transmitted to the intended receiver through the channel.

Close modal

The proofs applying the expurgation technique are merely existence statements and give no clue as to how to find the large message subset, which provides semantic security. In Sec. V, we show how the capacity can be achieved by modularly correcting transmission error and amplifying privacy in separate components of the code as for the case of strong secrecy.17 While we make the proof for finite-dimensional channels, the codes also automatically achieve capacity for quantum Gaussian wiretap channels since the capacity is achieved as the limit of the capacities on finite-dimensional subspaces of the increasing dimension.18 These modular codes for the classical-quantum wiretap channel are constructed, concatenating an ordinary transmission code for the channel from the sender to the intended receiver with an additional security component. Furthermore, the additional security component is independent of the channel (sometimes called channel universality15) as for the case of explicit strong-secrecy constructions. The first such security components used in the literature were universal hash functions,17,19 used to achieve strong secrecy. The specific security components we use, called biregular irreducible (BRI) functions, were introduced in Ref. 14 in the context of classical wiretap channels. A modular code for the classical-quantum wiretap channel is illustrated in Fig. 1.

If a transmission code from the sender to the intended receiver with input/output set C is given, then a BRI function that is to be used with this transmission code has the form f:S×CN. Here, S is a seed set, and the set M of messages of the modular wiretap code is an explicitly given subset of N. To use this modular code, the sender and the intended receiver have to share a seed sS, chosen uniformly at random from S. Given any message mM and seed s, the sender randomly chooses a preimage cC, satisfying fs(c) = m. Since the intended receiver knows s, the receiver can recover m if no transmission error occurs. Thus, the task of establishing reliable transmission is entirely due to the transmission code, while the BRI function’s responsibility is to ensure semantic security. The above modular construction was already shown to achieve the secrecy capacity of classical wiretap channels with semantic security in Ref. 14. An alternative to BRI functions was proposed by Hayashi and Matsumoto.15 Their example, however, requires a seed, which is longer than that which is necessary for the best-known BRI function. The length of the seed is relevant for the efficiency of the derandomized codes at finite regimes.

We emphasize that the seed is not a secret key since we do not require it to be unknown to the eavesdropper. The main part of the analysis of the above modular codes assumes that the seed is given (non-securely) to the sender and the intended receiver by common randomness. However, it is a general result for codes with common randomness that if the error probability and security leakage decrease sufficiently fast in block length, the seed can be reused a small number of times. Modular codes constructed using BRI functions show this behavior. Therefore, no more than a negligible amount of rate is lost if the sender generates the seed and transmits it to the intended receiver and, then, reuses the seed a small number of times. In particular, any rate that is achievable with a seed given by common randomness is also achievable with a sender-generated seed.

Moreover, we would like to emphasize that the semantic secrecy for classical-quantum channels is much harder than for classical channels. Roughly speaking, two different inputs not only result in two different random variables, but the outputs also have different eigenspaces, so it is more difficult to make them indistinguishable. For instance, Corollary 5.10 delivers a bound that is technically weaker than the classical version of Ref. 14 (see below).

The content of this section can be found in most books covering the basics of matrix analysis and quantum information; see, for example Refs. 20–22. For a finite set X, we denote the set of probability distributions on X by P(X) and with E the expectation value. For a finite-dimensional complex Hilbert space H, we denote the set of linear operators on H with L(H). Let ρ,σL(H) be Hermitian operators in L(H). We say ρσ, or, equivalently, σρ, if ρσ is positive-semidefinite. The (convex) space of density operators on H is defined as
S(H)ρL(H):ρ0H,tr(ρ)=1,
where 0H is the null matrix on H. Note that any operator in S(H) is bounded. A POVM (positive-operator valued measure) over a finite set M is a collection of positive-semidefinite operators Dm:mM on H, which is a partition of the identity, i.e., mMDm=idH. The POVM describes a measurement that maps quantum states ρ to classical values mM by assigning them the probability tr[ρDm]. If mMDmidH, then we call Dm:mM a sub-POVM. More generally, a measurement operator will be any positive semi-definite operator D satisfying 0 ≤ D ≤ idH.
For a quantum state ρS(H), we denote its von Neumann entropy by
S(ρ)tr(ρlogρ),
and for a discrete random variable X, on a finite set X, we denote the Shannon entropy of X by
H(X)xXp(x)logp(x),
where we use in both definitions and throughout this paper the convention that the logarithm “log” is taken in base 2. We denote with h(ν) ≔ −ν log ν − (1 − ν)log(1 − ν) for ν ∈ [0, 1] the binary entropy.
Let ρ and σ be two positive semi-definite operators not necessarily in S(H). The quantum relative entropy between ρ and σ is defined as
D(ρσ)trρlogρlogσ
if supp(ρ) ⊂ supp(σ) and D(ρσ) ≔ ∞ otherwise. For α ∈ (0, 1) ∪ (1, ∞), the Rényi relative entropy23 between ρ and σ is defined as
Dα(ρσ)1α1logtrρασ1α
if supp(ρ) ⊂ supp(σ) and Dα(ρσ) ≔ ∞ otherwise. The Rényi relative entropy satisfies the ordering relation or parameter monotonicity,
Dα(ρσ)Dα(ρσ),
for any density operators ρ and σ and any αα′.24,25 Furthermore, it holds23 that
limα1Dα(ρσ)=limα1Dα(ρσ)=D(ρσ).
The Rényi relative entropies also satisfy channel monotonicity (monotonicity under quantum channels) for α ≤ 2,25–27 namely, under completely positive trace preserving linear maps Λ,
Dα(Λ(ρ)Λ(σ))Dα(ρσ).
For finite-dimensional complex Hilbert spaces H and H′, a quantum channel N(ρ) is represented by a completely positive trace-preserving linear map N:L(H)L(H), which accepts input quantum states in S(H) and produces output quantum states in S(H). Quantum channels will be treated in Sec. III D, building on the results for classical-quantum channels. The case of classical-quantum channels will be treated in Secs. IIIV. For a finite-dimensional complex Hilbert space H, a classical-quantum channel is a map V:XS(H), xV(x). In order to use the same notation common in classical information theory, for a measurement operator 0 ≤ D ≤ idH, we define the following notation:
ρ(D)tr(ρD)V(D|x)tr(DV(x)).
Note that this notation, at least for the quantum state, is also common in the C* algebra literature, where quantum states are considered functionals on Hermitian operators.
For a probability distribution P and a classical-quantum channel V on X, the Holevo χ quantity, or Holevo information, is defined as
χ(P;V)SxXP(x)V(x)xXP(x)SV(x).
The Holevo information is also the mutual information between the input and the output. Namely, let HX be a |X|-dimensional Hilbert space with a set of orthonormal basis {x:xX} and X be the random variable on X with distribution P; then, the Holevo information is the quantum mutual information I(XV(X)) for the state
ρX,VxXP(x)xxV(x).
The Holevo quantity can also be written as the expected value of the quantum relative entropy as a random variable of the states from the product states of their marginals, namely, if we denote the marginal with
ρXxXP(x)xxPVxXP(x)V(x)=EXXX,=EXV(X),
it is easy to check that the following holds:
χ(P;V)=S(PV)xXP(x)SV(x)=DρX,VρXPV
(1)
=xXP(x)DV(x)PV=EXD(V(X)EXV(X)).
(2)
We, then, denote the conditional information of the quantum part conditioned on the classical one as
S(V|P)S(PV)χ(P;V)=xXP(x)SV(x).

In this section, we introduce the channels, codes, and capacities, which we will study, as well as the definitions for strong secrecy and semantic security with and without common randomness. In Sec. V, we will show how to explicitly build such codes using modular coding schemes, which require common randomness. For now, let us define the channels of interest.

Definition 3.1.

Let X be a finite set and H and Hbe finite-dimensional complex Hilbert spaces. Let W:XS(H) and V:XS(H) be classical-quantum channels. We call the pair (W, V) a classical-quantum wiretap channel.

The intended receiver accesses the output of the first channel W, and the eavesdropper observes the output of the second channel V in the pair.

A code is created by the sender and the intended receiver beforehand. The sender uses the encoder to map the message that he wants to send to a channel input, while the intended receiver uses the family of decoder operators on the channel output to perform a measurement and decode the message. In all the definitions below, let us fix nN, finite sets M and X, finite quantum systems H and H′, and a classical-quantum wiretap channel (W, V) from X to H and H′. Just like for the notation PV for a channel V:XS(H) and a probability distribution P over X, we define EV for a classical channel E:MP(X), mEmE(·|m) as
EV:MS(H),mxXV(x)E(x|m).
Note that now, for a measurement operator D, we have two ways of writing the output probability, given m, namely, EmV(D) and EV(D|m).

Definition 3.2
(code, error, and leakage). An (n,|M|) code for (W, V) is a finite set
C=Em,Dm:mM,
where the stochastic encoder E:MP(Xn) is a classical channel and the decoder operators Dm:mM form a sub-POVM on Hn. We assume that the POVM is completed by associating the measurement operator idHmMDm with the error/abortion symbol of the decoder.
The (maximum) error (probability) of C is defined as
e(C,n)supmMEmWn(Dmc)=supmMxnXnE(xn|m)(1Wn(Dm|xn)),
where DmcidHnDm. For any random variable M over the messages M, the leakage of C with respect to M is defined as
χM;EVn.

Note that we have chosen maximum rather than average transmission error; the reason is that as we allow the probability distribution to be arbitrary, the correctness and not just the secrecy of the message should also be guaranteed independently of the input distribution. Maximum error is the counterpart to measuring the leakage via maximum indistinguishability between the messages [generally via the trace distance (Ref. 12, Sec. II-E) or statistical distance for classical channels].

Observe that we consider codes with stochastic encoders as opposed to deterministic codes. In a deterministic code, the encoder is deterministic, namely, in Definition 3.2 instead of a family of probability distributions {Em}mM, the encoder consists of a family of n-length strings of symbols cmmMXn.The deterministic encoder can be obtained as a special case of the stochastic encoder by imposing that every probability distribution Em is deterministic. For message transmission over an ordinary classical-quantum channel and even for the most general case of robust message transmission over an arbitrarily varying classical-quantum channel, it is enough to use deterministic encoders.28,29 However, for secret message transmission over wiretap channels, we need to use stochastic encoders.2,3

Now, we will define the coding scheme where both the sender and the receiver have access to common randomness. We do not require this common randomness to be secure against eavesdropping. Effectively, the common randomness simply decides which among a set of classical-quantum codes from Definition 3.2 will be used.

Definition 3.3
(common-randomness code, error, and leakage). An (n,|S|,|M|) common-randomness code for (W, V) is a finite subset
Cs=(Ems,Dms):mM:sS
of the set of (n,|M|) codes from Definition 3.2, labeled by a finite set S.
Let S, the seed, be a uniform random variable over S. The (expected) error of Cs:sS is defined as
ESe(CS,n)=1|S|sSe(Cs,n).
For any random variable M over the messages independent of S, the leakage of Cs with respect to M is defined as
χM;S,ESVn.
(3)

The definition of leakage reflects the possibility that the common randomness is known perfectly to the eavesdropper as the leakage is also computed against the common randomness. Observe that due to the independence of S and M, the leakage can be written as conditional mutual (or Holevo) information between the message M and the output conditioned on the seed S,
χ(M;S,ESVn)=ESχ(M;ESVn)=1|S|sSχ(M;EsVn).
(4)

The random seed should not be confused with the randomness of the stochastic encoder. In the stochastic encoder, only the sender, but not the receiver, randomly chooses a code word to encode a message m according to the probability distribution Em. In the subsequent definitions of achievable rates, the receiver should be able to decode m even when the receiver only knows Em, but not which code word is actually chosen by the sender. In contrast, a randomly chosen seed s determines a stochastic encoder Es for the sender and a set of decoder operators {Dms:mM} for the receiver. Correctness is required only for the case that s is known to both the sender and the receiver and that they use the encoder and decoder prescribed by s.

Next, we define the strong and semantic secrecy rates, which can be achieved by the codes introduced in Subsection III A. A good code reliably conveys private information to the intended receiver such that the wiretapper’s knowledge of the transmitted information can be kept arbitrarily small in terms of the corresponding secrecy criterion.

Definition 3.4
(strong secrecy). A code C=(Em,Dm):mM is an (n, R, ϵ) strong secrecy code for (W, V) if
log|M|nR,
(5)
e(C,n)<ϵ,
(6)
χU;EVn<ϵ,
(7)
where U is the uniform distribution on M.

R is an achievable strong secrecy rate if for every ϵ > 0 and sufficiently large n, there exists an (n, Rϵ, ϵ) strong secrecy code. The strong secrecy capacity Cstrong(W, V) is the supremum of all achievable strong secrecy rates of (W, V).

Definition 3.5
(common-randomness strong secrecy). A common-randomness code Cs=(Ems,Dms):mM:sS is an (n, R, ϵ) common-randomness strong secrecy code for (W, V) if
log|M|nR,
(8)
ESe(CS,n)<ϵ,
(9)
ESχU;S,ESVn<ϵ,
(10)
where U is the uniform distribution on M.

R is an achievable common-randomness strong secrecy rate if for every ϵ > 0 and sufficiently large n, there exists a (n, Rϵ, ϵ) common-randomness strong secrecy code. The common-randomness strong secrecy capacity Cstrong(W, Vcr) is the least upper bound of all achievable common-randomness strong secrecy rates of (W, V).

Since codes without common randomness are just a special case of common-randomness codes, we have by construction that
Cstrong(W,V)Cstrong(W,V;cr).

Strong secrecy, i.e., the requirements of Eqs. (7) and (10), is the secrecy criterion, which has been used mostly in information-theoretic security until the introduction of semantic security in Ref. 6. It provides secrecy if the message random variable is uniformly distributed. Inspired by cryptography, the authors of Ref. 6 introduced semantic security, where the eavesdropper shall not obtain any information regardless of the probability distribution of the message. This is also the reason why we use the maximum instead of the average transmission error. Semantic security and indistinguishability for the classical-quantum channels were first considered in Ref. 12. Here, we state the semantic security definitions.

Definition 3.6
(semantic secrecy). A code C=(Em,Dm):mM is an (n, R, ϵ) semantic secrecy code for (W, V) if
logMnR,
(11)
e(C,n)<ϵ,
(12)
maxMχM;EVn<ϵ,
(13)
where M is any random variable over the messages M.

R is an achievable semantic secrecy rate if for every ϵ > 0 and sufficiently large n, there exists a (n, Rϵ, ϵ) semantic secrecy code. The semantic secrecy capacity Csem(W, V) is the supremum of all achievable semantic secrecy rates of (W, V).

Definition 3.7
(common-randomness semantic secrecy). A common-randomness code Cs=(Ems,Dms):mM:sS is an (n, R, ϵ) common-randomness semantic secrecy code for (W, V) if
log|M|nR,
(14)
ESe(CS,n)<ϵ,
(15)
maxMχ(M;S,ESVn)<ϵ,
(16)
where M is any random variable over the messages M.

R is an achievable common-randomness semantic secrecy rate if for every ϵ > 0 and sufficiently large n, there exists a (n, Rϵ, ϵ) common-randomness semantic secrecy code. The common-randomness semantic secrecy capacity Csem(W, Vcr) is the supremum of all achievable common-randomness semantic secrecy rates of (W, V).

Just like for strong secrecy and due to common-randomness codes being more general, we have by construction
Csem(W,V)Csem(W,V;cr).
Similarly, the semantic secrecy condition is stronger, meaning that any semantically secure capacity achieving code family is also a strongly secure code family, and thus,
Csem(W,V)Cstrong(W,V),Csem(W,V;cr)Cstrong(W,V;cr).
Since the maxima in Eqs. (13) and (16) range over all possible message distributions, semantic security, in particular, implies message indistinguishability. This means that even if the message random variable can only assume one of two possible values known to the eavesdropper, the eavesdropper cannot distinguish between these two messages. This is not implied by strong secrecy alone.

Note that since the leakage of the common-randomness codes in Eq. (3) is computed against the state at the wiretap and the seed, bounding the leakage in the common-randomness capacities implies bounding the information about the key carried by the seed. Thus, the common randomness is not required to be secure against eavesdropping since Eqs. (10) and (16) impose that the seed carries no information, and thus, it is considered to be public.

Derandomization is a standard and widely used technique in information theory, already used by Ahlswede in Ref. 30. As a final result, in this section, we apply the derandomization technique to good common-randomness semantic-security codes, namely, we construct a semantic-security code without common randomness using a transmission code and a common-randomness semantic-security code with appropriate error scaling. These derandomized codes will essentially be able to produce the common randomness needed to run the common-randomness codes using an asymptotically small number of copies of the channel. The proof mimics the classical case showed in Ref. 14.

A simple idea that uses too many channels to generate the seed, however, is to alternate transmission codes and common-randomness semantic-secrecy codes, use the transmission code to generate the seed, and use it only once in the common randomness semantic-security code. Depending on the size of the required seed, this may result in too many channels used just for the seed. The solution is to simply reuse the seed, thus reducing the total size of |S| by sharing the same sS for N common-randomness codes. We, thus, need to build (N + 1)-tuple of codewords as the new codewords. Each tuple is a composition of a first codeword that generates the common-randomness and N common randomness-assisted codewords to transmit the messages to the intended receiver. We start by defining such codes.

Definition 3.8

(derandomizing codes). Let n,n,NN.

  • Let Es,DssS be an (n,|S|) code.

  • Let Ems,Dms:mM:sS be an (n,|S|,|M|) common-randomness code, and define M̄MN, and for any m̄M̄,
    Em̄sEm1sEmNsDm̄sDm1sDmNs.
    We define their (n+nN,|M|N) derandomized code C̄ to be the code (without common randomness) such that for any message m̄M̄, we have the following.
  • The encoder samples from a uniform seed S and, then, conditioned on the values s use the Kronecker product encoder EsEm̄s. Thus,
    Ēm̄ESESEm̄S.
  • The decoder for the message is the coarse graining of decoders over s,
    D̄m̄sSDsDm̄s.

Note that the random seed in the derandomizing code becomes part of the stochastic encoding process of the code. As we expect, the error and the leakage of the derandomizing code are not worse than the sum of the errors and leakage of all the codes used in the process. This can be easily proved by simply applying the standard techniques (cf. Refs. 31 and 32) for derandomization with uniform distributed inputs on derandomization with arbitrary distributed inputs. Note that the standard proof of security (cf. Ref. 32) is nothing more than applying the quantum data processing inequality (cf. Ref. 22) when we consider the derandomizing code as a function of its first part. Thus, this argument works for any input distribution. Nevertheless, we give a proof for the sake of completeness.

Lemma 3.9.

Let C be an (n,1nlog|S|,ϵ) transmission code, and let CssS be an (n, R, ϵ) common-randomness semantic-secrecy code. Let n̄n+nN; then, the N-derandomized code C̄ is an (n̄,nNn̄R,ϵ+ϵN) semantic-secrecy code.

Proof.

The N-derandomized code has size |M|N; thus, the rate is Nlog|M|/n̄nR/n̄. We just need to bound the error and leakage of the new code.

For the error of C̄, by standard argument, we have that for every m̄M̄,
Ēm̄Wn̄(D̄m̄c)=1s1|S|EsWnEm1sWnEmNsWn(D̄m̄)=1s,s1|S|EsWn(Ds)Em1sWn(Dm1s)EmNsWn(DmNs)1s1|S|EsWn(Ds)Em1sWn(Dm1s)EmNsWn(DmNs)1s1|S|(1ϵ)(1ϵ)Nϵ+ϵN,
and thus, e(C̄,n̄)ϵ+ϵN.
For the leakage, since we are reusing the seed and the seed is shared via the transmission code, a uniform seed does not map a random message to a uniformly random input to the channel. Thus, we need to reduce the security to the security of the single codes. Recalling that the Holevo information is actually mutual information, by data processing and Eq. (4), we have
χM̄;ĒVn̄=χM̄;ESVnEM̄SVnNχM̄;S,EM̄SVnNESχM̄;EM̄SVnN=ESHEM̄EM̄SVnNEM̄HEM̄SVnN.
Since the message encoder is a Kronecker product of encoders and the channel is memoryless, we have HEM̄SVnN=HEMiSVn, and thus,
χ(M̄;ĒVn̄)ESHEM̄EM̄SVnNEM̄i=1,,NH(EMiSVn).
This together with H(XY) ≤ H(X) + H(Y) applied to HEM̄EM̄SVnN gives
χ(M̄;ĒVn̄)ESi=1,,NHEMiEMiSVni=1,,NEM̄HEMiSVn=ESi=1,,NχMi;EMiSVn=i=1,,NχMi;S,EMiSVnϵNϵ+ϵN,
(17)
and the proof is concluded.

Note that the argument works for any distribution of M̄; the single uses of the semantic-secrecy code do not need to have independent messages. This is usually a point of difference with the derandomization techniques used for strong secrecy. In strong secrecy, M̄ is only required to be uniformly distributed, which makes each Mi already independent and also uniformly distributed. This allows for an easier but not fully general argument since the leakage of the derandomized code is actually equal to the sum of the leakages of the single internal codes.

We will use the above in Sec. V to derandomize the explicit constructions of semantic secrecy codes.

The results from classical secret message transmission over classical-quantum channels can usually be carried over to fully quantum channels. Moreover, this is optimal in the sense that it is usually enough to just prepend a classical-quantum preprocessing channel to many copies of the quantum channel and, then, use the coding for the resulting classical-quantum channel. The extension to quantum channels reduces to simply proving Corollary 4.2, which is straightforward and uses quite general arguments. More precisely, since the encoding of classical messages for any quantum channel will need to map the classical messages to quantum states, the resulting effect at the sender is again a classical-quantum channel, and thus, we can reduce the analysis to what we have done so far for classical-quantum channels.

For classical and classical-quantum channels, the wiretap channel must be given in the sense that an assumption must be made about the output seen at the eavesdropper simply because the worst case scenario that the eavesdropper receives a noiseless copy of the input is always physically possible. This is not the case for quantum channels, where one of the aspects of no-cloning implies that a copy of the input quantum state cannot be made, and the worst case interaction with the environment can be deduced from the noise in the channel. Since there is a limit to the information that it is leaked to the environment, there is, thus, also a limit to the information of the eavesdropper, and we can then remove any assumption in that respect and identify the eavesdropper with the environment.3,33

Let now P and Q be quantum systems, and let W be a quantum channel. We assume, as usual in the quantum setting, the worst case scenario, namely, that the environment E is completely under the control of the eavesdropper, which is in contrast with the classical and classical-quantum setting where this worst case scenario does not allow for secrecy. This automatically defines the wiretap channel(W, V) for any given quantum channel W to the intended receiver. However, the results below work, in general, for any allowed pair of quantum channels (W, V) on the same input.

Definition 3.10.

A quantum wiretap channel from a sender P to a receiver Q with eavesdropper E is a pair of complementary channels (W, V), where W:S(HP)S(HQ) and V:S(HP)S(HE) are defined as W(ρ)=trEUρU* and V(ρ)=trQUρU* for some isometry U:HPHQHE.

Remark 3.11.

Without the assumption that the eavesdropper might have full access to the environment, the treatment of the semantic secrecy capacity is still the same. In this case, the wiretap channel must be specified explicitly as (W, V), where both W and V are quantum channels. However, not all pairs are allowed as V must be a channel that can be recovered from the environment. The generalization is that W and V must be of the form W=trERUρU* and V=trQRUρU*, where now the isometry U:HPHQHEHR maps to three systems, the intended receiver, the eavesdropper, and an environment not in possession of the eavesdropper.

We can transmit both classical and quantum information over quantum channels. For the transmission of classical information via a quantum channel, we first have to convert a classical message into a quantum state. We assume that the states produced in the input system are constructed depending on the value of xX, where X is a finite set of letters. Let, thus, F:XS(HP) be this classical-quantum channel. The composition with a quantum channel W defines the classical-quantum channel WF:XS(HQ); to keep a consistent notation, we define FWWF. With this notation, the definitions present only minimal changes in comparison to the classical-quantum wiretap channels above. A code for the quantum channels now simply needs to input quantum states instead of classical values.

Definition 3.12.

An (n,|M|) quantum code for a quantum channel W consists of a finite set C=Em,Dm:mM, where the stochastic encoder E:MS(HPn) is a classical-quantum channel, and the decoders Dm:mM form a sub-POVM.

The error of C is defined as
e(C,n)1|M|maxmMEmWn(Dmc).
The leakage of a message random variable M over M is defined as
χM;EmVn,
where V is the complementary channel to the environment.

The rates and capacities can, then, be defined exactly as is done for classical-quantum channels. Since we will use these definitions only briefly in Corollary 4.2, we limit ourselves to directly defining the capacities.

Definition 3.13.
The strong secrecy capacity Cstrong(W) is the largest real number such that for every ϵ > 0 and sufficiently large n, there exists a finite set X and an (n,|M|) code C=Em,Dm:mM such that
log|M|>n(Cstrong(W)ϵ),
(18)
e(C,n)<ϵ,
(19)
χU;EmVn<ϵ,
(20)
where U is the uniform random variable over M.

Definition 3.14.
The semantic secrecy capacity of W, denoted by Csem(W), is the largest real number such that for every ϵ > 0 and sufficiently large n, there exists an (n,|M|) code C={Em,Dm:mM} such that for any random variable M with arbitrary distribution on M,
log|M|>n(Rϵ),
(21)
e(C,n)<ϵ,
(22)
χM;EmVn<ϵ.
(23)

Note that the choice of the environment channel does not affect the definitions of capacity. Let V and V′ be two distinct complementary channels to W; then, V′ and V are equivalent in the sense that there is a partial isometry U such that for all input states ρS(HP), we have V′(ρ) = U*V(ρ)U.34,35 The action of the partial isometry is reversible, and thus, the leakage is the same (being a mutual information, which is non-increasing under local operations). Therefore, the security criteria in Definitions 3.13 and 3.14 do not depend on the choice of the complementary channel.

With the definitions in place, we prove in Sec. IV that we can change any strong secrecy capacity achieving codes into semantic secrecy capacity achieving codes. However, the result is non-constructive, which is why in Sec. V we provide a semi-constructive proof where we concatenate functions to suitable transmission codes to convert them into semantic secrecy capacity achieving codes. Section VI provides perspectives on the extension of our results to more general channel models.

We denote
Cw(W,V)supn1nmaxP,E(χ(P;EWn)χ(P;EVn)),
(24)
where “w” stands for the wiretap and the maximum is taken over finite input sets M, input probability distributions P on M, and classical channels E:MP(Xn).
Cw(W, V) was first proven in Ref. 3 to equal the strong secrecy capacity of the classical-quantum wiretap channel. The result was extended in Ref. 36 to the common-randomness strong secrecy capacity as a particular case of arbitrarily varying classical-quantum wiretap channels. Namely, we have
Cstrong(W,V;cr)=Cstrong(W,V)=Cw(W,V).
(25)
For now, we will not actually use the explicit expression of Cw. Since a semantically secure code is also always strongly secure, the converse theorems for strong secrecy are also strong converses for semantic secrecy, as displayed schematically in Fig. 2.
FIG. 2.

Relationship between the secrecy capacities and the coding theorems. “A” follows from Refs. 3 and 36, “B” follows from Refs. 3 and 36, and “C” follows from Theorem 4.1. The inequalities without reference are obvious and follow from the definitions; allowing common randomness can only increase the capacity. Similarly, relaxing from a semantically secure code to strongly secure codes can only increase the capacity. It follows from Cw(W, V), being both the upper bound and the lower bound, that all the quantities are equal.

FIG. 2.

Relationship between the secrecy capacities and the coding theorems. “A” follows from Refs. 3 and 36, “B” follows from Refs. 3 and 36, and “C” follows from Theorem 4.1. The inequalities without reference are obvious and follow from the definitions; allowing common randomness can only increase the capacity. Similarly, relaxing from a semantically secure code to strongly secure codes can only increase the capacity. It follows from Cw(W, V), being both the upper bound and the lower bound, that all the quantities are equal.

Close modal
By the results of Ref. 12, it is easy to see that Csem(W, V) ≥ Cstrong(W, V) when we apply the standard expurgation technique given in Ref. 12 to our channel to convert a strong secrecy code into a semantic secrecy code without asymptotic rate loss (see also Ref. 15 for the classical finite case): For any ϵ > 0, by the definition of Cstrong(W, V), there exists δ > 0 such that for all sufficiently large n, there exists an (n,|Mn|) strong secrecy code Cn satisfying |Mn|2n(Cstrong(W,V)ϵ), e(Cn,n)2nδ, and χ(UnEVn) ≤ 2, where Un is the uniform distribution over Mn and E is the encoder of the code. We have χ(Un;EVn)=1|Mn|mMnD(EmVnUnEVn). Thus, as per expurgation in Ref. 12, there is a subcode Cn of size MnMn/2 such that we can choose 0 < δ′ < δ such that for any mMn and sufficiently large n, we have D(EmVnUnEVn) ≤ 2 · 2 < 2′. Note that the encoder is the same; we are just restricting the set of messages. Then, for any probability distribution P on the new message set Mn, by Ref. 37, Eq. (4.7), we have
χ(P;EVn)=mMnPmD(EmVnPEVn)
(26)
=minσmMnP(m)D(EmVnσ)
(27)
mMnPmD(EmVnUnEVn)
(28)
2nδ.
(29)
Since Cn is a subcode, we also have e(Cn,n)2nδ<2nδ. Since Mn contains at least half of the messages of Mn, the expurgation technique given in Ref. 12 immediately delivers the following theorem, independently of the type of the channel.

Theorem 4.1.
Let (W, V) be a classical-quantum wiretap channel. With the same notation as in Eq. (25), we have
Csem(W,V)=Csem(W,V;cr)=Cstrong(W,V)=Cw(W,V).

Note that the expurgation technique given in Ref. 12 makes no assumption on the dimension of the quantum systems, namely, the outputs of the wiretap channels considered below can be infinite dimensional.

Let W now be a quantum channel, thus defining the quantum wiretap channel to be the complementary channel to the environment. Just like the case of the classical-quantum channel, the strong and semantic secrecy capacities are equal. This time, rather than transforming a strong secrecy code into a semantic secrecy code, we simply generalize the result from classical-quantum channels to quantum channels in the same way as was done for strong secrecy in Ref. 3.

In Ref. 3, it was proven that the strong secrecy capacity Cstrong(W) can be computed using the following multi-letter formula:
Cstrong(W)=supnN1nsupX,F,Pχ(P,FWn)χ(P,FVn),
(30)
where V is the channel to the environment defined by W. The supremum is taken over all chosen finite sets X, classical/quantum channels F:XS(HPn), and probability distributions P on X. Note how the classical-quantum channel is allowed to output entangled states between the inputs of the channels.

Just like for classical-quantum channels, any semantic secrecy code is also a strong secrecy code, and the strong secrecy capacity is a converse on the semantic secrecy capacity. Again, we only need the achievability proof. The achievability of this rate follows directly from the achievability of the wiretap capacity for classical-quantum channels. Since the proof is actually independent of the structure of the secrecy criterion, the same proof for strong secrecy also works for semantic secrecy.

Corollary 4.2.
Let W be a quantum channel. We have
Csem(W)=supnN1nsupX,F,Pχ(P,FWn)χ(P,FVn).
(31)

Proof.
We prove the claim for any fixed n, X, P, and F, namely, that
1nχ(P,FWn)χ(P,FVn)
is an achievable rate, and the supremum then follows automatically.

Note that χ(P, FWn) − χ(P, FVn) is already an achievable rate for the classical-quantum channel (FWn, FVn), and thus, for all ϵ > 0 and all n′, there exist an [n′, χ(P, FWn) − χ(P, FVn) − ϵ, ϵ] code Em,Dm for (FWn, FVn) as stated in Theorem 4.1. It follows by construction and definition that EmFn,Dm is a [nn, χ(P, FWn) − χ(P, FVn) − ϵ, ϵ] code for W with rate divided by n.

Since we can reduce the classical semantic secrecy capacity of quantum channels to the one of classical-quantum wiretap channels, we can restrict ourselves to the latter in our analysis.

We have proven that whenever strong secrecy is achievable, then semantic security is also achievable. However, the proof technique does not tell us how to practically construct such codes, and the subset of semantically secure messages chosen in Theorem 4.1 will, in general, depend on the channel and the code. In Sec. V, we will address this issue and show how to construct such codes, similar to how hash functions are used to achieve strong secrecy.

However, the expurgation technique gives us only an existence statement and does not answer the question of how to choose the semantically secure message subsets. In this section, we introduce BRI functions and use them to construct semantic secrecy capacity achieving BRI modular codes in Theorem 5.12, thus also providing an alternative to the achievability Proof of Theorem 4.1 in Sec. IV. We will construct such codes requiring common randomness and will at first only show achievability via common-randomness BRI modular codes. An additional derandomization step will be required to construct codes without common randomness. The idea behind the construction of semantic-secrecy BRI modular codes is similar to the way in which strong secrecy codes are constructed using first a transmission code to correct all the errors, but substituting the use of strongly universal hash functions with the use of BRI functions to erase the information held by the eavesdropper. Just like hash functions, BRI functions require a random seed known to the sender and receiver, which is why we provide it as common randomness. Providing the seed via common randomness makes construction easier and the proof conceptually clear. However, the assumption of common randomness as an additional resource is quite strong. In the end of this section, we prove that the random seed can be generated by the sender and be made known to the receiver using the channel without sacrificing capacity, a process known as derandomization.

An approach achieving the semantic secrecy rate using “standard” secure codes was delivered in Ref. 17, where the result of Ref. 3 was extended. In Ref. 17, it has been demonstrated how to obtain a semantic secure code from a “standard” secure code by using a hash function when the “standard” secure code is a linear code and the channel is an additive channel. References 17 and 38 extended this technique to deliver an explicit construction of a secrecy transmission code with strong secrecy for a general classical channel. In addition, this method also works for a continuous input alphabet. The same technique can be easily extended to a classical-quantum code (cf. Ref. 39). This technique has been also applied in Ref. 12 for additive fully quantum channels when the eavesdropper has access to the whole environment. Together with the expurgation technique (cf. Sec. IV), these results deliver an extend proof for Theorem 4.1.

Our results, both for classical-quantum channels and for fully quantum channels with eavesdropper having access to the whole environment, are more general since our techniques deliver an explicit construction of a secrecy transmission code with semantic secrecy using any code, ensuring strong security on any wiretap channel.

To our knowledge, we are the first to show that every code with or without common randomness that achieves strong secrecy has a subcode with the same asymptotic rate, which achieves semantic secrecy measured in terms of Holevo information. Thus, the passing from the strongly secret code to the semantically secret subcode is highly nonconstructive. An important aspect of THIS modular construction using BRI functions is that there is hope that it can be implemented in practice.

We will define what biregular irreducible (BRI) functions are in this subsection and prove the key properties that we will use to achieve semantic security. The properties we will prove are independent of communication problems such as the classical-quantum wiretap channels we consider. They are simply related to the structure of BRI functions and how they are used as an input to classical-quantum channels. Thus, the channels and the input spaces in this subsection are not to be confused with the actual wiretap channel and its inputs, as will be made clearer below.

We will be looking at families of functions fs(x), namely, functions of two inputs f:S×XN, and at their preimages in X,
fs1(m)xX:fs(x)=m,
where sS will be the seeds in construction of the modular codes.

Definition 5.1

(biregular functions). Let S, X, N be finite sets. A function f:S×XN is called biregular if there exists a regularity set MN such that for every mM, the following holds:

  • dS|{x:fs(x)=m}|=|fs1(m)| is non-zero and independent of s;

  • dX|{s:fs(x)=m}| is non-zero and independent of x.

For any biregular function f:S×XN and any mM, we can define a doubly stochastic matrix Pf,m14 with coefficients defined as
Pf,m(x,x)1dSdX|{s:fs(x)=fs(x)=m}|.
(32)
In other words, Pf,m(x, x′) is the normalized number of seeds sS such that both x and x′ are in the preimage fs1(m). Since Pf,m is stochastic, its largest eigenvalue is 1 and we define λ2(f, m) to be the second largest singular value of Pf,m.

Definition 5.2

(BRI functions). Let S, X, N be finite sets. A biregular function f:S×XN is called irreducible if 1 is a simple eigenvalue of Pf,m, namely, if λ2(f, m) < 1, for every mM.

Note that dS and dX might depend on m. However, for the known BRI function construction, these are, indeed, a constant parameter.14 

Biregularity puts a strong restriction on the behavior of the function. In particular, any m is a possible output of any s or x with the right (s, x) pair. If for a fixed m, we consider the incidence matrix Isx=δm,fs(x), which we can think of it as representing the m section of the graph of fs(x), then we can visualize items (a) and (b) as Isx having the same number of 1’s in each row, and similarly in each column. For example, ignoring Definition 5.2 and omitting the zeros, a possible Isx for a given m might look like
XS111111111111111111111111,
with dS=4 and dX=3. An important consequence is the following relation:
dX|X|=dS|S|,
(33)
easily derived from
|{(s,x):fs(x)=m}|=s|x:fs(x)=m|=sdS=dS|S|=x|s:fs(x)=m|=xdX=dX|X|.
In Ref. 14, it was shown that for every d ≥ 3 and kN, there exists a BRI function f:S×SM, satisfying |S|=2kd, |M|=2k, and
λ2(f,m)4d.
(34)
For our constructions, the above is all we need to know; we will not need to know how these functions are constructed.
The above are key features that are used to provide security. BRI functions play the equivalent role of hash functions for strong secrecy, so we need to show how they can be used to reduce the Holevo information at the output of the channel, which is what we will do in the remainder of this subsection. This channel must not be confused with the wiretap, which we will consider only later in Subsection V B. For the rest of this subsection, we fix a finite set X; a quantum system H; a classical-quantum channel
V:XS(H),
where V is not necessarily a wiretap; and the random variables
S,uniform random variable over the seed S,M,random variable over the message M,
which are always assumed to be independent. In Subsection V B, V will actually be the composition of the encoder of the transmission code with the actual wiretap; thus, X will be the message space of the transmission code. The space M will be the message space of the wiretap code, and S will be the space of the common randomness. Given a seed s, the encoding of a message m happens by picking an uniformly random element of the preimage fs1(m). The definition of BRI functions and these conditions, then, are such that fixing the message and choosing the seed at random produce a uniformly random encoding, as will be explained now more precisely. For this purpose, with some abuse of notation, we will allow classical-quantum channels to take subsets as inputs, with the convention that the resulting state is the uniform mixture over the outputs of the elements in the set. Namely, for DX, we will define
V(D)1|D|xDV(x).
In particular, the above equation defines
Vfs1(m)=1dSxfs1(m)V(x),V(X)=1|X|xXV(x),
which we will repeatedly use. In particular, from Eq. (33), stating that dX|X|=dS|S|, it follows immediately that
P[fS1(m)=x]=1|S|s1dS1{fs(x)=m}=dXdS|S|=1|X|,
(35)
and thus, for any mM,
ESVfS1(m)=V(X),
(36)
which means that not knowing the seed makes the output independent of the message.

We can now start bounding the information at the output of the channel V and ultimately will need to be able to show the semantic secrecy conditions. As said, we focus for now on the common-randomness semantic security, Eq. (16) of Definition 3.7, which means bounding the leakage defined in Eq. (3) of Definition 3.3. We do this, in general, for the output of any channel, irrespective of actual encodings, and, therefore, will upper bound a general leakage χM;S,VfS1. We begin by converting the leakage from a Holevo quantity to a relative entropy.

Lemma 5.3.
For any random variable M over M independent of the uniform seed S, it holds that
χ(M;S,VfS1)maxmMESDVfS1(m)V(X).

Proof.
Denote σs,mVfs1(m). We have
χ(M;S,VfS1)=1SsS1Mmσs,m1S1MsmSσs,m=S1S1Msmσs,m1S1MsmSσs,mχ(S;VfS1)=χ(M,S;VfS1)χ(S;VfS1).
Therefore,
χ(M;S,VfS1)χ(M,S;VfS1)=EMESDVfS1(M)EMESVfS1(M)maxmMESDVfS1(m)EMESVfS1(M).
The equality holds because of Eq. (2), where we consider the classical-quantum channel S×XS(H) that maps s,mVfs1(m). It only remains to show that
EMESVfS1(M)=V(X),
but this follows immediately from Eq. (36), namely, from ESVfS1(m)=V(X), and the proof is concluded.

For the next step, we will define subnormalized classical-quantum channels. Later, we will project onto the typical subspace and discard the rest.

Definition 5.4.
Let ϵ ≥ 0. An ϵ-subnormalized classical-quantum channel V:XL(H) is a map satisfying V′(x) ≥ 0 and 1 − ϵ ≤ trV′(x) ≤ 1 for all xX. Since for ϵ > ϵ, an ϵ-subnormalized channel is also ϵ-subnormal, we call all 0-subnormalized classical-quantum channels simply subnormal. Now, let V:XS(H) be a classical-quantum channel. Let V:XL(H) be a subnormalized classical-quantum channel. We say that
VV
if V′(x) ≤ V(x) for all xX.

The ordering definition reflects what we obtain when we project a channel on a subspace, and we obtain a subnormalized channel that is less than the original channel in an operator ordering sense. When we project onto the typical subspace, we only change the channel a little, and we want to make sure that our upper bound only changes a little. This is the statement of the next lemma.

Lemma 5.5.
Let V: XS(H) be a classical-quantum channel. Let ϵ > 0. Let V:XL(H) be an ϵ-subnormalized classical-quantum channel such that V′ ≤ V. Then, for any fixed mM, it holds that
ESDVfS1(m)V(X)ESDVfS1(m)V(X)+ϵlog|X|dS.

Proof.
Let ρ, ρ′, σ, and σ′ be subnormalized quantum states on the same system such that the sum is a normalized state, namely, so that tr(ρ + ρ′) = tr(σ + σ′) = 1. Consider classical-quantum states of the form 00ρ+11ρ and 00σ+11σ. By monotonicity of the relative entropy (cf. Sec. II) under the trace, we note that
D(ρ+ρσ+σ)D(00ρ+11ρ00σ+11σ)=D(ρσ)+D(ρσ).
We define VΔVV′ and apply the above equation to
ρVfs1(m),ρVΔfs1(m),σV(X),σVΔ(X),
which satisfy Vfs1(m)=ρ+ρ and V(X)=σ+σ, and we obtain
DVfs1(m)V(X)DVfs1(m)V(X)+DVΔfs1(m)VΔ(X).
(37)
Note that
VΔfs1(m)=1dSxfs1(m)VΔ(x)1dSxXVΔ(x)=|X|dSVΔ(X).
This implies that supp(VΔfs1(m))supp(VΔ(X)) and, by operator monotonicity of the logarithm (cf. Ref. 20), that log(VΔfs1(m))log(VΔ(X)). Thus,
DVΔfs1(m)VΔ(X)=tr[VΔfs1(m)logVΔfs1(m)logVΔ(X)]trVΔfs1(m)log|X|dSVΔ(X)logVΔ(X)=trVΔfs1(m)log|X|dSϵlog|X|dS,
where we used that trVΔfs1(m)ϵ. Plugging this into Eq. (37), we obtain the claim.

Lemma 5.6.
Let ϵ > 0, and let V:XS(H) be an ϵ-subnormalized classical-quantum channel. For any fixed mM, it holds that
ESDVfS1(m)V(X)logESexpD2VfS1(m)V(X)+ϵ.

Lemma 5.6 is just the quantum version of Lemma 23 in Ref. 14 and can be shown by similar techniques. For the sake of completeness, we deliver a proof here.

Proof.
By the ordering relation and the convergence of the α-Rényi relative entropy for quantum states we mentioned in Sec. II, it holds that D(ρσ) ≤ D2(ρσ), and thus, we can bound any relative entropy term D(), where p and q are probabilities and ρ and σ are states such that supp(ρ) ⊂ supp(σ) with the 2-Rényi relative entropy as follows [note that Eq. (38) holds trivially if supp(ρ) is not in supp(σ)]:
D(pρqσ)=pD(ρσ)+plogpqpD2(ρσ)+plogpq=ptrlog[ρ2σ1]+plogpq=ptrlog[(pρ)2(qσ)1]plogp=pD2(pρqσ)+(1p)D2(pρqσ)+(1p),
(38)
where we used −p log p ≤ 1 − p.
We apply this to
pρVfs1(m)qσV(X),
which gives
1p=1trVfs1(m)ϵ,
and thus,
D(Vfs1(m)V(X))D2Vfs1(m)V(X)+ϵ.
We apply the expectation value on both sides, and by the convexity of the exponential function, we have
ESD(Vfs1(m)V(X))ESD2VfS1(m)V(X)+ϵlogESexpD2VfS1(m)V(X)+ϵ.
This concludes the proof.□

Lemma 5.7.
Let V:XL(H) be a subnormalized classical-quantum channel. For every mM, we have
ESexpD2VfS1(m)V(X)λ2(f,m)rank[V(X)]maxxXV(x)+1,
(39)
where λ2(f, m) is the second largest singular value of Pf,m defined in Eq. (32), and the expectation value is over uniformly random sS.

Lemma 5.7 is a form of leftover hash lemma for classical-quantum channels.

Proof.
Recall that if supp(ρ) ⊂ supp(σ), then expD2ρσ=trρ2σ1, where σ−1 is the pseudo-inverse. By linearity, the definition of the BRI function and by expanding the mixtures for every m, we obtain
ESexpD2VfS1(m)V(X)=EStr(VfS1(m))2V(X)1=1|S|sStr1dS2x,xXV(x)δm,fs(x)δm,fs(x)V(x)(V(X))1=dX|S|dSx,xXtrV(x)1dSdXsSδfs(x),mδfs(x),mV(x)(V(X))1=1|X|x,xXtrV(x)Pf,m(x,x)V(x)(V(X))1,
where we applied the expression of Pf,m from Definition 5.2 and, then, Eq. (33).
Let now ρ and σ be two states with supp(ρ) ⊂ supp(σ). Let {vi} be an orthonormal basis of eigenvectors to the non-zero eigenvalues of σ, and let ρij=viρvj. Then,
trρ2σ1=ijρijρjiσii1.
Note that supp(V(x))supp(V(X)) for all x, so we can apply this to the above. Let {λi : i} be the non-zero eigenvalues of V(X) with a set of orthonormal eigenvectors {|vi⟩ : i}. We now use the notation Vij for the functions Vij(x)=viV(x)vj. Because V′(x) are Hermitian, we have Vij=Vji*, and thus, we can write
ESexpD2VfS1(m)V(X)=1|X|x,xXijVij(x)Pf,m(x,x)Vji(x)1Vii(X)=1|X|ij1Vii(X)VijPf,mVij,
(40)
where Vij are complex vectors in CX.
We now use the following well-known result (see, e.g., Ref. 40) (originally stated for real vectors—see Lemma 5.9 for the generalization of the proof to complex vectors). Pf,m is a symmetric stochastic matrix in an |X|-dimensional real space with λ2(f, m) < 1 denoting the second-largest eigenvalue. By construction and assumption, 1 is the largest eigenvalue, and it is simple (non-degenerate). Then, for any two vectors in ω and ω′ in this space, it holds that
ωPf,mωλ2(f,m)ωω+ω11ω,
(41)
where 1 is the normalized all-one vector, namely, 1=1|X|(1,,1).
We, thus, have
ESexpD2VfS1(m)V(X)ij|X|1Vii(X)[λ2(f,m)VijVij+Vij11Vij].
However, note that by construction,
Vij11Vij=|X|Vij*(X)Vij(X)=|X|δij(Vii(X))2,
because the choice of basis is an eigenbasis of V(X). We can, thus, simplify the expression to
ESexpD2VfS1(m)V(X)1|X|λ2(f,m)ijVijVijVii(X)+iVii(X)=1|X|λ2(f,m)ijxVij(x)Vji(x)Vii(X)+tr[V(X)]=1|X|λ2(f,m)xtr[V2(x)(V(X))1]+1.
Now, we focus on xtr[V2(x)(V(X))1]. We repeatedly use the cyclic property of the trace and tr AB ≤ ‖Atr|B|, together with the positivity of the states, to obtain the following:
xtr[V2(x)(V(X))1]xV(x)trV(x)(V(X))1V(x)maxxXV(x)xtr[V(x)(V(X))1]=maxxXV(x)|X|tr[V(X)(V(X))1]=maxxXV(x)|X|rank(V(X)).
(42)
Now, we join everything together and obtain
ESexpD2(VfS1)(m)V(X)λ2(f,m) rank[V(X)]maxxXV(x)+1,
(43)
as claimed.□

Note that the bound in Lemma 5.7 is technically different from the classical version in Ref. 14, Lemma 26, which bounds the leakage in terms of a max mutual information. As for now, we are only able to prove the lemma for finite-dimensional quantum systems, while the classical version is also valid for infinite classical systems.

Remark 5.8.

Using different types of functions instead of BRI functions, Hayashi and Matsumoto showed Ref. 15 , Theorem 17 and Lemma 21, which is a result similar to Lemma 5.7 in the case of a single message [i.e., for resolvability (Ref. 15 , Theorem 17)] and for ordinary classical channels. It is straightforward to extend this to the case of several messages and subnormalized channels. The function class of Hayashi and Matsumoto is defined via the function inverses in terms of group homomorphisms. The example given in Ref. 41  uses a short seed for strong security. It is still open whether the seed required in Ref. 15  for semantic secrecy can be as short as for the BRI security functions in this work. The size of the seed is a part of the complexity of the code once it is derandomized; it may partially influence the efficiency of the code and the finite rates achieved for a finite number of channel uses. During the completion of this work, new efficient functions with an efficient randomness size were proven to achieve semantic security in the classical setting.42  We also expect these functions to provide semantic security for quantum channels.

For completeness, the proof of Eq. (41) follows here. Afterward, we will continue the chain of inequalities and bound the V dependent term of Lemma 5.7.

Lemma 5.9.
Let P be a symmetric stochastic real matrix acting on a $|\mathcal X|$-dimensional complex space, and let the eigenvalue 1 be simple. Denote the second-largest eigenvalue modulus of P by $\lambda_2$. For every vector ω in this space, it holds that
$$\langle\omega,\,P\omega\rangle \le \lambda_2\,\langle\omega,\omega\rangle + \langle\omega,\mathbb 1\rangle\langle\mathbb 1,\omega\rangle,$$
where $\mathbb 1$ is the normalized all-one vector, namely, $\mathbb 1 = \frac{1}{\sqrt{|\mathcal X|}}(1,\ldots,1)$.

Proof.
Note that being symmetric and stochastic implies that P is doubly stochastic, so that $P\mathbb 1 = \mathbb 1$ and $\mathbb 1^{T}P = \mathbb 1^{T}$. We first add and remove the $\mathbb 1$ component of ω,
$$\langle\omega, P\omega\rangle = \big\langle \omega - \langle\mathbb 1,\omega\rangle\mathbb 1,\, P\big(\omega - \langle\mathbb 1,\omega\rangle\mathbb 1\big)\big\rangle + \langle\omega,\mathbb 1\rangle\langle\mathbb 1, P\omega\rangle + \langle\omega, P\mathbb 1\rangle\langle\mathbb 1,\omega\rangle - \langle\omega,\mathbb 1\rangle\langle\mathbb 1, P\mathbb 1\rangle\langle\mathbb 1,\omega\rangle = \big\langle \omega - \langle\mathbb 1,\omega\rangle\mathbb 1,\, P\big(\omega - \langle\mathbb 1,\omega\rangle\mathbb 1\big)\big\rangle + \langle\omega,\mathbb 1\rangle\langle\mathbb 1,\omega\rangle,$$
because $\mathbb 1$ is an eigenvector of P with eigenvalue 1. Since 1 is a simple eigenvalue, in the remaining subspace orthogonal to $\mathbb 1$ the largest eigenvalue modulus is $\lambda_2$. Since $\lambda_2$ is positive for such a matrix,40 we have
$$\langle\omega, P\omega\rangle \le \lambda_2\big\langle \omega - \langle\mathbb 1,\omega\rangle\mathbb 1,\, \omega - \langle\mathbb 1,\omega\rangle\mathbb 1\big\rangle + \langle\omega,\mathbb 1\rangle\langle\mathbb 1,\omega\rangle = \lambda_2\big(\langle\omega,\omega\rangle - \langle\omega,\mathbb 1\rangle\langle\mathbb 1,\omega\rangle\big) + \langle\omega,\mathbb 1\rangle\langle\mathbb 1,\omega\rangle \le \lambda_2\langle\omega,\omega\rangle + \langle\omega,\mathbb 1\rangle\langle\mathbb 1,\omega\rangle.$$
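Lemma 5.9 can also be tested numerically. The sketch below builds a symmetric doubly stochastic matrix with strictly positive entries (so that the eigenvalue 1 is simple) and checks the bound for random complex vectors; the construction and parameters are purely illustrative:

import numpy as np

rng = np.random.default_rng(0)
n = 16
# symmetric doubly stochastic matrix: convex mixture of the flat matrix and symmetrized permutations
P = 0.2 * np.ones((n, n)) / n
for w in (0.5, 0.3):
    Q = np.eye(n)[rng.permutation(n)]
    P += w * (Q + Q.T) / 2
eigs = np.linalg.eigvalsh(P)
lam2 = np.sort(np.abs(eigs))[-2]        # second-largest eigenvalue modulus
one = np.ones(n) / np.sqrt(n)           # normalized all-one vector
for _ in range(1000):
    omega = rng.normal(size=n) + 1j * rng.normal(size=n)
    lhs = np.vdot(omega, P @ omega).real
    rhs = lam2 * np.vdot(omega, omega).real + abs(np.vdot(one, omega)) ** 2
    assert lhs <= rhs + 1e-9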

Putting all the results above together, we obtain the following single statement for BRI functions.

Corollary 5.10.
Let ϵ > 0, let $V: \mathcal X\to\mathcal S(\mathcal H)$ be a classical-quantum channel, and let $V': \mathcal X\to\mathcal L(\mathcal H)$ be an ϵ-subnormalized classical-quantum channel such that $V'\le V$. For any random variable M over $\mathcal M$ independent of the uniform seed S, it holds that
$$\chi(M; S, Vf_S^{-1}) \le \frac{1}{\ln 2}\Big(\max_{m\in\mathcal M}\lambda_2(f,m)\operatorname{rank}\big[V'(X)\big]\max_{x\in\mathcal X}\|V'(x)\| + \epsilon\Big) + \epsilon\log|\mathcal X|d_S.$$

Proof.
Joining Lemmas 5.3 and 5.5–5.7, we directly obtain
$$\chi(M;S,Vf_S^{-1}) \le \log\Big(1 + \max_{m\in\mathcal M}\lambda_2(f,m)\operatorname{rank}\big[V'(X)\big]\max_{x\in\mathcal X}\|V'(x)\| + \epsilon\Big) + \epsilon\log|\mathcal X|d_S.$$
The result follows simply from log(1 + x) ≤ x/ln 2.□
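The elementary estimate used in the last step, $\log(1+x)\le x/\ln 2$ for $x\ge 0$ (logarithm to base 2), can be checked directly, for instance:

import numpy as np

x = np.linspace(0.0, 10.0, 1001)
assert np.all(np.log2(1.0 + x) <= x / np.log(2.0) + 1e-12)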

In Sec. V B, we will finally define what a code using BRI functions looks like. The chain of lemmas above will allow us to prove that we can achieve capacity with such codes. There, the classical-quantum channels of the lemmas above will be the classical-quantum channel generated by a transmission code around the actual wiretap channel. Hence, V above should not be confused with the actual wiretap channel; instead, it will be the composition of the wiretap channel $V^{\otimes n}$ and the encoder. This is why all the lemmas above are single-letter.

We can now prove the final statements. As will be noted in the next proof, BRI functions are not used to upgrade the strong secrecy achieved by, e.g., a hash function or any strong-secrecy capacity-achieving code. Instead, the BRI functions replace hash functions and directly produce a semantic-secrecy capacity-achieving code out of a capacity-achieving error-correction code.

Let us now fix an actual wiretap channel. We fix a finite space $\mathcal X$, two finite-dimensional quantum systems $\mathcal H$ and $\mathcal H'$, and a classical-quantum wiretap channel (W, V) defined by the classical-quantum channels $W: \mathcal X\to\mathcal S(\mathcal H)$ and $V: \mathcal X\to\mathcal S(\mathcal H')$. For reference, recall that an $(n, |\mathcal S|, |\mathcal M|)$ common-randomness code is a finite subset $\{\mathcal C_s = \{(E^s_m, D^s_m): m\in\mathcal M\}: s\in\mathcal S\}$ of the set of $(n, |\mathcal M|)$ codes, labeled by a finite set $\mathcal S$, the common randomness. We then define BRI modular codes as follows:

Definition 5.11.

Let $\mathcal S$, $\mathcal M$, and $\mathcal C$ be the finite sets for the spaces of the seeds, messages, and encodings. Let $\{x_c^n, D_c\}_{c\in\mathcal C}$ be an $(n, |\mathcal C|)$ code for W, and let $f: \mathcal S\times\mathcal C\to\mathcal M$ be a BRI function. We define their BRI modular code to be the common-randomness code such that for every seed $s\in\mathcal S$ and message $m\in\mathcal M$, we have the following:

  1. The encoder $E^s_m$ is the uniform distribution over $\{x_c^n : c\in f_s^{-1}(m)\}$.

  2. The decoder is $D^s_m = \sum_{c\in f_s^{-1}(m)} D_c$.

Note that, in practice, for the decoder, it will be more straightforward to simply decode c and then directly compute $f_s(c)$ instead of implementing the coarse-grained decoding operators.
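The following toy sketch only illustrates the encode/decode mechanics of Definition 5.11 and the remark above. The function f used here is a simple modular stand-in invented for the example; it is not a BRI function from Ref. 14, and codeword indices take the place of actual codewords:

import random
from collections import defaultdict

S, C, M = range(4), range(16), range(4)

def f(s, c):
    # hypothetical stand-in for a BRI function
    return (c + s) % len(M)

# pre-image sets f_s^{-1}(m), used by both encoder and decoder
preimage = defaultdict(list)
for s in S:
    for c in C:
        preimage[(s, f(s, c))].append(c)

def encode(s, m):
    # encoder E_m^s: uniform distribution over f_s^{-1}(m)
    return random.choice(preimage[(s, m)])

def decode(s, c_hat):
    # decode the codeword index first, then apply f_s (cf. the remark above)
    return f(s, c_hat)

s, m = 2, 3
assert decode(s, encode(s, m)) == m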

Theorem 5.12.

For any probability distribution P over X, we have the following:

  1. There exist BRI modular codes achieving the semantic secrecy rate $\chi(P;W) - \chi(P;V)$ using codes achieving the transmission rate $\chi(P;W)$.

  2. The same rate is achievable with their derandomized codes.

Note that this theorem implies that the classical-quantum wiretap channel capacity can also be achieved with such modular codes; thus, in particular, the second point of the theorem also provides an alternative proof of Theorem 4.1 with the BRI scheme in the case of finite-dimensional channels. Indeed, since the theorem holds for all P, we can also directly achieve the supremum. This single-letter formula then implies the multi-letter formula by a standard argument. More precisely, we can write the classical-quantum wiretap capacity as
$$C_w(W,V) = \sup_{n\in\mathbb N}\frac{1}{n}\max_{E} C_{w,1}\big(EW^{\otimes n}, EV^{\otimes n}\big),$$
(44)
where
$$C_{w,1}(W,V) \coloneqq \max_P\big(\chi(P;W) - \chi(P;V)\big).$$
(45)
Then, the standard argument, which we reproduce in Lemma 5.13 for completeness, shows that if $C_{w,1}$ is achievable by a class of codes, it automatically follows that $C_w$ is also achievable.

Finally, the finite block length results can be extracted by looking at Eqs. (47) and (51) in the proof, and they depend on the finite-block length parameters of the chosen transmission code. Similar finite-block length results for derandomized codes are found in Eq. (53).

Proof.

Fix an arbitrary distribution P and any ϵ > 0, and let δ be a positive number, which will later be chosen as a function of ϵ. By Refs. 43–45, there exists γ > 0 such that, for sufficiently large n, there exists an $(n, |\mathcal X|)$ transmission code $\{E_x, D_x : x\in\mathcal X\}$ for W whose rate is at least $\chi(P;W) - \epsilon/2$, whose maximal error probability is at most $2^{-n\gamma}$, and whose codewords, moreover, are all δ-typical, namely, the encoders satisfy $E(T^n_{P,\delta}\,|\,x) = 1$ for all messages $x\in\mathcal X$. (For the definition of $T^n_{P,\delta}$, see the Appendix. For an explicit proof that the error can be made to decrease exponentially, see, e.g., Ref. 37, Lemma 4.1.)

We have to find a suitable BRI function in order to ensure semantic security. By enlarging n if necessary, we have enough flexibility to choose integers k and d satisfying
$$n\Big(\chi(P;V)+\frac{\epsilon}{4}\Big) \le \log d \le n\Big(\chi(P;V)+\frac{\epsilon}{2}\Big), \qquad n\Big(\chi(P;W)-\frac{\epsilon}{2}\Big) \le k+\log d \le \log|\mathcal X|.$$
As previously mentioned, we can choose BRI functions $f: \mathcal S\times\mathcal S\to\mathcal M$ from Ref. 14, satisfying $|\mathcal S| = 2^k d$, $|\mathcal M| = 2^k$, and
$$\lambda_2(f,m) \le \frac{4}{d} \le 4\cdot 2^{-n(\chi(P;V)+\epsilon/4)}.$$
(46)
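As a small numerical illustration of this parameter bookkeeping (the values of $\chi(P;W)$, $\chi(P;V)$, $\epsilon$, and n below are invented for the example and are not taken from the paper), one can check that integers k and d satisfying the constraints above exist and give the rate claimed in Eq. (47):

import math

chi_W, chi_V, eps, n = 1.0, 0.4, 0.1, 2000     # illustrative values only
log_d = math.ceil(n * (chi_V + eps / 4))       # n(chi_V + eps/4) <= log d <= n(chi_V + eps/2)
log_X = math.ceil(n * (chi_W - eps / 2))       # codebook size of a rate-(chi_W - eps/2) code
k = log_X - log_d                              # then n(chi_W - eps/2) <= k + log d <= log|X|
assert n * (chi_V + eps / 4) <= log_d <= n * (chi_V + eps / 2)
assert n * (chi_W - eps / 2) <= k + log_d <= log_X
assert k / n >= chi_W - chi_V - eps            # the semantic-secrecy rate of Eq. (47)
assert 4 * 2 ** (-log_d) <= 4 * 2 ** (-n * (chi_V + eps / 4))   # Eq. (46): lambda_2 <= 4/d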
We think of $\mathcal S$ as a subset of $\mathcal X$ and define $\{\mathcal C_s \coloneqq \{E^s_m, D^s_m : m\in\mathcal M\}\}_{s\in\mathcal S}$ to be the BRI modular code constructed from $\mathcal C \coloneqq \{E_s, D_s : s\in\mathcal S\}$ and f. Its rate clearly satisfies
$$\frac{\log|\mathcal M|}{n} = \frac{k}{n} \ge \chi(P;W) - \chi(P;V) - \epsilon,$$
(47)
and the maximal error probability is no larger than that of the transmission code, i.e., it is upper-bounded by $2^{-n\gamma}$. In order to evaluate the security of the BRI modular code, we define the classical-quantum channel $U \coloneqq EV^{\otimes n}$ and upper-bound
$$\chi(M; S, E_S V^{\otimes n}) = \chi(M; S, U f_S^{-1})$$
for any random variable M on M independent of the uniform seed S.
We introduce a subnormalized classical-quantum channel $V': T^n_{P,\delta}\to\mathcal L(\mathcal H'^{\otimes n})$ by defining
$$V'(x^n) = \Pi^n_{PV,\delta}\,\Pi^n_{V,\delta}(x^n)\,V^{\otimes n}(x^n)\,\Pi^n_{V,\delta}(x^n)\,\Pi^n_{PV,\delta}.$$
(48)
For the definition of $\Pi^n_{PV,\delta}$ and $\Pi^n_{V,\delta}(x^n)$, see the Appendix. By Lemma A.1, $V'$ is a $2^{-n\eta(\delta)}$-subnormalized classical-quantum channel satisfying $V'(x^n)\le V^{\otimes n}(x^n)$ for all $x^n\in T^n_{P,\delta}$. Since all codewords are contained in $T^n_{P,\delta}$,
$$U' \coloneqq EV'$$
is a $2^{-n\eta(\delta)}$-subnormalized classical-quantum channel satisfying $U'\le U$, and Corollary 5.10 and Eq. (46) imply
$$\chi(M; S, E_S V^{\otimes n}) \le \frac{4}{d\ln 2}\operatorname{rank}\big[U'(X)\big]\max_{x\in\mathcal X}\|U'(x)\| + 2^{-n\eta(\delta)}(k+1).$$
(49)
Since the inputs are chosen from the set of typical sequences $T^n_{P,\delta}$, Lemma A.1 can be used to find
$$\operatorname{rank}\big[U'(X)\big]\max_{x\in\mathcal X}\|U'(x)\| \le \operatorname{rank}\big[V'(X^n)\big]\max_{x^n\in T^n_{P,\delta}}\|V'(x^n)\| \le 2^{n(\chi(P;V)+\gamma''(\delta))}.$$
Inserting this and Eq. (46) into Eq. (49) gives
$$\chi(M;S,E_S V^{\otimes n}) \le \frac{4}{\ln 2}\,2^{-n(\epsilon/4-\gamma''(\delta))} + (k+1)\,2^{-n\eta(\delta)}.$$
(50)
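To see the exponential decay concretely, one can evaluate this bound for invented values of the constants (purely illustrative; none of these numbers come from the paper):

import numpy as np

eps, gamma2, eta, R = 0.1, 0.01, 0.02, 0.5     # illustrative values with gamma''(delta) < eps/4
for n in (200, 400, 800):
    k = int(R * n)                             # k is n times the rate of the common-randomness code
    bound = 4 / np.log(2) * 2.0 ** (-n * (eps / 4 - gamma2)) + (k + 1) * 2.0 ** (-n * eta)
    print(n, bound)                            # the bound of Eq. (50) decreases exponentially in n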
Since k is n times the rate of our common-randomness code, by Theorem 4.1 it cannot grow faster than $nC_w(W, V)$, and thus,
$$\chi(M;S,E_S V^{\otimes n}) \le \frac{4}{\ln 2}\,2^{-n(\epsilon/4-\gamma''(\delta))} + \big(nC_w(W,V)+\epsilon+1\big)\,2^{-n\gamma}.$$
(51)
Now, choose δ small enough for $\gamma''(\delta) < \epsilon/4$ to hold. Then, this upper bound tends to zero at exponential speed with n. Hence, as the block length n increases, our BRI modular code $\{\mathcal C_s : s\in\mathcal S\}$ achieves the rate $\chi(P;W) - \chi(P;V)$ with exponentially decreasing error probability and leakage.

As previously mentioned, the codes we constructed use common randomness. This allows us to simply provide the seed needed by the BRI modular code and keep the proof focused on the properties of the BRI function. We now derandomize these codes. Note, however, that this is a standard procedure and does not really depend on the structure of the BRI modular codes, but only on the scaling of their size and errors.

We derandomize, according to Definition 3.8, the BRI modular code above. We set n′ = n and share the seed with the same transmission code $\mathcal C$ used to construct the BRI modular code. For the number of reuses of the seed, we need to choose a sequence $(N(n))_{n\in\mathbb N}$ such that $1 \ll N(n) \ll 2^{n\gamma}$, $N(n) \ll 2^{n(\epsilon/4-\gamma''(\delta))}$, and $N(n) \ll \big(nC_w(W,V)+\epsilon+1\big)^{-1}2^{n\gamma}$. For simplicity, it suffices to choose N(n) = n − 1, and thus we define $\bar{\mathcal C}$ as the (n − 1)-derandomized code constructed from $\mathcal C$ and $\{\mathcal C_s : s\in\mathcal S\}$. The total number of channel uses is then $n^2$. By Lemma 3.9, we have an $(n^2,\, 2^{(n-1)k},\, n\,2^{-n\gamma})$ semantic-secrecy code. Since now we have $|\mathcal M_{n-1}| = 2^{(n-1)k}$, similarly to Eq. (47), we have
$$\frac{\log|\mathcal M_{n-1}|}{n^2} = \frac{n-1}{n}\cdot\frac{k}{n} \ge \Big(1-\frac{1}{n}\Big)\big(\chi(P;W)-\chi(P;V)-\epsilon\big).$$
(52)
When we insert Eq. (51) into Eq. (17), we obtain
$$\chi(M;\bar E V^{\otimes n^2}) \le \frac{4n}{\ln 2}\,2^{-n(\epsilon/4-\gamma''(\delta))} + n\big(nC_w(W,V)+\epsilon+1\big)2^{-n\gamma}.$$
(53)
The same argument as for the BRI modular code now applies. Since δ was chosen to satisfy $\gamma''(\delta) < \epsilon/4$, this upper bound still tends to zero with n, and our derandomized BRI modular code achieves the rate $\chi(P;W) - \chi(P;V)$.□

The following is the standard statement that any single-letter achievable rate implies the corresponding multi-letter achievable rate. We give a proof for completeness.

Lemma 5.13.
If C1(W, V) is an achievable rate, then
$$C(W,V) \coloneqq \sup_{n\in\mathbb N}\frac{1}{n}\max_E C_1\big(EW^{\otimes n}, EV^{\otimes n}\big),$$
(54)
where the maximum is over finite sets $\mathcal A$ and stochastic mappings $E: \mathcal A\to\mathcal X^n$, is also an achievable rate.

Proof.

In order to show that C(W, V) is also achievable, given that $C_1(W, V)$ is achievable, we pick any n and E. We obtain a new classical-quantum wiretap channel $(EW^{\otimes n}, EV^{\otimes n})$ for which we know that the rate $C_1(EW^{\otimes n}, EV^{\otimes n})$ is achievable. Specifically, for any ϵ > 0 and sufficiently large n′, there exists an $[n', C_1(EW^{\otimes n}, EV^{\otimes n}) - \epsilon, \epsilon]$ code $\{E_m, D_m : m\in\mathcal M\}$ for $(EW^{\otimes n}, EV^{\otimes n})$. The error and leakage depend directly only on the compositions of the encoder with the channels, $E_m(EW^{\otimes n})^{\otimes n'}$ and $E_m(EV^{\otimes n})^{\otimes n'}$; these compositions do not change when the code $\{E_m E^{\otimes n'}, D_m : m\in\mathcal M\}$ is used on (W, V), and thus the latter is an $[nn', (C_1(EW^{\otimes n}, EV^{\otimes n}) - \epsilon)/n, \epsilon]$ code for (W, V). Therefore, $C_1(EW^{\otimes n}, EV^{\otimes n})/n$ is achievable for (W, V). Since the above holds for all n and E, taking the supremum concludes the proof.□

In this section, we showed that there exist modular coding schemes constructed from suitable transmission codes and BRI functions, which achieve the secrecy capacity of the classical-quantum wiretap channel and provide semantic security. Compared to the results of Sec. IV, the message sets of these modular codes are given explicitly via the BRI function. In particular, they do not depend on the wiretap channel.

In classical information theory, not only discrete channels but also continuous channels are important subjects of study. In Ref. 14, semantic security was demonstrated for both discrete and continuous channels. It will therefore be very interesting to analyze whether we can extend these results to continuous quantum channels. As mentioned above, the results of Ref. 14 show how a non-secure code can be transformed into a semantically secure code. A promising next step is thus to analyze whether these results can be extended to non-secure codes for continuous quantum channels, e.g., classical-quantum Gaussian channels, which are continuous-variable classical-quantum channels subject to Gaussian-distributed thermal noise.46 Furthermore, similarly to the discrete case, one can consider that the eavesdropper has access to the environment's final state47 for continuous quantum channels as well. It will thus be an interesting further step to analyze whether the results of Sec. III D can be extended to continuous quantum channels. A further direction is the extension of these techniques to more complicated networks, e.g., arbitrarily varying wiretap channels. This is also currently still open for classical networks.

Holger Boche, Minglai Cai, Christian Deppe, and Roberto Ferrara were supported by the German Federal Ministry of Education and Research (BMBF) through Grant Nos. 16KISQ028 (C.D., R.F.), 16KISQ020 (H.B.), 16KIS0948 (H.B., M.W.), and 16KISQ038 (C.D., R.F.). We acknowledge the Research Hub 6G-life under Grant No. 16KISK002 for their support to Holger Boche and Christian Deppe. Holger Boche and Moritz Wiese were supported by the German Research Foundation (DFG) within Germany’s Excellence Strategy—Grant No. EXC 2092 CASA-390781972.

The authors have no conflicts to disclose.

Holger Boche: Writing – original draft (supporting); Writing – review & editing (supporting). Minglai Cai: Writing – original draft (equal); Writing – review & editing (equal). Christian Deppe: Writing – original draft (equal); Writing – review & editing (equal). Roberto Ferrara: Writing – original draft (equal); Writing – review & editing (equal). Moritz Wiese: Writing – original draft (equal); Writing – review & editing (equal).

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

We now bound the V′-dependent term of Lemma 5.7. Before stating the actual lemma, we need to recall some facts about typical sequences and typical operators, as can be found, e.g., in Ref. 22.

Let $\mathcal X$ be a finite set. Let P be a probability function on $\mathcal X$. Let δ > 0 and $n\in\mathbb N$. The set $T^n_{P,\delta}$ of typical sequences of P consists of those $x^n\in\mathcal X^n$ satisfying the following.

  • $\big|\frac{1}{n}N(a\mid x^n) - P(a)\big| \le \frac{\delta}{|\mathcal X|}$ for all $a\in\mathcal X$.

  • $N(a\mid x^n) = 0$ if $P(a) = 0$, for all $a\in\mathcal X$,

where $N(a\mid x^n)$ is the number of occurrences of the symbol $a\in\mathcal X$ in the sequence $x^n$.
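For concreteness, membership in $T^n_{P,\delta}$ can be checked directly from this definition (a small Python sketch; the distribution and sequences are illustrative):

from collections import Counter

def is_typical(xn, P, delta):
    n = len(xn)
    counts = Counter(xn)
    # condition 1: empirical frequencies are delta/|X|-close to P
    if any(abs(counts[a] / n - p) > delta / len(P) for a, p in P.items()):
        return False
    # condition 2: symbols of probability zero do not occur
    return all(a in P and P[a] > 0 for a in counts)

P = {'0': 0.75, '1': 0.25}
print(is_typical('0' * 75 + '1' * 25, P, delta=0.05))   # True
print(is_typical('1' * 100, P, delta=0.05))             # False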

Let $\mathcal H$ be a finite-dimensional complex Hilbert space. Let $\rho\in\mathcal S(\mathcal H)$ be a state with spectral decomposition $\rho = \sum_x P(x)|x\rangle\langle x|$. (For any other basis of eigenvectors, the same statements remain valid.) The δ-typical subspace is defined as the subspace spanned by $\{|x^n\rangle : x^n\in T^n_{P,\delta}\}$, where $|x^n\rangle \coloneqq \bigotimes_{i=1}^n |x_i\rangle$. The orthogonal projector onto the δ-typical subspace is given by
$$\Pi^n_{\rho,\delta} = \sum_{x^n\in T^n_{P,\delta}} |x^n\rangle\langle x^n|$$
and satisfies the following properties. There are positive constants α(δ), β(δ), and γ(δ) depending on δ such that for large enough n,
$$\operatorname{tr}\big(\rho^{\otimes n}\,\Pi^n_{\rho,\delta}\big) > 1 - 2^{-n\alpha(\delta)},$$
(A1)
$$2^{n(S(\rho)-\beta(\delta))} \le \operatorname{tr}\big(\Pi^n_{\rho,\delta}\big) \le 2^{n(S(\rho)+\beta(\delta))},$$
(A2)
$$2^{-n(S(\rho)+\gamma(\delta))}\,\Pi^n_{\rho,\delta} \le \Pi^n_{\rho,\delta}\,\rho^{\otimes n}\,\Pi^n_{\rho,\delta} \le 2^{-n(S(\rho)-\gamma(\delta))}\,\Pi^n_{\rho,\delta}.$$
(A3)
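The following small sketch illustrates these quantities for a single qubit, where $\rho^{\otimes n}$ is diagonal in the computational basis and $\operatorname{tr}(\rho^{\otimes n}\Pi^n_{\rho,\delta})$ is simply the total probability of the typical sequences; the parameter values are illustrative, and the printed numbers only indicate the trends described by Eqs. (A1) and (A2):

import numpy as np
from itertools import product

p, delta, n = 0.8, 0.3, 14                      # rho = diag(p, 1-p); illustrative values
P = {0: p, 1: 1 - p}
S = -sum(q * np.log2(q) for q in P.values())    # S(rho)

def typical(xn):
    return all(abs(xn.count(a) / n - P[a]) <= delta / len(P) for a in P)

T = [xn for xn in product((0, 1), repeat=n) if typical(xn)]
prob_T = sum(np.prod([P[a] for a in xn]) for xn in T)   # tr(rho^n Pi), cf. Eq. (A1)
print(prob_T)                                   # tends to 1 as n grows
print(len(T), 2 ** (n * S))                     # tr Pi versus 2^{n S(rho)}, cf. Eq. (A2)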
Similarly, let $\mathcal X$ be a finite set and $\mathcal H$ be a finite-dimensional complex Hilbert space. Let $V: \mathcal X\to\mathcal S(\mathcal H)$ be a classical-quantum channel. For $a\in\mathcal X$, suppose that V(a) has the spectral decomposition $V(a) = \sum_j V(j|a)|j\rangle\langle j|$ for a stochastic matrix V(·|·). The α-conditional typical subspace of V for a typical sequence $a^n$ is the subspace spanned by $\bigotimes_{a\in\mathcal X}\{|j^{I_a}\rangle : j^{I_a}\in T^{I_a}_{V(\cdot|a),\alpha}\}$. Here, $I_a \coloneqq \{i\in\{1,\ldots,n\} : a_i = a\}$ is an indicator set that selects the indices i in the sequence $a^n = (a_1,\ldots,a_n)$ for which the ith symbol $a_i$ is equal to $a\in\mathcal X$. The subspace is often referred to as the α-conditional typical subspace of the state $V^{\otimes n}(a^n)$. The orthogonal subspace projector that projects onto it is defined as
$$\Pi^n_{V,\alpha}(a^n) = \bigotimes_{a\in\mathcal X}\ \sum_{j^{I_a}\in T^{I_a}_{V(\cdot|a),\alpha}} |j^{I_a}\rangle\langle j^{I_a}|.$$
For $x^n\in T^n_{P,\delta}$, there are positive constants $\alpha(\delta)'$, $\beta(\delta)'$, and $\gamma(\delta)'$ depending on δ such that
$$\operatorname{tr}\big(V^{\otimes n}(x^n)\,\Pi^n_{V,\delta}(x^n)\big) > 1 - 2^{-n\alpha(\delta)'},$$
(A4)
$$2^{n(S(V|P)-\beta(\delta)')} \le \operatorname{tr}\big(\Pi^n_{V,\delta}(x^n)\big) \le 2^{n(S(V|P)+\beta(\delta)')},$$
(A5)
$$2^{-n(S(V|P)+\gamma(\delta)')}\,\Pi^n_{V,\delta}(x^n) \le \Pi^n_{V,\delta}(x^n)\,V^{\otimes n}(x^n)\,\Pi^n_{V,\delta}(x^n) \le 2^{-n(S(V|P)-\gamma(\delta)')}\,\Pi^n_{V,\delta}(x^n).$$
(A6)
For the classical-quantum channel $V: \mathcal X\to\mathcal S(\mathcal H)$ and a probability distribution P on $\mathcal X$, we define a quantum state $PV \coloneqq \sum_a P(a)V(a)$ on $\mathcal S(\mathcal H)$. Clearly, one can then speak of the orthogonal subspace projector $\Pi^n_{PV,\delta}$, fulfilling Eqs. (A1)–(A3). For $\Pi^n_{PV,\delta}$, there is a positive constant $\alpha(\delta)''$ such that for every $x^n\in T^n_{P,\delta}$, the following inequality holds:
$$\operatorname{tr}\big(V^{\otimes n}(x^n)\,\Pi^n_{PV,\delta}\big) \ge 1 - 2^{-n\alpha(\delta)''}.$$
(A7)

Lemma A.1.
Let $V: \mathcal X\to\mathcal S(\mathcal H)$ be a classical-quantum channel. For any δ > 0 and probability distribution P over $\mathcal X$, define the subnormalized classical-quantum channel $V': T^n_{P,\delta}\to\mathcal L(\mathcal H^{\otimes n})$ by
$$V'(x^n) \coloneqq \Pi^n_{PV,\delta}\,\Pi^n_{V,\delta}(x^n)\,V^{\otimes n}(x^n)\,\Pi^n_{V,\delta}(x^n)\,\Pi^n_{PV,\delta}.$$
(A8)
We assume that the inputs are chosen from the set of typical sequences $T^n_{P,\delta}$ for the probability distribution P and a positive δ. Then, $V'\le V^{\otimes n}$. Moreover, there exist positive $\eta(\delta)$ and $\gamma''(\delta)$ such that, if n is sufficiently large, V′ is a $2^{-n\eta(\delta)}$-subnormalized classical-quantum channel and
$$\operatorname{rank}\big[V'(X^n)\big]\,\max_{x^n\in T^n_{P,\delta}}\|V'(x^n)\| \le 2^{n\chi(P;V)+n\gamma''(\delta)}.$$
(A9)

Proof.
It is obvious that $V'\le V^{\otimes n}$. To check that the trace of V′ is close to 1, let $x^n\in T^n_{P,\delta}$ and define
$$V''(x^n) = \Pi^n_{V,\delta}(x^n)\,V^{\otimes n}(x^n)\,\Pi^n_{V,\delta}(x^n).$$
Clearly,
$$\operatorname{tr}\big(V'(x^n)\big) = \operatorname{tr}\big(V''(x^n)\big) - \operatorname{tr}\big((I-\Pi^n_{PV,\delta})\,V''(x^n)\big).$$
By Eq. (A4), $\operatorname{tr}(V''(x^n)) \ge 1 - 2^{-n\alpha(\delta)'}$. In addition, it is clear that $V^{\otimes n}(x^n)$ commutes with $\Pi^n_{V,\delta}(x^n)$ and that $V''(x^n)\le V^{\otimes n}(x^n)$. Therefore,
$$\operatorname{tr}\big((I-\Pi^n_{PV,\delta})\,V''(x^n)\big) \le \operatorname{tr}\big((I-\Pi^n_{PV,\delta})\,V^{\otimes n}(x^n)\big) \le 2^{-n\alpha(\delta)''}.$$
Altogether, if we set
$$\eta(\delta) = \min\{\alpha(\delta)', \alpha(\delta)''\},$$
we obtain that
$$\operatorname{tr}\big(V'(x^n)\big) \ge 1 - 2^{-n\eta(\delta)},$$
so V′ is a $2^{-n\eta(\delta)}$-subnormalized version of $V^{\otimes n}$.
Now, we bound $\max_{x^n\in T^n_{P,\delta}}\|V'(x^n)\|$. By Eq. (A6), for any $x^n\in T^n_{P,\delta}$, we have
$$\|V'(x^n)\| \le \big\|\Pi^n_{V,\delta}(x^n)\,V^{\otimes n}(x^n)\,\Pi^n_{V,\delta}(x^n)\big\|$$
(A10)
$$\le 2^{-n(S(V|P)-\gamma(\delta)')}\,\big\|\Pi^n_{V,\delta}(x^n)\big\|,$$
(A11)
thus implying that
$$\max_{x^n\in T^n_{P,\delta}}\|V'(x^n)\| \le 2^{-n(S(V|P)-\gamma(\delta)')}.$$
(A12)
We finally bound $\operatorname{rank}\big[V'(X^n)\big]$. By Eq. (A2), we have
$$\operatorname{rank}\big[V'(X^n)\big] \le \operatorname{rank}\big[\Pi^n_{PV,\delta}\big] = \operatorname{tr}\Pi^n_{PV,\delta} \le 2^{n(S(PV)+\beta(\delta))}.$$
(A13)
We combine Eqs. (A12) and (A13) and obtain
$$\operatorname{rank}\big[V'(X^n)\big]\,\max_{x^n\in T^n_{P,\delta}}\|V'(x^n)\| \le 2^{n(\chi(P;V)+\beta(\delta)+\gamma(\delta)')}.$$
(A14)

1. A. D. Wyner, "The wire-tap channel," Bell Syst. Tech. J. 54(8), 1355–1387 (1975).
2. N. Cai, A. Winter, and R. W. Yeung, "Quantum privacy and quantum wiretap channels," Probl. Inf. Transm. 40(4), 318–336 (2004).
3. I. Devetak, "The private classical information capacity and quantum information capacity of a quantum channel," IEEE Trans. Inf. Theory 51(1), 44–55 (2005).
4. I. Csiszár, "Almost independence and secrecy capacity," Probl. Inf. Transm. 32(1), 40–47 (1996).
5. U. M. Maurer, "The strong secret key rate of discrete random triples," in Communications and Cryptography: Two Sides of One Tapestry (Kluwer Academic Publishers, 1994), pp. 271–285.
6. M. Bellare, S. Tessaro, and A. Vardy, "Semantic security for the wiretap channel," in Advances in Cryptology – CRYPTO 2012, edited by R. Safavi-Naini and R. Canetti, Lecture Notes in Computer Science Vol. 7417 (Springer, Berlin, Heidelberg, 2012).
7. M. Hayashi, "Upper bounds of eavesdropper's performances in finite-length code with the decoy method," Phys. Rev. A 76, 012329 (2007).
8. R. Ahlswede and G. Dueck, "Identification via channels," IEEE Trans. Inf. Theory 35(1), 15–29 (1989).
9. R. Ahlswede and Z. Zhang, "New directions in the theory of identification via channels," IEEE Trans. Inf. Theory 41(4), 1040–1050 (1995).
10. H. Boche, C. Deppe, and A. Winter, "Secure and robust identification via classical-quantum channels," IEEE Trans. Inf. Theory 65(10), 6734–6749 (2019).
11. M. Hayashi, "General non-asymptotic and asymptotic formulas in channel resolvability and identification capacity and its application to wire-tap channel," IEEE Trans. Inf. Theory 52(4), 1562–1575 (2006).
12. R. Bassoli, H. Boche, C. Deppe, R. Ferrara, F. H. P. Fitzek, G. Janssen, and S. Saeedinaeeni, "Quantum communication networks," in Foundations in Signal Processing, Communications and Networking (Springer-Verlag, 2021), Vol. 23, 1st ed.
13. M. Hayashi, "Quantum wiretap channel with non-uniform random number and its exponent and equivocation rate of leaked information," IEEE Trans. Inf. Theory 61(10), 5595–5622 (2015).
14. M. Wiese and H. Boche, "Semantic security via seeded modular coding schemes and Ramanujan graphs," IEEE Trans. Inf. Theory 67(1), 52–80 (2021).
15. M. Hayashi and R. Matsumoto, "Secure multiplex coding with dependent and non-uniform multiple messages," IEEE Trans. Inf. Theory 62(5), 2355–2409 (2016).
16. J. M. Renes and R. Renner, "Noisy channel coding via privacy amplification and information reconciliation," IEEE Trans. Inf. Theory 57(11), 7377–7385 (2011).
17. M. Hayashi, "Exponential decreasing rate of leaked information in universal random privacy amplification," IEEE Trans. Inf. Theory 57(6), 3989–4001 (2011).
18. S. Guha, J. H. Shapiro, and B. I. Erkmen, "Capacity of the bosonic wiretap channel and the entropy photon-number inequality," in 2008 IEEE International Symposium on Information Theory (IEEE, 2008), pp. 91–95.
19. C. H. Bennett, G. Brassard, and J.-M. Robert, "Privacy amplification by public discussion," SIAM J. Comput. 17(2), 210–229 (1988).
20. R. Bhatia, Matrix Analysis (Springer-Verlag, New York, 1997).
21. M. Nielsen and I. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2000).
22. M. Wilde, Quantum Information Theory (Cambridge University Press, 2013).
23. D. Petz, "Quasi-entropies for finite quantum systems," Rep. Math. Phys. 23, 57–65 (1986).
24. F. Buscemi and N. Datta, "The quantum capacity of channels with arbitrarily correlated noise," IEEE Trans. Inf. Theory 56(3), 1447–1460 (2010).
25. M. Mosonyi and N. Datta, "Generalized relative entropies and the capacity of classical-quantum channels," J. Math. Phys. 50, 072104 (2009).
26. T. Ando, "Convexity of certain maps on positive definite matrices and applications to Hadamard products," Linear Algebra Appl. 26, 203–241 (1979).
27. K. M. R. Audenaert and N. Datta, "α-z-Rényi relative entropies," J. Math. Phys. 56, 022202 (2015).
28. R. Ahlswede, I. Bjelaković, H. Boche, and J. Nötzel, "Quantum capacity under adversarial quantum noise: Arbitrarily varying quantum channels," Commun. Math. Phys. 317(1), 103–156 (2013).
29. H. Boche and J. Nötzel, "Arbitrarily small amounts of correlation for arbitrarily varying quantum channels," J. Math. Phys. 54(11), 112202 (2013).
30. R. Ahlswede, "Elimination of correlation in random codes for arbitrarily varying channels," Z. Wahrscheinlichkeitstheorie Verw. Geb. 44, 159–175 (1978).
31. R. Ahlswede and V. Blinovsky, "Classical capacity of classical-quantum arbitrarily varying channels," IEEE Trans. Inf. Theory 53(2), 526–533 (2007).
32. H. Boche, M. Cai, C. Deppe, and J. Nötzel, "Classical-quantum arbitrarily varying wiretap channel: Ahlswede dichotomy, positivity, resources, super activation," Quantum Inf. Process. 15(11), 4853–4895 (2016).
33. C. H. Bennett and G. Brassard, "Quantum cryptography: Public key distribution and coin tossing," Theor. Comput. Sci. 560(1), 7 (2014).
34. A. S. Holevo, "Complementary channels and the additivity problem," Theory Probab. Appl. 51, 133–143 (2005).
35. V. Paulsen, Completely Bounded Maps and Operator Algebras (Cambridge University Press, 2003).
36. H. Boche, M. Cai, C. Deppe, and J. Nötzel, "Classical-quantum arbitrarily varying wiretap channel: Common randomness assisted code and continuity," Quantum Inf. Process. 16(1), 35 (2016).
37. M. Hayashi, Quantum Information Theory (Springer-Verlag, Berlin, Heidelberg, 2017).
38. M. Hayashi, "Tight exponential analysis of universally composable privacy amplification and its applications," IEEE Trans. Inf. Theory 59(11), 7728–7774 (2013).
39. M. Hayashi, "Precise evaluation of leaked information with secure randomness extraction in the presence of quantum attacker," Commun. Math. Phys. 333(1), 335–350 (2015).
40. P. Brémaud, Markov Chains, Gibbs Fields, Monte Carlo Simulation, and Queues (Springer-Verlag, 1999).
41. A. Vazquez-Castro and M. Hayashi, "Physical layer security for RF satellite channels in the finite-length regime," IEEE Trans. Inf. Forensics Secur. 14(4), 981–993 (2018).
42. M. Wiese and H. Boche, "Mosaics of combinatorial designs for information-theoretic security," Des. Codes Cryptogr. 90, 593–632 (2022).
43. A. S. Holevo, "The capacity of quantum channel with general signal states," IEEE Trans. Inf. Theory 44, 269–273 (1998).
44. B. Schumacher and M. A. Nielsen, "Quantum data processing and error correction," Phys. Rev. A 54, 2629 (1996).
45. B. Schumacher and M. D. Westmoreland, "Sending classical information via noisy quantum channels," Phys. Rev. A 56, 131–138 (1997).
46. A. S. Holevo, M. Sohma, and O. Hirota, "Capacity of quantum Gaussian channels," Phys. Rev. A 59, 1820 (1999).
47. G. Ruggeri and S. Mancini, "Privacy of a lossy bosonic memory channel," Phys. Lett. A 362(5–6), 340–343 (2007).