The theory of open quantum systems is one of the most essential tools for the development of quantum technologies. Furthermore, the Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad) master equation plays a key role, as it is the most general generator of Markovian dynamics in quantum systems. In this paper, we present this equation together with its derivation and methods for solving it. The presentation aims to be as self-contained and straightforward as possible, so that it is useful to readers with no previous knowledge of this field.

Open quantum system techniques are vital for many studies in quantum mechanics.1–3 This happens because closed quantum systems are just an idealization of real systems (the same happens with closed classical systems), as in Nature nothing can be isolated. In practical problems, the interaction of the system of interest with the environment cannot be avoided and we require an approach in which the environment can be effectively removed from the equations of motion.

The general problem addressed by open quantum theory is sketched in Fig. 1. In the most general picture, we have a total system that constitutes a closed quantum system by itself. We are mostly interested in a subsystem of the total one (we call it just "system" instead of "total system"). Therefore, the whole system is divided into our system of interest and an environment. The goal of open quantum theory is to infer the equations of motion of the reduced system from the equation of motion of the total system. For practical purposes, the reduced equations of motion should be easier to solve than the full dynamics of the total system. Because of this requirement, several approximations are usually made in the derivation of the reduced dynamics.

FIG. 1.

A total system divided into the system of interest, "system," and the environment.

One particularly interesting case of study is the dynamics of a system connected to several baths modeled by a Markovian interaction. In this case, the most general quantum dynamics is generated by the Lindblad equation (also called the Gorini-Kossakowski-Sudarshan-Lindblad equation).4,5 It is difficult to overemphasize the importance of this master equation. It plays an important role in fields such as quantum optics,1,6 condensed matter,7–10 atomic physics,11,12 quantum information,13,14 decoherence,15,16 and quantum biology.17–19

The purpose of this paper is to provide basic knowledge about the Lindblad master equation. In Sec. II, the mathematical requirements are introduced, while in Sec. III, there is a brief review of quantum mechanical concepts that are required to understand the paper. Section IV includes a description of a mathematical framework, the Fock-Liouville space (FLS), that is especially useful to work on this problem. In Sec. V, we define the concept of completely positive and trace-preserving maps (CPT-maps), we derive the Lindblad master equation from two different approaches, and we discuss several properties of the equation. Finally, Sec. VI is devoted to solving the master equation using different methods. To illustrate the techniques for solving the Lindblad equation, an example consisting of a two-level system with decay is analyzed, illustrating the content of every section. The problems proposed are solved by the use of Mathematica notebooks that can be found in Ref. 20.

The primary mathematical tool in quantum mechanics is the theory of Hilbert spaces. This mathematical framework allows extending many results from finite linear vector spaces to infinite ones. In any case, this tutorial deals only with finite systems and, therefore, the expressions "Hilbert space" and "linear space" are equivalent. We assume that the reader is skilled in operating in Hilbert spaces. To deepen one's knowledge of Hilbert spaces, we recommend the book by Debnath and Mikusiński.21 If the reader needs a brief review of the main concepts required for understanding this paper, we may recommend Nielsen and Chuang's quantum computing book.22 Some basic knowledge of calculus, such as integration, differentiation, and the solution of simple differential equations, is also required. To help the readers, we have made a glossary of the most frequently used mathematical terms. It can also be used as a checklist of terms that the reader should be familiar with.

  • $\mathcal{H}$ represents a Hilbert space, usually the space of pure states of a system.

  • $|\psi\rangle \in \mathcal{H}$ represents a vector of the Hilbert space $\mathcal{H}$ (a column vector).

  • $\langle\psi| \in \mathcal{H}^{\dagger}$ represents a vector of the dual Hilbert space of $\mathcal{H}$ (a row vector).

  • $\langle\psi|\phi\rangle \in \mathbb{C}$ is the scalar product of vectors $|\psi\rangle$ and $|\phi\rangle$.

  • $\| |\psi\rangle \|$ is the norm of vector $|\psi\rangle$: $\| |\psi\rangle \| \equiv \sqrt{\langle\psi|\psi\rangle}$.

  • $B(\mathcal{H})$ represents the space of bounded operators acting on the Hilbert space, $B:\mathcal{H}\to\mathcal{H}$.

  • $\mathbb{1}_{\mathcal{H}} \in B(\mathcal{H})$ is the identity operator of the Hilbert space $\mathcal{H}$, s.t. $\mathbb{1}_{\mathcal{H}}|\psi\rangle = |\psi\rangle$, $\forall\, |\psi\rangle \in \mathcal{H}$.

  • $|\psi\rangle\langle\phi| \in B(\mathcal{H})$ is the operator such that $|\psi\rangle\langle\phi|\,|\varphi\rangle = \langle\phi|\varphi\rangle\,|\psi\rangle$, $\forall\, |\varphi\rangle \in \mathcal{H}$.

  • $O^{\dagger} \in B(\mathcal{H})$ is the Hermitian conjugate of the operator $O \in B(\mathcal{H})$.

  • $U \in B(\mathcal{H})$ is a unitary operator iff $UU^{\dagger} = U^{\dagger}U = \mathbb{1}$.

  • $H \in B(\mathcal{H})$ is a Hermitian operator iff $H = H^{\dagger}$.

  • $A \in B(\mathcal{H})$ is a positive operator ($A \geq 0$) iff $\langle\phi|A|\phi\rangle \geq 0$, $\forall\, |\phi\rangle \in \mathcal{H}$.

  • $P \in B(\mathcal{H})$ is a projector iff $PP = P$.

  • $\mathrm{Tr}[B]$ represents the trace of operator $B$.

  • $\rho(\mathcal{H})$ represents the space of density matrices, meaning the space of bounded, positive operators acting on $\mathcal{H}$ with trace 1.

  • $|\rho\rangle\rangle$ is a vector in the Fock-Liouville space.

  • $\langle\langle A|B\rangle\rangle \equiv \mathrm{Tr}\left[A^{\dagger}B\right]$ is the scalar product of operators $A,B \in B(\mathcal{H})$ in the Fock-Liouville space.

  • $\tilde{\mathcal{L}}$ is the matrix representation of a superoperator in the Fock-Liouville space.

The purpose of this chapter is to refresh the main concepts of quantum mechanics necessary to understand the Lindblad master equation. Of course, this is NOT a full quantum mechanics course. If a reader has no background in this field, just reading this chapter would be insufficient to understand the remainder of this tutorial. Therefore, if the reader is unsure of his/her background, we recommend going first through a quantum mechanics course or reading an introductory book carefully. There are many great quantum mechanics books on the market. For beginners, we recommend Sakurai's book23 or Nielsen and Chuang's quantum computing book.22 For more advanced students, looking for a solid mathematical description of quantum mechanics methods, we recommend Galindo and Pascual.24 Finally, for a more philosophical discussion, you should go to Peres' book.25

We start by stating the quantum mechanics postulates that we need to understand the derivation and application of the Lindblad master equation. The first postulate is related to the concept of a quantum state.

Postulate 1.

Associated with any isolated physical system, there is a complex Hilbert space $\mathcal{H}$, known as the state space of the system. The state of the system is entirely described by a state vector, which is a unit vector of the Hilbert space ($|\psi\rangle \in \mathcal{H}$).

As quantum mechanics is a general theory (or a set of theories), it does not tell us which is the proper Hilbert space for each system. This is usually done system by system. A natural question to ask is whether there is a one-to-one correspondence between unit vectors and physical states, meaning whether every unit vector corresponds to a physical state. This is resolved by the following corollary, which is a primary ingredient of quantum computation theory (see Ref. 22, Chapter 7).

Corollary 1.

All unit vectors of a finite Hilbert space correspond to possible physical states of a system.

Unit vectors are also called pure states. If we know the pure state of a system, we have all physical information about it and we can calculate the probabilistic outcomes of any potential measurement (see the next postulate). This is a very improbable situation, as experimental settings are not perfect, and in most cases, we have only imperfect information about the state. Most generally, we may know that a quantum system can be in one state of a set $\{|\psi_i\rangle\}$ with probabilities $p_i$. Therefore, our knowledge of the system is given by an ensemble of pure states described by the set $\{|\psi_i\rangle, p_i\}$. If more than one $p_i$ is different from zero, the state is not pure anymore and it is called a mixed state. The mathematical tool that describes our knowledge of the system, in this case, is the density operator (or density matrix),
$$\rho \equiv \sum_i p_i\, |\psi_i\rangle\langle\psi_i|.$$
(1)
Density matrices are bounded operators that fulfill two mathematical conditions,
  1. A density matrix ρ has unit trace Tr[ρ]=1.

  2. A density matrix is a positive matrix ρ > 0.

Any operator fulfilling these two properties is considered a density operator. It can be proved trivially that density matrices are also Hermitian.

If we are given a density matrix, it is easy to verify if it belongs to a pure or a mixed state. For pure states, and only for them, Tr[ρ2] = Tr[ρ] = 1. Therefore, if Tr[ρ2] < 1, the system is mixed. The quantity Tr[ρ2] is called the purity of the state, and it fulfills the bounds $\frac{1}{d} \leq \mathrm{Tr}\left[\rho^2\right] \leq 1$, with $d$ the dimension of the Hilbert space.

If we fix an arbitrary basis $\{|i\rangle\}$ of the Hilbert space, the density matrix in this basis is written as $\rho = \sum_{i,j} \rho_{ij}\, |i\rangle\langle j|$ or
$$\rho = \begin{pmatrix} \rho_{00} & \rho_{01} & \cdots & \rho_{0N} \\ \rho_{10} & \rho_{11} & \cdots & \rho_{1N} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{N0} & \rho_{N1} & \cdots & \rho_{NN} \end{pmatrix},$$
(2)
where the diagonal elements are called populations ($\rho_{ii} \in \mathbb{R}_0^{+}$ and $\sum_i \rho_{ii} = 1$), while the off-diagonal elements are called coherences ($\rho_{ij} \in \mathbb{C}$ and $\rho_{ij} = \rho_{ji}^{*}$). Note that this notation is basis-dependent.

Example 1: State of a two-level system (qubit).

The Hilbert space of a two-level system is just the two-dimensional linear space $\mathcal{H}_2$. Examples of this kind of system are spin-$\frac{1}{2}$ particles and two-level atoms. We can define a basis of it by the orthonormal vectors $\{|0\rangle, |1\rangle\}$. A pure state of the system would be any unit vector of $\mathcal{H}_2$. It can always be expressed as |ψ⟩ = a|0⟩ + b|1⟩ with $a,b \in \mathbb{C}$ s.t. |a|2 + |b|2 = 1.

A mixed state is therefore represented by a positive, unit-trace operator $\rho \in O(\mathcal{H}_2)$,
$$\rho = \begin{pmatrix} \rho_{00} & \rho_{01} \\ \rho_{10} & \rho_{11} \end{pmatrix} = \rho_{00}|0\rangle\langle 0| + \rho_{01}|0\rangle\langle 1| + \rho_{10}|1\rangle\langle 0| + \rho_{11}|1\rangle\langle 1|,$$
(3)
and it should fulfill ρ00 + ρ11 = 1 and $\rho_{01} = \rho_{10}^{*}$. End of example.

Once we know the state of a system, it is natural to ask about the possible outcomes of experiments (see Ref. 23, Sec. 1.4).

Postulate 2.
All possible measurements in a quantum system are described by a Hermitian operator or observable. Due to the spectral theorem, we know that any observable O has a spectral decomposition of the form (for simplicity, we assume a nondegenerate spectrum)
$$O = \sum_i a_i\, |a_i\rangle\langle a_i|,$$
(4)
with $a_i \in \mathbb{R}$ the eigenvalues of the observable and $|a_i\rangle$ their corresponding eigenvectors. The probability of obtaining the result $a_i$ when measuring the property described by observable O in a state $|\psi\rangle$ is given by
$$P(a_i) = \left|\langle\psi|a_i\rangle\right|^2.$$
(5)
After the measurement, we obtain the state $|a_i\rangle$ if the outcome $a_i$ was measured. This is called the postmeasurement state.

This postulate allows us to calculate the possible outputs of a system, the probability of these outcomes, as well as the after-measurement state. A measurement usually changes the state, as it can only remain unchanged if it was already in an eigenstate of the observable.

It is possible to calculate the expectation value of the outcome of a measurement defined by operator O in a state |ψ⟩ by just applying the simple formula,
$$\langle O\rangle = \langle\psi|O|\psi\rangle.$$
(6)
With a little algebra, we can translate this postulate to mixed states. In this case, the probability of obtaining an output ai that corresponds to an eigenvector |ai⟩ is
$$P(a_i) = \mathrm{Tr}\left[|a_i\rangle\langle a_i|\,\rho\right],$$
(7)
and the expectation value of operator O is
$$\langle O\rangle = \mathrm{Tr}\left[O\rho\right].$$
(8)
Example 2: Measurement in a two-level system.
A possible test to perform in our minimal model is to measure the energetic state of a system, assuming that both states have a different energy. The observable corresponding to this measurement would be
$$H = E_0\,|0\rangle\langle 0| + E_1\,|1\rangle\langle 1|.$$
(9)
This operator has two eigenvalues $\{E_0, E_1\}$ with two corresponding eigenvectors $\{|0\rangle, |1\rangle\}$.

If we have a pure state |ψ⟩ = a|0⟩ + b|1⟩, the probability of measuring the energy E0 would be P(E0) = |⟨0|ψ⟩|2 = |a|2. The probability of finding E1 would be P(E1) = |⟨1|ψ⟩|2 = |b|2. The expected value of the measurement is ⟨H⟩ = E0|a|2 + E1|b|2.

In the more general case of having a mixed state ρ = ρ00|0⟩⟨0| + ρ01|0⟩⟨1| + ρ10|1⟩⟨0| + ρ11|1⟩⟨1|, the probability of finding the ground-state energy is $P(E_0) = \mathrm{Tr}\left[|0\rangle\langle 0|\,\rho\right] = \rho_{00}$ and the expected value of the energy would be $\langle H\rangle = \mathrm{Tr}\left[H\rho\right] = E_0\rho_{00} + E_1\rho_{11}$. End of example.
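As a minimal numerical check of Eqs. (7) and (8), the following Python sketch computes the measurement probabilities and the energy expectation value for the states of this example. The library choice (NumPy) and the numerical values of a, b, E0, and E1 are our own illustrative assumptions, not part of the original example.

```python
import numpy as np

# Computational basis of the two-level system
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])

# Hypothetical numbers chosen only for illustration
a, b = np.sqrt(0.3), np.sqrt(0.7)          # |a|^2 + |b|^2 = 1
E0, E1 = 0.0, 1.0

psi = a * ket0 + b * ket1                   # pure state |psi> = a|0> + b|1>
rho = psi @ psi.conj().T                    # its density matrix

H = E0 * ket0 @ ket0.conj().T + E1 * ket1 @ ket1.conj().T   # observable of Eq. (9)
P0 = ket0 @ ket0.conj().T                   # projector |0><0|

# Eq. (7): P(E0) = Tr[|0><0| rho];  Eq. (8): <H> = Tr[H rho]
print(np.trace(P0 @ rho).real)              # -> |a|^2 = 0.3
print(np.trace(H @ rho).real)               # -> E0|a|^2 + E1|b|^2 = 0.7
```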

Another natural question to ask is how quantum systems evolve. The time-evolution of a pure state of a closed quantum system is given by the Schrödinger equation (see Ref. 24, Sec. 2.9).

Postulate 3.
Time evolution of a pure state of a closed quantum system is given by the Schrödinger equation,
$$\frac{d}{dt}|\psi(t)\rangle = -iH\,|\psi(t)\rangle,$$
(10)
where H is the Hamiltonian of the system and it is a Hermitian operator of the Hilbert space of the system state (from now on we avoid including Planck’s constant by selecting the units such that ℏ = 1).

The Hamiltonian of a system is the operator corresponding to its energy, and it can be nontrivial to realize.

The Schrödinger equation can be formally solved in the following way. If at t = 0, the state of a system is given by |ψ(0)⟩; at time t, it will be
$$|\psi(t)\rangle = e^{-iHt}\,|\psi(0)\rangle.$$
(11)
As H is a Hermitian operator, the operator $U = e^{-iHt}$ is unitary. This gives us another way of phrasing Postulate 3.

Postulate 3′.
The evolution of a closed system is given by a unitary operator of the Hilbert space of the system,
$$|\psi(t)\rangle = U\,|\psi(0)\rangle,$$
(12)
with $U \in B(\mathcal{H})$ s.t. $UU^{\dagger} = U^{\dagger}U = \mathbb{1}$.

It is easy to prove that unitary operators preserve the norm of vectors and, therefore, transform pure states into pure states. As we did with the state of a system, it is reasonable to wonder if any unitary operator corresponds to the evolution of a real physical system. The answer is yes.

Lemma 1.

All unitary evolutions of a state belonging to a finite Hilbert space can be constructed in several physical realizations such as photons and cold atoms.

The proof of this lemma can be found in Ref. 22.

The time evolution of a mixed state can be calculated just by combining Eqs. (10) and (1), giving the von Neumann equation
$$\dot{\rho} = -i\left[H,\rho\right] \equiv \mathcal{L}\rho,$$
(13)
where we have used the commutator $\left[A,B\right] \equiv AB - BA$ and $\mathcal{L}$ is the so-called Liouvillian superoperator.
It is easy to prove that the Hamiltonian dynamics does not change the purity of a system,
$$\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right] = \mathrm{Tr}\left[\frac{d\rho^2}{dt}\right] = \mathrm{Tr}\left[2\rho\dot{\rho}\right] = -2i\,\mathrm{Tr}\left[\rho H\rho - \rho\rho H\right] = 0,$$
(14)
where we have used the cyclic property of the trace. This result illustrates that the degree of mixture of a state does not change under Hamiltonian evolution.

Example 3: Time evolution of a two-level system.
The evolution of our isolated two-level system is described by its Hamiltonian,
$$H_{\text{free}} = E_0\,|0\rangle\langle 0| + E_1\,|1\rangle\langle 1|.$$
(15)
As the states |0⟩ and |1⟩ are Hamiltonian eigenstates, if at t = 0 the atom is in the excited state, |ψ(0)⟩ = |1⟩, after a time t the state would be $|\psi(t)\rangle = e^{-iHt}|1\rangle = e^{-iE_1 t}|1\rangle$.
As the system was already in an eigenvector of the Hamiltonian, its time-evolution consists only in adding a phase to the state, without changing its physical properties. (If an excited state does not change, why do atoms decay?) Without losing any generality, we can fix the energy of the ground state as zero, obtaining
$$H_{\text{free}} = E\,|1\rangle\langle 1|,$$
(16)
with $E \equiv E_1$.
To make the model more interesting, we can include a driving that coherently switches between both states. The total Hamiltonian would be then
$$H = E\,|1\rangle\langle 1| + \Omega\left(|0\rangle\langle 1| + |1\rangle\langle 0|\right),$$
(17)
where Ω is the frequency of driving.

By using the von Neumann equation (13), we can calculate the populations $\rho_{00}$ and $\rho_{11}$ as a function of time. The system is then driven between the two states, and the populations present Rabi oscillations, as shown in Fig. 2. End of example.
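A minimal numerical sketch of this example, assuming the parameter values Ω = 1 and E = 1 quoted in the caption of Fig. 2 (the time grid and the NumPy/SciPy implementation are our own choices): we propagate |ψ(0)⟩ = |1⟩ with $e^{-iHt}$ and record the populations, which display the Rabi oscillations of Fig. 2.

```python
import numpy as np
from scipy.linalg import expm

Omega, E = 1.0, 1.0                          # parameters as in Fig. 2
H = np.array([[0.0, Omega],
              [Omega, E]], dtype=complex)    # H = E|1><1| + Omega(|0><1| + |1><0|)

psi0 = np.array([0.0, 1.0], dtype=complex)   # start in the excited state |1>

times = np.linspace(0.0, 10.0, 200)
populations = []
for t in times:
    psi_t = expm(-1j * H * t) @ psi0         # |psi(t)> = e^{-iHt}|psi(0)>
    populations.append(np.abs(psi_t) ** 2)   # (rho_00, rho_11) at time t

populations = np.array(populations)          # columns: rho_00(t), rho_11(t)
print(populations[:5])
```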

Finally, as we are interested in composite quantum systems, we need to postulate how to work with them.

FIG. 2.

Population dynamics under a quantum dynamics (parameters are Ω = 1 and E = 1). The blue line represents ρ11 and the orange one ρ00.

Postulate 4.

The state space of a composite physical system, composed of N subsystems, is the tensor product of the state spaces of its components, $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2 \otimes \cdots \otimes \mathcal{H}_N$. The state of the composite physical system is given by a unit vector of $\mathcal{H}$. Moreover, if each subsystem belonging to $\mathcal{H}_i$ is prepared in the state $|\psi_i\rangle$, the total state is given by $|\psi\rangle = |\psi_1\rangle \otimes |\psi_2\rangle \otimes \cdots \otimes |\psi_N\rangle$.

The symbol ⊗ represents the tensor product of Hilbert spaces, vectors, and operators. If we have a composite mixed state where each component is prepared in the state ρi, the total state is given by ρ = ρ1 ⊗ ρ2 ⊗⋯⊗ ρN.

States that can be expressed in the simple form |ψ⟩ = |ψ1⟩ ⊗ |ψ2⟩, in some specific basis, are very particular and are called separable states.38 In general, an arbitrary state has to be described as $|\psi\rangle = \sum_{i,j} c_{ij}\, |\psi_i\rangle \otimes |\psi_j\rangle$ (or $\rho = \sum_{i,j} c_{ij}\, \rho_i \otimes \rho_j$ for mixed states). Nonseparable states are called entangled states.

Now that we know how to compose systems, we may be interested in going the other way around. If we have a system belonging to a bipartite Hilbert space of the form $\mathcal{H} = \mathcal{H}_a \otimes \mathcal{H}_b$, we may be interested in studying some properties of the subsystem corresponding to one of the subspaces. To do so, we define the reduced density matrix. If the state of our system is described by a density matrix ρ, the reduced density operator of subsystem a is defined by the operator
$$\rho^{a} \equiv \mathrm{Tr}_b\left[\rho\right],$$
(18)
where Trb is the partial trace over subspace b and it is defined as22 
$$\mathrm{Tr}_b\left[\sum_{i,j,k,l}|a_i\rangle\langle a_j| \otimes |b_k\rangle\langle b_l|\right] \equiv \sum_{i,j}|a_i\rangle\langle a_j|\;\mathrm{Tr}\left[\sum_{k,l}|b_k\rangle\langle b_l|\right].$$
(19)
The concepts of reduced density matrix and partial trace are essential in the study of open quantum systems. If we want to calculate the equations of motion of a system affected by an environment, we should trace out this environment and deal only with the reduced density matrix of the system. This is the main idea of the theory of open quantum systems.

Example 4: Two two-level atoms.
If we have two two-level systems, the total Hilbert space is given by $\mathcal{H} = \mathcal{H}_2 \otimes \mathcal{H}_2$. A basis of this Hilbert space would be given by the set $\{|00\rangle \equiv |0\rangle_1\otimes|0\rangle_2,\ |01\rangle \equiv |0\rangle_1\otimes|1\rangle_2,\ |10\rangle \equiv |1\rangle_1\otimes|0\rangle_2,\ |11\rangle \equiv |1\rangle_1\otimes|1\rangle_2\}$. If both systems are in their ground state, we can describe the total state by the separable vector
$$|\psi_G\rangle = |00\rangle.$$
(20)
A more complex, but still separable, state can be formed if both systems are in superposition,
$$|\psi_S\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle_1 + |1\rangle_1\right)\otimes\frac{1}{\sqrt{2}}\left(|0\rangle_2 + |1\rangle_2\right) = \frac{1}{2}\left(|00\rangle + |10\rangle + |01\rangle + |11\rangle\right).$$
(21)
An entangled state would be
$$|\psi_E\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right).$$
(22)
This state cannot be separated into a direct product of states of each subsystem. If we want to obtain a reduced description of subsystem 1 (or 2), we have to use the partial trace. To do so, we first need to calculate the density matrix corresponding to the pure state $|\psi_E\rangle$,
$$\rho_E = |\psi_E\rangle\langle\psi_E| = \frac{1}{2}\left(|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|\right).$$
(23)
We can now calculate the reduced density matrix of the subsystem 1 by using the partial trace,
$$\rho_E^{(1)} = \langle 0|_2\,\rho_E\,|0\rangle_2 + \langle 1|_2\,\rho_E\,|1\rangle_2 = \frac{1}{2}\left(|0\rangle\langle 0|_1 + |1\rangle\langle 1|_1\right).$$
(24)
From this reduced density matrix, we can calculate all the measurement statistics of subsystem 1. End of example.
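The partial trace of Eq. (19) is easy to implement numerically. The following sketch (a NumPy illustration under our own conventions; the reshape-based implementation is not taken from the text) reproduces the reduced state of Eq. (24) from the entangled state of Eq. (22).

```python
import numpy as np

# Basis vectors of H2 (x) H2, ordered as {|00>, |01>, |10>, |11>}
ket00 = np.zeros(4); ket00[0] = 1.0
ket11 = np.zeros(4); ket11[3] = 1.0

psi_E = (ket00 + ket11) / np.sqrt(2.0)       # entangled state of Eq. (22)
rho_E = np.outer(psi_E, psi_E.conj())        # density matrix of Eq. (23)

def partial_trace_2(rho):
    """Trace out the second qubit of a two-qubit density matrix, as in Eq. (19)."""
    r = rho.reshape(2, 2, 2, 2)              # axes: (system, env, system, env)
    return np.einsum('ikjk->ij', r)          # sum over the environment index

print(partial_trace_2(rho_E))                # -> 0.5 * identity, as in Eq. (24)
```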

In this section, we review a useful framework for both analytical and numerical calculations. It is clear that some linear combinations of density matrices are valid density matrices (as long as they preserve positivity and trace 1). Because of that, we can create a Hilbert space of density matrices just by defining a scalar product.39 This allows us to define a linear space of matrices, converting the matrices effectively into vectors (ρ → |ρ⟩⟩). This is called the Fock-Liouville space (FLS). The scalar product of two matrices ϕ and ρ is defined as $\langle\langle\phi|\rho\rangle\rangle \equiv \mathrm{Tr}\left[\phi^{\dagger}\rho\right]$. The Liouville superoperator from Eq. (13) is now an operator acting on the Hilbert space of density matrices. The main utility of the FLS is that it allows a matrix representation of the evolution operator.

Example 5: Time evolution of a two-level atom.

The density matrix of our system can be expressed in the FLS as

$$|\rho\rangle\rangle = \begin{pmatrix} \rho_{00} \\ \rho_{01} \\ \rho_{10} \\ \rho_{11} \end{pmatrix}.$$
(25)

The time evolution of a mixed state is given by the von Neumann equation (13). The Liouvillian superoperator can now be expressed as a matrix,

$$\tilde{\mathcal{L}} = \begin{pmatrix} 0 & i\Omega & -i\Omega & 0 \\ i\Omega & iE & 0 & -i\Omega \\ -i\Omega & 0 & -iE & i\Omega \\ 0 & -i\Omega & i\Omega & 0 \end{pmatrix},$$
(26)

where each row is calculated just by observing the output of the operation $-i\left[H,\rho\right]$ in the computational basis of the space of density matrices. The time evolution of the system now corresponds to the matrix equation $\frac{d|\rho\rangle\rangle}{dt} = \tilde{\mathcal{L}}\,|\rho\rangle\rangle$, which in matrix notation would be

$$\begin{pmatrix} \dot{\rho}_{00} \\ \dot{\rho}_{01} \\ \dot{\rho}_{10} \\ \dot{\rho}_{11} \end{pmatrix} = \begin{pmatrix} 0 & i\Omega & -i\Omega & 0 \\ i\Omega & iE & 0 & -i\Omega \\ -i\Omega & 0 & -iE & i\Omega \\ 0 & -i\Omega & i\Omega & 0 \end{pmatrix}\begin{pmatrix} \rho_{00} \\ \rho_{01} \\ \rho_{10} \\ \rho_{11} \end{pmatrix}.$$
(27)

End of example.
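The matrix representation of Eq. (26) can also be generated automatically. The sketch below (our own NumPy illustration, not part of the original notebooks) uses the row-major ordering (ρ00, ρ01, ρ10, ρ11) of Eq. (25), for which vec(AρB) = (A ⊗ B^T) vec(ρ), so the von Neumann generator becomes a 4 × 4 matrix.

```python
import numpy as np
from scipy.linalg import expm

Omega, E = 1.0, 1.0
H = np.array([[0.0, Omega],
              [Omega, E]], dtype=complex)
I2 = np.eye(2)

# Row-major vectorization |rho>> = (rho_00, rho_01, rho_10, rho_11)^T, so that
# vec(A rho B) = (A kron B^T) vec(rho).  The von Neumann generator is then
L_tilde = -1j * (np.kron(H, I2) - np.kron(I2, H.T))
print(np.round(L_tilde, 3))                  # matrix representation of Eq. (26)

# Time evolution in the FLS: d|rho>>/dt = L_tilde |rho>>
rho0 = np.array([0, 0, 0, 1], dtype=complex)  # system initially in |1><1|
rho_t = expm(L_tilde * 2.0) @ rho0            # |rho(t = 2)>> as a 4-vector
print(np.round(rho_t.reshape(2, 2), 3))       # back to matrix form
```

The same vectorization trick is reused below for the dissipative examples.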

The problem we want to study is to find the most general Markovian transformation between density matrices. Until now, we have seen that quantum systems can evolve in two ways: by a coherent evolution (Postulate 3) and by collapsing after a measurement (Postulate 2). Many efforts have been made to unify these two ways of evolving,16 without a definite answer so far. It is reasonable to ask what the most general transformation that can be performed on a quantum system is, and what dynamical equation describes this transformation.

We are looking for maps that transform density matrices into density matrices. We define $\rho(\mathcal{H})$ as the space of all density matrices in the Hilbert space $\mathcal{H}$. Therefore, we are looking for a map of this space onto itself, $\mathcal{V}:\rho(\mathcal{H})\to\rho(\mathcal{H})$. To ensure that the output of the map is a density matrix, it should fulfill the following properties:

  • Trace preserving: $\mathrm{Tr}\left[\mathcal{V}A\right] = \mathrm{Tr}\left[A\right]$, $\forall\, A \in O(\mathcal{H})$.

  • Completely positive (see below).

Any map that fulfills these two properties is called a completely positive and trace-preserving map (CPT-map). The first property is quite apparent, and it does not require further discussion. The second one is a little more complicated, and it requires an intermediate definition.

Definition 1.

A map $\mathcal{V}$ is positive iff $\forall\, A \in B(\mathcal{H})$ s.t. $A \geq 0$, it holds that $\mathcal{V}A \geq 0$.

This definition is based on the idea that, as density matrices are positive, any physical map should transform positive matrices into positive matrices. One could naively think that this condition must be sufficient to guarantee the physical validity of a map; it is not. As we know, there exist composite systems, and our density matrix could be the partial trace of a more complicated state. Because of that, we need to impose a more general condition.

Definition 2.

A map $\mathcal{V}$ is completely positive iff $\forall\, n \in \mathbb{N}$, $\mathcal{V}\otimes\mathbb{1}_n$ is positive.

To prove that not all positive maps are completely positive, we need a counterexample. A canonical example of an operation that is positive but fails to be completely positive is matrix transposition. If we have a Bell state of the form $|\psi_B\rangle = \frac{1}{\sqrt{2}}\left(|01\rangle + |10\rangle\right)$, its density matrix can be expressed as
$$\rho_B = \frac{1}{2}\left(|0\rangle\langle 0|\otimes|1\rangle\langle 1| + |1\rangle\langle 1|\otimes|0\rangle\langle 0| + |0\rangle\langle 1|\otimes|1\rangle\langle 0| + |1\rangle\langle 0|\otimes|0\rangle\langle 1|\right),$$
(28)
with a matrix representation
$$\rho_B = \frac{1}{2}\left[\begin{pmatrix}1&0\\0&0\end{pmatrix}\otimes\begin{pmatrix}0&0\\0&1\end{pmatrix} + \begin{pmatrix}0&0\\0&1\end{pmatrix}\otimes\begin{pmatrix}1&0\\0&0\end{pmatrix} + \begin{pmatrix}0&0\\1&0\end{pmatrix}\otimes\begin{pmatrix}0&1\\0&0\end{pmatrix} + \begin{pmatrix}0&1\\0&0\end{pmatrix}\otimes\begin{pmatrix}0&0\\1&0\end{pmatrix}\right].$$
(29)
A little algebra shows that the full form of this matrix is
$$\rho_B = \frac{1}{2}\begin{pmatrix} 0&0&0&0 \\ 0&1&1&0 \\ 0&1&1&0 \\ 0&0&0&0 \end{pmatrix},$$
(30)
and it is positive.
It is easy to check that the transformation $\mathbb{1}\otimes T_2$, meaning that we transpose the matrix of the second subsystem, leads to a nonpositive matrix,
$$\left(\mathbb{1}\otimes T_2\right)\rho_B = \frac{1}{2}\left[\begin{pmatrix}1&0\\0&0\end{pmatrix}\otimes\begin{pmatrix}0&0\\0&1\end{pmatrix} + \begin{pmatrix}0&0\\0&1\end{pmatrix}\otimes\begin{pmatrix}1&0\\0&0\end{pmatrix} + \begin{pmatrix}0&0\\1&0\end{pmatrix}\otimes\begin{pmatrix}0&0\\1&0\end{pmatrix} + \begin{pmatrix}0&1\\0&0\end{pmatrix}\otimes\begin{pmatrix}0&1\\0&0\end{pmatrix}\right].$$
(31)
The total matrix is
$$\left(\mathbb{1}\otimes T_2\right)\rho_B = \frac{1}{2}\begin{pmatrix} 0&0&0&1 \\ 0&1&0&0 \\ 0&0&1&0 \\ 1&0&0&0 \end{pmatrix},$$
(32)
which has $-\frac{1}{2}$ as an eigenvalue. This example illustrates how the nonseparability of quantum mechanics restricts the operations we can perform on a subsystem. By imposing these two conditions, we can derive a unique master equation as the generator of any possible Markovian CPT-map.
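This counterexample can be verified numerically. The following sketch (our own NumPy illustration) builds ρB, applies the partial transposition of the second qubit, and shows that the resulting matrix has a negative eigenvalue.

```python
import numpy as np

# Bell state |psi_B> = (|01> + |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi_B = np.array([0, 1, 1, 0]) / np.sqrt(2.0)
rho_B = np.outer(psi_B, psi_B)

# Partial transposition of the second qubit: swap its row and column indices
rho_pt = rho_B.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

print(np.linalg.eigvalsh(rho_B))    # eigenvalues of rho_B: all >= 0
print(np.linalg.eigvalsh(rho_pt))   # one negative eigenvalue (-1/2): not positive
```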

The most common derivation of the Lindblad master equation is based on open quantum theory. The Lindblad equation is then an effective motion equation for a subsystem that belongs to a more complicated system. This derivation can be found in several textbooks such as Breuer and Petruccione’s2 as well as Gardiner and Zoller’s.1 Here, we follow the derivation presented in Ref. 26. Our initial point is displayed in Fig. 3. A total system belonging to a Hilbert space HT is divided into our system of interest, belonging to a Hilbert space H, and the environment living in HE.

FIG. 3.

A total system (belonging to a Hilbert space HT, with states described by density matrices ρT, and with dynamics determined by a Hamiltonian HT) divided into the system of interest, "system," and the environment.

The evolution of the total system is given by the von Neumann equation (13),

$$\dot{\rho}_T(t) = -i\left[H_T,\rho_T(t)\right].$$
(33)

As we are interested in the dynamics of the system without the environment, we trace over the environment degrees of freedom to obtain the reduced density matrix of the system, $\rho(t) = \mathrm{Tr}_E\left[\rho_T\right]$. To separate the effect of the total Hamiltonian on the system and the environment, we divide it in the form $H_T = H\otimes\mathbb{1}_E + \mathbb{1}\otimes H_E + \alpha H_I$, with $H \in B(\mathcal{H})$, $H_E \in B(\mathcal{H}_E)$, and $H_I \in B(\mathcal{H}_T)$, and with α a measure of the strength of the system-environment interaction. Therefore, we have a part acting on the system, a part acting on the environment, and the interaction term. Without losing any generality, the interaction term can be decomposed in the following way:

$$H_I = \sum_i S_i\otimes E_i,$$
(34)

with $S_i \in B(\mathcal{H})$ and $E_i \in B(\mathcal{H}_E)$.40

To better describe the dynamics of the system, it is useful to work in the interaction picture (see Ref. 24 for a detailed explanation about Schrödinger, Heisenberg, and interaction pictures). In the interaction picture, density matrices evolve with time due to the interaction Hamiltonian, while operators evolve with the system and environment Hamiltonian. An arbitrary operator OB(HT) is represented in this picture by the time-dependent operator Ô(t), and its time evolution is

$$\hat{O}(t) = e^{i\left(H+H_E\right)t}\,O\,e^{-i\left(H+H_E\right)t}.$$
(35)

The time evolution of the total density matrix is given in this picture by

$$\frac{d\hat{\rho}_T(t)}{dt} = -i\alpha\left[\hat{H}_I(t),\hat{\rho}_T(t)\right].$$
(36)

This equation can be easily integrated to give

$$\hat{\rho}_T(t) = \hat{\rho}_T(0) - i\alpha\int_0^t ds\,\left[\hat{H}_I(s),\hat{\rho}_T(s)\right].$$
(37)

By this formula, we can obtain the exact solution, but it still has the complication of calculating an integral in the total Hilbert space. It is also troublesome that the state $\hat{\rho}_T(t)$ depends on the integral of the density matrix over all previous times. To avoid that, we can insert Eq. (37) into Eq. (36), giving

$$\frac{d\hat{\rho}_T(t)}{dt} = -i\alpha\left[\hat{H}_I(t),\hat{\rho}_T(0)\right] - \alpha^2\int_0^t ds\,\left[\hat{H}_I(t),\left[\hat{H}_I(s),\hat{\rho}_T(s)\right]\right].$$
(38)

By applying this method one more time, we obtain

$$\frac{d\hat{\rho}_T(t)}{dt} = -i\alpha\left[\hat{H}_I(t),\hat{\rho}_T(0)\right] - \alpha^2\int_0^t ds\,\left[\hat{H}_I(t),\left[\hat{H}_I(s),\hat{\rho}_T(t)\right]\right] + O(\alpha^3).$$
(39)

After this substitution, the integration of the previous states of the system is included only in the terms that are $O(\alpha^3)$ or higher. At this point, we perform our first approximation by considering that the strength of the interaction between the system and the environment is small. Therefore, we can neglect the higher-order terms in Eq. (39). Under this approximation, we have

$$\frac{d\hat{\rho}_T(t)}{dt} = -i\alpha\left[\hat{H}_I(t),\hat{\rho}_T(0)\right] - \alpha^2\int_0^t ds\,\left[\hat{H}_I(t),\left[\hat{H}_I(s),\hat{\rho}_T(t)\right]\right].$$
(40)

We are interested in finding an equation of motion for ρ, so we trace over the environment degrees of freedom,

$$\frac{d\hat{\rho}(t)}{dt} = \mathrm{Tr}_E\left[\frac{d\hat{\rho}_T(t)}{dt}\right] = -i\alpha\,\mathrm{Tr}_E\left[\hat{H}_I(t),\hat{\rho}_T(0)\right] - \alpha^2\int_0^t ds\,\mathrm{Tr}_E\left[\hat{H}_I(t),\left[\hat{H}_I(s),\hat{\rho}_T(t)\right]\right].$$
(41)

This is not a closed time-evolution equation for $\hat{\rho}(t)$ because the time derivative still depends on the full density matrix $\hat{\rho}_T(t)$. To proceed, we need to make two more assumptions. First, we assume that at t = 0 the system and the environment are in a separable state of the form ρT(0) = ρ(0) ⊗ ρE(0). This means that there are no correlations between the system and the environment. This may be the case if the system and the environment have not interacted at previous times or if the correlations between them are short-lived. Second, we assume that the initial state of the environment is thermal, meaning that it is described by a density matrix of the form $\rho_E(0) = e^{-H_E/T}/\mathrm{Tr}\left[e^{-H_E/T}\right]$, with T the temperature and taking the Boltzmann constant as kB = 1. By using these assumptions, and the expansion of $H_I$ (34), we can calculate an expression for the first term on the rhs of Eq. (41),

$$\mathrm{Tr}_E\left[\hat{H}_I(t),\hat{\rho}_T(0)\right] = \sum_i\left(\hat{S}_i(t)\hat{\rho}(0)\,\mathrm{Tr}_E\left[\hat{E}_i(t)\hat{\rho}_E(0)\right] - \hat{\rho}(0)\hat{S}_i(t)\,\mathrm{Tr}_E\left[\hat{\rho}_E(0)\hat{E}_i(t)\right]\right).$$
(42)

To calculate the explicit value of this term, we may use that $\langle E_i\rangle \equiv \mathrm{Tr}\left[E_i\,\rho_E(0)\right] = 0$ for all values of i. This looks like a strong assumption, but it is not. If our total Hamiltonian does not fulfill it, we can always rewrite it as $H_T = \left(H + \alpha\sum_i\langle E_i\rangle S_i\right) + H_E + \alpha H_I'$, with $H_I' = \sum_i S_i\otimes\left(E_i - \langle E_i\rangle\right)$. It is clear that now $\langle E_i'\rangle = 0$, with $E_i' \equiv E_i - \langle E_i\rangle$, and the system Hamiltonian is changed just by the addition of an energy shift that does not affect the system dynamics. Because of that, we can assume that $\langle E_i\rangle = 0$ for all i. Using the cyclic property of the trace, it is easy to prove that the term of Eq. (42) is equal to zero and the equation of motion (41) reduces to

$$\dot{\hat{\rho}}(t) = -\alpha^2\int_0^t ds\,\mathrm{Tr}_E\left[\hat{H}_I(t),\left[\hat{H}_I(s),\hat{\rho}_T(t)\right]\right].$$
(43)

This equation still includes the entire state of the system and environment. To unravel the system from the environment, we have to make a more restrictive assumption. As we are working in the weak coupling regime, we may suppose that the system and the environment remain uncorrelated during the entire time evolution. Of course, this is only an approximation. Due to the interaction Hamiltonian, some correlations between system and environment are expected to appear. On the other hand, we may assume that the time scales of correlation ($\tau_{\text{corr}}$) and relaxation of the environment ($\tau_{\text{rel}}$) are much smaller than the typical system time scale ($\tau_{\text{sys}}$), as the coupling strength is very small ($\alpha \ll 1$). Therefore, under this strong assumption, we can assume that the environment state is always thermal and decoupled from the system state, $\hat{\rho}_T(t) = \hat{\rho}(t)\otimes\hat{\rho}_E(0)$. Equation (43) then transforms into

$$\dot{\hat{\rho}}(t) = -\alpha^2\int_0^t ds\,\mathrm{Tr}_E\left[\hat{H}_I(t),\left[\hat{H}_I(s),\hat{\rho}(t)\otimes\hat{\rho}_E(0)\right]\right].$$
(44)

The equation of motion now involves only the system degrees of freedom and is local in time. It is still non-Markovian, as it depends on the initial preparation time of the system. We can obtain a Markovian equation by realizing that the kernel of the integration decays fast enough, so that we can extend the upper limit of the integration to infinity with no real change in the outcome. By doing so, and by changing the integration variable to $s \to t-s$, we obtain the famous Redfield equation41

$$\dot{\hat{\rho}}(t) = -\alpha^2\int_0^{\infty} ds\,\mathrm{Tr}_E\left[\hat{H}_I(t),\left[\hat{H}_I(t-s),\hat{\rho}(t)\otimes\hat{\rho}_E(0)\right]\right].$$
(45)

It is known that this equation does not guarantee the positivity of the map, and it sometimes gives rise to density matrices that are nonpositive. To ensure complete positivity, we need to perform one further approximation, the rotating wave approximation. To do so, we use the spectrum of the superoperator $\tilde{H}A \equiv \left[H,A\right]$, $\forall\, A \in B(\mathcal{H})$. The eigenvectors of this superoperator form a complete basis of the space $B(\mathcal{H})$, and, therefore, we can expand the system operators from Eq. (34) in this basis,

$$S_i = \sum_{\omega} S_i(\omega),$$
(46)

where the operators Si(ω) fulfill

$$\left[H, S_i(\omega)\right] = -\omega\, S_i(\omega),$$
(47)

with ω the eigenvalues of $\tilde{H}$. Taking the Hermitian conjugate, we also obtain

$$\left[H, S_i^{\dagger}(\omega)\right] = \omega\, S_i^{\dagger}(\omega).$$
(48)

To apply this decomposition, we need to change back to the Schrödinger picture the part of the interaction Hamiltonian acting on the system's Hilbert space. This is done by the expression $\hat{S}_k(t) = e^{iHt}\,S_k\,e^{-iHt}$. By using the eigenexpansion (46), we arrive at

$$\hat{H}_I(t) = \sum_{k,\omega} e^{-i\omega t}\,S_k(\omega)\otimes\hat{E}_k(t) = \sum_{k,\omega} e^{i\omega t}\,S_k^{\dagger}(\omega)\otimes\hat{E}_k^{\dagger}(t).$$
(49)

To combine this decomposition with Redfield equation (45), we first may expand the commutators,

$$\dot{\hat{\rho}}(t) = -\alpha^2\,\mathrm{Tr}_E\Bigg[\int_0^{\infty} ds\,\hat{H}_I(t)\hat{H}_I(t-s)\hat{\rho}(t)\hat{\rho}_E(0) - \int_0^{\infty} ds\,\hat{H}_I(t)\hat{\rho}(t)\hat{\rho}_E(0)\hat{H}_I(t-s) - \int_0^{\infty} ds\,\hat{H}_I(t-s)\hat{\rho}(t)\hat{\rho}_E(0)\hat{H}_I(t) + \int_0^{\infty} ds\,\hat{\rho}(t)\hat{\rho}_E(0)\hat{H}_I(t-s)\hat{H}_I(t)\Bigg].$$
(50)

We now apply the eigenvalue decomposition in terms of $S_k(\omega)$ for $\hat{H}_I(t-s)$ and in terms of $S_k^{\dagger}(\omega)$ for $\hat{H}_I(t)$. By using the permutation property of the trace and the fact that $\left[H_E,\rho_E(0)\right] = 0$, and after some nontrivial algebra, we obtain

$$\dot{\hat{\rho}}(t) = \sum_{\omega,\omega'}\sum_{k,l}\left(e^{i\left(\omega'-\omega\right)t}\,\Gamma_{kl}(\omega)\left[S_l(\omega)\hat{\rho}(t),\,S_k^{\dagger}(\omega')\right] + e^{i\left(\omega-\omega'\right)t}\,\Gamma_{lk}^{*}(\omega)\left[S_l(\omega'),\,\hat{\rho}(t)S_k^{\dagger}(\omega)\right]\right),$$
(51)

where the effect of the environment has been absorbed into the factors,

$$\Gamma_{kl}(\omega) \equiv \int_0^{\infty} ds\, e^{i\omega s}\,\mathrm{Tr}_E\left[\hat{E}_k^{\dagger}(t)\hat{E}_l(t-s)\,\rho_E(0)\right],$$
(52)

where we are writing the environment operators of the interaction Hamiltonian in the interaction picture ($\hat{E}_l(t) = e^{iH_E t}\,E_l\,e^{-iH_E t}$). At this point, we can already perform the rotating wave approximation. By considering the time dependency in Eq. (51), we conclude that the terms with $|\omega-\omega'| \gg \alpha^2$ oscillate much faster than the typical time scale of the system evolution. Therefore, they do not contribute to the evolution of the system. In the low-coupling regime (α → 0), we can consider that only the resonant terms, ω = ω′, contribute to the dynamics and remove all the others. By applying this approximation, Eq. (51) reduces to

$$\dot{\hat{\rho}}(t) = \sum_{\omega}\sum_{k,l}\left(\Gamma_{kl}(\omega)\left[S_l(\omega)\hat{\rho}(t),\,S_k^{\dagger}(\omega)\right] + \Gamma_{lk}^{*}(\omega)\left[S_l(\omega),\,\hat{\rho}(t)S_k^{\dagger}(\omega)\right]\right).$$
(53)

To divide the dynamics into Hamiltonian and non-Hamiltonian parts, we now decompose the coefficients $\Gamma_{kl}$ into Hermitian and non-Hermitian parts, $\Gamma_{kl}(\omega) = \frac{1}{2}\gamma_{kl}(\omega) + i\pi_{kl}(\omega)$, with

$$\pi_{kl}(\omega) \equiv \frac{1}{2i}\left(\Gamma_{kl}(\omega) - \Gamma_{kl}^{*}(\omega)\right),\qquad \gamma_{kl}(\omega) \equiv \Gamma_{kl}(\omega) + \Gamma_{kl}^{*}(\omega) = \int_{-\infty}^{\infty} ds\, e^{i\omega s}\,\mathrm{Tr}\left[\hat{E}_k^{\dagger}(s)E_l\,\hat{\rho}_E(0)\right].$$
(54)

By these definitions, we can separate the Hermitian and non-Hermitian parts of the dynamics and we can transform back to the Schrödinger picture,

$$\dot{\rho}(t) = -i\left[H + H_{Ls},\rho(t)\right] + \sum_{\omega}\sum_{k,l}\gamma_{kl}(\omega)\left(S_l(\omega)\rho(t)S_k^{\dagger}(\omega) - \frac{1}{2}\left\{S_k^{\dagger}(\omega)S_l(\omega),\,\rho(t)\right\}\right).$$
(55)

The Hamiltonian dynamics is now influenced by a term $H_{Ls} = \sum_{\omega}\sum_{k,l}\pi_{kl}(\omega)\,S_k^{\dagger}(\omega)S_l(\omega)$. This is usually called a Lamb-shift Hamiltonian, and its role is to renormalize the system energy levels due to the interaction with the environment. Equation (55) is the first version of the Markovian master equation, but it is not in the Lindblad form yet.

It can be easily proved that the matrix formed by the coefficients $\gamma_{kl}(\omega)$ is positive, as they are the Fourier transform of a positive function $\mathrm{Tr}\left[\hat{E}_k^{\dagger}(s)E_l\,\hat{\rho}_E(0)\right]$. Therefore, this matrix can be diagonalized. This means that we can find a unitary operator, O, s.t.

$$O\,\gamma(\omega)\,O^{\dagger} = \begin{pmatrix} d_1(\omega) & 0 & \cdots & 0 \\ 0 & d_2(\omega) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_N(\omega) \end{pmatrix}.$$
(56)

We can now write the master equation in a diagonal form,

$$\dot{\rho}(t) = -i\left[H + H_{Ls},\rho(t)\right] + \sum_{i,\omega}\left(L_i(\omega)\rho(t)L_i^{\dagger}(\omega) - \frac{1}{2}\left\{L_i^{\dagger}(\omega)L_i(\omega),\,\rho(t)\right\}\right) \equiv \mathcal{L}\rho(t).$$
(57)

This is the celebrated Lindblad (or Lindblad-Gorini-Kossakowski-Sudarshan) master equation. In the simplest case, there will be only one relevant frequency ω and the equation can be further simplified to

$$\dot{\rho}(t) = -i\left[H + H_{Ls},\rho(t)\right] + \sum_{i}\left(L_i\rho(t)L_i^{\dagger} - \frac{1}{2}\left\{L_i^{\dagger}L_i,\,\rho(t)\right\}\right) \equiv \mathcal{L}\rho(t).$$
(58)

The operators Li are usually referred to as jump operators.
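As a compact summary of Eq. (58), the sketch below implements its right-hand side as a Python function (our own illustration; the Lamb-shift term is omitted and any rates are assumed to be absorbed into the jump operators). The test data are random and purely hypothetical; the printed trace of dρ/dt should vanish, reflecting trace preservation.

```python
import numpy as np

def lindblad_rhs(rho, H, jump_ops):
    """Right-hand side of Eq. (58): -i[H, rho] + sum_i (L rho L^+ - 1/2 {L^+ L, rho})."""
    drho = -1j * (H @ rho - rho @ H)
    for L in jump_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

# Quick sanity check with arbitrary (hypothetical) operators: the trace of
# d(rho)/dt must vanish, so the Lindblad equation preserves Tr[rho] = 1.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho)                       # a valid random density matrix
H = rng.normal(size=(3, 3)); H = H + H.T   # a random Hermitian Hamiltonian
L1 = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
print(np.trace(lindblad_rhs(rho, H, [L1])))   # ~ 0 (up to rounding errors)
```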

The second way of deriving the Lindblad equation starts from the following question: What is the most general (Markovian) way of mapping density matrices onto density matrices? This is usually the approach taken by quantum information researchers, who look for general transformations of quantum systems. We analyze this problem following mainly Ref. 28.

To start, we need to know what is the form of a general CPT-map.

Lemma 2.

Any map $\mathcal{V}: B(\mathcal{H})\to B(\mathcal{H})$ that can be written in the form $\mathcal{V}\rho = V\rho V^{\dagger}$ with $V \in B(\mathcal{H})$ is positive.

The proof of the lemma requires a little algebra and a known property of normal matrices.

Proof.

If $\rho \geq 0$, then $\rho = AA^{\dagger}$ with $A \in B(\mathcal{H})$. Therefore, $\mathcal{V}\rho = V\rho V^{\dagger}$ implies $\langle\psi|V\rho V^{\dagger}|\psi\rangle = \langle\psi|VAA^{\dagger}V^{\dagger}|\psi\rangle = \left\|A^{\dagger}V^{\dagger}|\psi\rangle\right\|^2 \geq 0$ for any $|\psi\rangle$. Therefore, if ρ is positive, the output of the map is also positive. End of the proof.

This is a sufficient condition for the positivity of a map, but it is not necessary. It could happen that there are maps that cannot be written in this form, but they are still positive. To go further, we need a more general condition and this comes in the form of the following theorem:

Theorem 1.

Choi’s theorem.

A linear map $\mathcal{V}: B(\mathcal{H})\to B(\mathcal{H})$ is completely positive iff it can be expressed as
$$\mathcal{V}\rho = \sum_i V_i\,\rho\,V_i^{\dagger},$$
(59)
with $V_i \in B(\mathcal{H})$.

The proof of this theorem requires some algebra.

Proof.

The “if” implication is a trivial consequence of Lemma 2. To prove the converse, we need to extend the dimension of our system by the use of an auxiliary system. If d is the dimension of the Hilbert space of pure states, H, we define a new Hilbert space of the same dimension HA.

We define a maximally entangled pure state in the bipartition $\mathcal{H}_A\otimes\mathcal{H}$ as
$$|\Gamma\rangle \equiv \sum_{i=0}^{d-1}|i\rangle_A\otimes|i\rangle,$$
(60)
with $|i\rangle$ and $|i\rangle_A$ arbitrary orthonormal bases of $\mathcal{H}$ and $\mathcal{H}_A$, respectively.
We can extend the action of our original map $\mathcal{V}$, which acts on $B(\mathcal{H})$, to our extended Hilbert space by defining the map $\mathcal{V}_2 : B(\mathcal{H}_A)\otimes B(\mathcal{H}) \to B(\mathcal{H}_A)\otimes B(\mathcal{H})$ as
$$\mathcal{V}_2 \equiv \mathbb{1}_{B(\mathcal{H}_A)}\otimes\mathcal{V}.$$
(61)
Note that the idea behind this map is to leave the auxiliary subsystem invariant while applying the original map to the original system. This map is positive because $\mathcal{V}$ is completely positive. This may appear trivial, but as explained before, complete positivity is a more restrictive property than positivity, and we are looking for a condition to ensure complete positivity.
We can now apply the extended map to the density matrix corresponding to the maximally entangled state (60), obtaining
$$\mathcal{V}_2\left(|\Gamma\rangle\langle\Gamma|\right) = \sum_{i,j=0}^{d-1}|i\rangle\langle j|\otimes\mathcal{V}\left(|i\rangle\langle j|\right).$$
(62)
Now, we can use the maximal entanglement of the state |Γ⟩ to relate the original map $\mathcal{V}$ and the action $\mathcal{V}_2\left(|\Gamma\rangle\langle\Gamma|\right)$ by taking matrix elements with respect to $\mathcal{H}_A$,
$$\mathcal{V}\left(|i\rangle\langle j|\right) = \langle i|_A\,\mathcal{V}_2\left(|\Gamma\rangle\langle\Gamma|\right)\,|j\rangle_A.$$
(63)
To relate this operation to the action of the map on an arbitrary vector $|\psi\rangle\in\mathcal{H}_A\otimes\mathcal{H}$, we can expand the vector in this basis as
$$|\psi\rangle = \sum_{i=0}^{d-1}\sum_{j=0}^{d-1}\alpha_{ij}\,|i\rangle_A\otimes|j\rangle.$$
(64)
We can also define an operator $V_{|\psi\rangle}\in B(\mathcal{H})$ s.t. it transforms |Γ⟩ into |ψ⟩. Its explicit action can be written as
$$\left(\mathbb{1}_A\otimes V_{|\psi\rangle}\right)|\Gamma\rangle = \sum_{i,j=0}^{d-1}\alpha_{ij}\left(\mathbb{1}_A\otimes|j\rangle\langle i|\right)\sum_{k=0}^{d-1}|k\rangle_A\otimes|k\rangle = \sum_{i,j,k=0}^{d-1}\alpha_{ij}\,|k\rangle_A\otimes|j\rangle\langle i|k\rangle = \sum_{i,j,k=0}^{d-1}\alpha_{ij}\,|k\rangle_A\otimes|j\rangle\,\delta_{i,k} = \sum_{i,j=0}^{d-1}\alpha_{ij}\,|i\rangle_A\otimes|j\rangle = |\psi\rangle.$$
(65)
At this point, we have related the vectors in the extended space $\mathcal{H}_A\otimes\mathcal{H}$ to operators acting on $\mathcal{H}$. This can only be done because the vector |Γ⟩ is maximally entangled. We now go back to our extended map $\mathcal{V}_2$. Its action on |Γ⟩⟨Γ| is given by Eq. (62), and as the result is a positive operator, it can be expanded as
$$\mathcal{V}_2\left(|\Gamma\rangle\langle\Gamma|\right) = \sum_{l=0}^{d^2-1}|v_l\rangle\langle v_l|,$$
(66)
with $|v_l\rangle\in\mathcal{H}_A\otimes\mathcal{H}$. The vectors $|v_l\rangle$ can be related to operators on $\mathcal{H}$ as in Eq. (65),
$$|v_l\rangle = \left(\mathbb{1}_A\otimes V_l\right)|\Gamma\rangle.$$
(67)
Based on this result, we can calculate the product of an arbitrary vector $|i\rangle_A\in\mathcal{H}_A$ with $|v_l\rangle$,
$$\langle i|_A\,|v_l\rangle = \langle i|_A\left(\mathbb{1}_A\otimes V_l\right)|\Gamma\rangle = V_l\sum_{k=0}^{d-1}\langle i|k\rangle_A\,|k\rangle = V_l\,|i\rangle.$$
(68)
This is the last ingredient we need for the proof.
We come back to the original question: we want to characterize the map $\mathcal{V}$. We do so by applying it to an arbitrary basis element |i⟩⟨j| of $B(\mathcal{H})$,
$$\mathcal{V}\left(|i\rangle\langle j|\right) = \left(\langle i|_A\otimes\mathbb{1}\right)\mathcal{V}_2\left(|\Gamma\rangle\langle\Gamma|\right)\left(|j\rangle_A\otimes\mathbb{1}\right) = \left(\langle i|_A\otimes\mathbb{1}\right)\left(\sum_{l=0}^{d^2-1}|v_l\rangle\langle v_l|\right)\left(|j\rangle_A\otimes\mathbb{1}\right) = \sum_{l=0}^{d^2-1}\left(\langle i|_A\otimes\mathbb{1}\right)|v_l\rangle\langle v_l|\left(|j\rangle_A\otimes\mathbb{1}\right) = \sum_{l=0}^{d^2-1}V_l\,|i\rangle\langle j|\,V_l^{\dagger}.$$
(69)
As |i⟩⟨j| is an arbitrary element of a basis, any operator can be expanded in this basis. Therefore, it is straightforward to prove that
$$\mathcal{V}\rho = \sum_{l=0}^{d^2-1}V_l\,\rho\,V_l^{\dagger}.$$
End of the proof.

Thanks to Choi's theorem, we know the general form of CP-maps, but there is still an issue to address. As density matrices should have trace one, we need to require any physical map to also be trace-preserving. This requirement gives us a new constraint that completely defines all CPT-maps. It comes from the following theorem:

Theorem 2.

Choi-Kraus’ theorem.

A linear map $\mathcal{V}: B(\mathcal{H})\to B(\mathcal{H})$ is completely positive and trace-preserving iff it can be expressed as
$$\mathcal{V}\rho = \sum_l V_l\,\rho\,V_l^{\dagger},$$
(70)
with $V_l \in B(\mathcal{H})$ fulfilling
$$\sum_l V_l^{\dagger}V_l = \mathbb{1}_{\mathcal{H}}.$$
(71)

Proof.
We have already proved that this is a completely positive map, and we only need to prove that it is also trace-preserving and that all trace-preserving maps fulfill Eq. (71). The "if" direction follows simply from the cyclic permutation and linearity properties of the trace,
$$\mathrm{Tr}\left[\mathcal{V}\rho\right] = \mathrm{Tr}\left[\sum_{l=1}^{d^2-1}V_l\,\rho\,V_l^{\dagger}\right] = \mathrm{Tr}\left[\sum_{l=1}^{d^2-1}V_l^{\dagger}V_l\,\rho\right] = \mathrm{Tr}\left[\rho\right].$$
(72)
We also have to prove that any map of the form (70) is trace-preserving only if the operators $V_l$ fulfill (71). We start by noting that if the map is trace-preserving, applying it to any arbitrary element of a basis of $B(\mathcal{H})$ should give
$$\mathrm{Tr}\left[\mathcal{V}\left(|i\rangle\langle j|\right)\right] = \mathrm{Tr}\left[|i\rangle\langle j|\right] = \delta_{i,j}.$$
(73)
As the map has a form given by (70), we can calculate this same trace in an alternative way,
$$\mathrm{Tr}\left[\mathcal{V}\left(|i\rangle\langle j|\right)\right] = \mathrm{Tr}\left[\sum_{l=1}^{d^2-1}V_l|i\rangle\langle j|V_l^{\dagger}\right] = \mathrm{Tr}\left[\sum_{l=1}^{d^2-1}V_l^{\dagger}V_l|i\rangle\langle j|\right] = \sum_k\langle k|\left(\sum_{l=1}^{d^2-1}V_l^{\dagger}V_l\right)|i\rangle\langle j|k\rangle = \langle j|\sum_{l=1}^{d^2-1}V_l^{\dagger}V_l|i\rangle,$$
(74)
where $\left\{|k\rangle\right\}$ is an arbitrary basis of $\mathcal{H}$. As both expressions must agree, we obtain
$$\langle j|\sum_{l=1}^{d^2-1}V_l^{\dagger}V_l|i\rangle = \delta_{i,j},$$
(75)
and therefore, condition (71) should be fulfilled. End of the proof.

Operators Vi of a map fulfilling condition (71) are called Kraus operators. Because of that, CPT-maps are sometimes also called Kraus maps, especially when they are presented as a collection of Kraus operators. Both concepts are ubiquitous in quantum information science. Kraus operators can also be time-dependent as long as they fulfill relation (71) at all times.
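As an illustration of the Choi-Kraus theorem, the sketch below checks condition (71) and the action of a Kraus map for a simple decay (amplitude-damping) channel. The specific Kraus operators and the value of p are our own hypothetical choices; they are not taken from the text above.

```python
import numpy as np

# Hypothetical illustration: Kraus operators of a decay (amplitude-damping) channel
# with decay probability p.
p = 0.3
V0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]])
V1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
kraus = [V0, V1]

# Condition (71): sum_l V_l^dagger V_l = identity
completeness = sum(V.conj().T @ V for V in kraus)
print(np.allclose(completeness, np.eye(2)))        # True

# The map V(rho) = sum_l V_l rho V_l^dagger preserves trace and positivity
rho = np.array([[0.2, 0.1j], [-0.1j, 0.8]])        # a valid density matrix
rho_out = sum(V @ rho @ V.conj().T for V in kraus)
print(np.trace(rho_out).real, np.linalg.eigvalsh(rho_out))
```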

At this point, we already know the form of CPT-maps, but we do not yet have a master equation, that is, a continuous-in-time set of differential equations. This means that we know how to perform an arbitrary operation on a system, but we do not have an equation describing its time evolution. To obtain one, we need to find a time-independent generator $\mathcal{L}$ such that
$$\frac{d}{dt}\rho(t) = \mathcal{L}\rho(t),$$
(76)
and therefore our CPT-map can be expressed as $\mathcal{V}(t) = e^{\mathcal{L}t}$. The following calculation is about finding the explicit expression of $\mathcal{L}$. We start by choosing an orthonormal basis of the space of bounded operators $B(\mathcal{H})$, $\left\{F_i\right\}_{i=1}^{d^2}$. To be orthonormal, it should satisfy the following condition:
$$\langle\langle F_i|F_j\rangle\rangle \equiv \mathrm{Tr}\left[F_i^{\dagger}F_j\right] = \delta_{i,j}.$$
(77)
Without any loss of generality, we select one of the elements of the basis to be proportional to the identity, $F_{d^2} = \frac{1}{\sqrt{d}}\mathbb{1}_{\mathcal{H}}$. It is trivial to prove that the norm of this element is one, and it is easy to see from Eq. (77) that all the other elements of the basis should have trace zero,
$$\mathrm{Tr}\left[F_i\right] = 0\qquad \forall\, i = 1,\ldots,d^2-1.$$
(78)
The closure relation of this basis is $\mathbb{1}_{B(\mathcal{H})} = \sum_i|F_i\rangle\rangle\langle\langle F_i|$. Therefore, the Kraus operators can be expanded in this basis by using the Fock-Liouville notation,
$$V_l(t) = \sum_{i=1}^{d^2}\langle\langle F_i|V_l(t)\rangle\rangle\,F_i.$$
(79)
As the map V(t) is in the form (59), we can apply (79) to obtain (For simplicity, in this discussion, we omit the explicit time-dependency of the density matrix.)
$$\mathcal{V}(t)\rho = \sum_l\left(\sum_{i=1}^{d^2}\langle\langle F_i|V_l(t)\rangle\rangle\,F_i\right)\rho\left(\sum_{j=1}^{d^2}\langle\langle V_l(t)|F_j\rangle\rangle\,F_j^{\dagger}\right) = \sum_{i,j=1}^{d^2}c_{i,j}(t)\,F_i\,\rho\,F_j^{\dagger},$$
(80)
where we have absorbed the summation over the Kraus operators into the coefficients $c_{i,j}(t) = \sum_l\langle\langle F_i|V_l(t)\rangle\rangle\langle\langle V_l(t)|F_j\rangle\rangle$. We now go back to the original problem by applying this expansion to the time derivative of Eq. (76),
$$\frac{d\rho}{dt} = \lim_{\Delta t\to 0}\frac{1}{\Delta t}\left(\mathcal{V}(\Delta t)\rho - \rho\right) = \lim_{\Delta t\to 0}\frac{1}{\Delta t}\left(\sum_{i,j=1}^{d^2}c_{i,j}(\Delta t)F_i\rho F_j^{\dagger} - \rho\right) = \lim_{\Delta t\to 0}\frac{1}{\Delta t}\left(\sum_{i,j=1}^{d^2-1}c_{i,j}(\Delta t)F_i\rho F_j^{\dagger} + \sum_{i=1}^{d^2-1}c_{i,d^2}(\Delta t)F_i\rho F_{d^2}^{\dagger} + \sum_{j=1}^{d^2-1}c_{d^2,j}(\Delta t)F_{d^2}\rho F_j^{\dagger} + c_{d^2,d^2}(\Delta t)F_{d^2}\rho F_{d^2}^{\dagger} - \rho\right),$$
(81)
where we have separated the summations to take into account that $F_{d^2} = \frac{1}{\sqrt{d}}\mathbb{1}_{\mathcal{H}}$. By using this property, this equation simplifies to
$$\frac{d\rho}{dt} = \lim_{\Delta t\to 0}\frac{1}{\Delta t}\left(\sum_{i,j=1}^{d^2-1}c_{i,j}(\Delta t)F_i\rho F_j^{\dagger} + \frac{1}{\sqrt{d}}\sum_{i=1}^{d^2-1}c_{i,d^2}(\Delta t)F_i\rho + \frac{1}{\sqrt{d}}\sum_{j=1}^{d^2-1}c_{d^2,j}(\Delta t)\rho F_j^{\dagger} + \frac{c_{d^2,d^2}(\Delta t)}{d}\rho - \rho\right).$$
(82)
The next step is to eliminate the explicit dependence on time. To do so, we define new coefficients that absorb the time intervals,
$$g_{i,j}\equiv\lim_{\Delta t\to 0}\frac{c_{i,j}(\Delta t)}{\Delta t}\ \ (i,j<d^2),\quad g_{i,d^2}\equiv\lim_{\Delta t\to 0}\frac{c_{i,d^2}(\Delta t)}{\Delta t}\ \ (i<d^2),\quad g_{d^2,j}\equiv\lim_{\Delta t\to 0}\frac{c_{d^2,j}(\Delta t)}{\Delta t}\ \ (j<d^2),\quad g_{d^2,d^2}\equiv\lim_{\Delta t\to 0}\frac{c_{d^2,d^2}(\Delta t)-d}{\Delta t}.$$
(83)
Introducing these coefficients in Eq. (82), we obtain an equation with no explicit dependence on time,
$$\frac{d\rho}{dt} = \sum_{i,j=1}^{d^2-1}g_{i,j}F_i\rho F_j^{\dagger} + \frac{1}{\sqrt{d}}\sum_{i=1}^{d^2-1}g_{i,d^2}F_i\rho + \frac{1}{\sqrt{d}}\sum_{j=1}^{d^2-1}g_{d^2,j}\rho F_j^{\dagger} + \frac{g_{d^2,d^2}}{d}\rho.$$
(84)
As we are already summing over all the Kraus operators, it is useful to define a new operator,
$$F\equiv\frac{1}{\sqrt{d}}\sum_{i=1}^{d^2-1}g_{i,d^2}F_i.$$
(85)
Applying it to Eq. (84),
$$\frac{d\rho}{dt} = \sum_{i,j=1}^{d^2-1}g_{i,j}F_i\rho F_j^{\dagger} + F\rho + \rho F^{\dagger} + \frac{g_{d^2,d^2}}{d}\rho.$$
(86)
At this point, we want to separate the dynamics of the density matrix into a Hermitian (equivalent to the von Neumann equation) and an incoherent part. We split the operator F into two to obtain a Hermitian and anti-Hermitian part,
$$F = \frac{F+F^{\dagger}}{2} - i\,\frac{i\left(F-F^{\dagger}\right)}{2} \equiv G - iH,$$
(87)
where we have used the notation H for obvious reasons (this Hermitian operator will play the role of a Hamiltonian). If we insert this definition into Eq. (86), we obtain
$$\frac{d\rho}{dt} = \sum_{i,j=1}^{d^2-1}g_{i,j}F_i\rho F_j^{\dagger} + \left\{G,\rho\right\} - i\left[H,\rho\right] + \frac{g_{d^2,d^2}}{d}\rho.$$
(88)
We now define the last operator needed for this proof, $G_2\equiv G + \frac{g_{d^2,d^2}}{2d}$, and the expression for the time derivative becomes
$$\frac{d\rho}{dt} = \sum_{i,j=1}^{d^2-1}g_{i,j}F_i\rho F_j^{\dagger} + \left\{G_2,\rho\right\} - i\left[H,\rho\right].$$
(89)
Until now, we have imposed the complete positivity of the map, as we have required it to be written in terms of Kraus operators, but we have not used the trace-preserving property. We now impose this property, and by using the cyclic property of the trace, we obtain a new condition,
$$\mathrm{Tr}\left[\frac{d\rho}{dt}\right] = \mathrm{Tr}\left[\left(\sum_{i,j=1}^{d^2-1}g_{i,j}F_j^{\dagger}F_i + 2G_2\right)\rho\right] = 0.$$
(90)
Therefore, G2 should fulfill
$$G_2 = -\frac{1}{2}\sum_{i,j=1}^{d^2-1}g_{i,j}F_j^{\dagger}F_i.$$
(91)
By applying this condition, we arrive at the Lindblad master equation,
$$\frac{d\rho}{dt} = -i\left[H,\rho\right] + \sum_{i,j=1}^{d^2-1}g_{i,j}\left(F_i\rho F_j^{\dagger} - \frac{1}{2}\left\{F_j^{\dagger}F_i,\rho\right\}\right).$$
(92)
Finally, by definition of the coefficients, gi,j can be arranged to form a Hermitian, and therefore diagonalizable, matrix. By diagonalizing it, we obtain the diagonal form of the Lindblad master equation,
$$\frac{d}{dt}\rho = -i\left[H,\rho\right] + \sum_k\Gamma_k\left(L_k\rho L_k^{\dagger} - \frac{1}{2}\left\{L_k^{\dagger}L_k,\rho\right\}\right) \equiv \mathcal{L}\rho.$$
(93)

Some interesting properties of the Lindblad equation are the following:

  • Under Lindblad dynamics, if all the jump operators are Hermitian, the purity of a system fulfills $\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right]\leq 0$. The proof is given in the Appendix.

  • The Lindblad master equation is invariant under unitary transformations of the jump operators,

$$\sqrt{\Gamma_i}\,L_i \to \sqrt{\Gamma_i'}\,L_i' = \sum_j v_{ij}\,\sqrt{\Gamma_j}\,L_j,$$
(94)

with v representing a unitary matrix. It is also invariant under inhomogeneous transformations in the form

$$L_i \to L_i' = L_i + a_i,\qquad H \to H' = H + \frac{1}{2i}\sum_j\Gamma_j\left(a_j^{*}L_j - a_j L_j^{\dagger}\right) + b,$$
(95)

where $a_i\in\mathbb{C}$ and $b\in\mathbb{R}$. The proof of this can be found in Ref. 2 (Sec. III).

  • Thanks to the previous properties, it is possible to find traceless jump operators without loss of generality.

Example 6: A master equation for a two-level system with decay.

Continuing our example of a two-level atom, we can make it more realistic by including the possibility of atom decay by the emission of a photon. This emission happens due to the interaction of the atom with the surrounding vacuum state. (This is why atoms decay.) The complete quantum system would be in this case the "atom + vacuum" system, and its time evolution would be given by the von Neumann equation (13), where H represents the total "atom + vacuum" Hamiltonian. This system belongs to an infinite-dimensional Hilbert space, as the radiation field has infinitely many modes. If we are interested only in the time dependence of the state of the atom, we can derive a Markovian master equation for the reduced density matrix of the atom (see, for instance, Refs. 1 and 2). The master equation we will study is

$$\frac{d}{dt}\rho(t) = -i\left[H,\rho\right] + \Gamma\left(\sigma^{-}\rho\,\sigma^{+} - \frac{1}{2}\left\{\sigma^{+}\sigma^{-},\rho\right\}\right),$$
(96)

where Γ is the coupling between the atom and the vacuum, and $\sigma^{-} = |0\rangle\langle 1|$ and $\sigma^{+} = \left(\sigma^{-}\right)^{\dagger} = |1\rangle\langle 0|$ are the atomic lowering and raising operators.

In the Fock-Liouville space [following the same ordering as in Eq. (25)], the Liouvillian corresponding to the evolution (96) is

$$\tilde{\mathcal{L}} = \begin{pmatrix} 0 & i\Omega & -i\Omega & \Gamma \\ i\Omega & iE-\frac{\Gamma}{2} & 0 & -i\Omega \\ -i\Omega & 0 & -iE-\frac{\Gamma}{2} & i\Omega \\ 0 & -i\Omega & i\Omega & -\Gamma \end{pmatrix}.$$
(97)

Expressing explicitly the set of differential equations, we obtain

$$\begin{aligned} \dot{\rho}_{00} &= i\Omega\rho_{01} - i\Omega\rho_{10} + \Gamma\rho_{11},\\ \dot{\rho}_{01} &= i\Omega\rho_{00} + \left(iE - \tfrac{\Gamma}{2}\right)\rho_{01} - i\Omega\rho_{11},\\ \dot{\rho}_{10} &= -i\Omega\rho_{00} - \left(iE + \tfrac{\Gamma}{2}\right)\rho_{10} + i\Omega\rho_{11},\\ \dot{\rho}_{11} &= -i\Omega\rho_{01} + i\Omega\rho_{10} - \Gamma\rho_{11}. \end{aligned}$$
(98)

End of example.
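The Liouvillian (97) can be generated programmatically instead of by hand. The following sketch (our own NumPy illustration, using the vectorization convention introduced after Example 5) builds the matrix form of the generator of Eq. (96) from H, σ⁻, and Γ; the parameter values are arbitrary.

```python
import numpy as np

def liouvillian(H, jump_ops, rates):
    """Matrix form of the Lindblad generator in the Fock-Liouville space.
    Uses the row-major ordering (rho_00, rho_01, rho_10, rho_11, ...) of Eq. (25)
    and the identity vec(A rho B) = (A kron B^T) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L_mat = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for G, L in zip(rates, jump_ops):
        LdL = L.conj().T @ L
        L_mat += G * (np.kron(L, L.conj())
                      - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T))
    return L_mat

Omega, E, Gamma = 1.0, 1.0, 0.1               # illustrative values
H = np.array([[0.0, Omega], [Omega, E]], dtype=complex)
sigma_minus = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # |0><1|

L_tilde = liouvillian(H, [sigma_minus], [Gamma])
print(np.round(L_tilde, 3))                   # compare with the entries of Eq. (97)
```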

To calculate the time evolution of a system described by a master equation of the form (96), we need to solve a system of coupled differential equations with as many equations as elements of the density matrix. In our example, this means solving a set of four equations, but the dimension of the problem increases exponentially with the system size. Because of this, for bigger systems, techniques for dimension reduction are required.

To solve systems of ordinary differential equations, there are several canonical algorithms. This can be done analytically only for a few simple systems, by using sophisticated techniques such as damping bases.29 In most cases, we have to rely on approximate numerical methods. One of the most popular approaches is the fourth-order Runge-Kutta algorithm (see, for instance, Ref. 30, for an explanation of the algorithm). By integrating the equations of motion, we can calculate the density matrix at any time t.
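A minimal sketch of this approach (our own Python illustration of the standard fourth-order Runge-Kutta rule, with arbitrary parameter values) applied to the master equation (96):

```python
import numpy as np

def rk4_step(f, rho, dt):
    """One fourth-order Runge-Kutta step for d(rho)/dt = f(rho)."""
    k1 = f(rho)
    k2 = f(rho + 0.5 * dt * k1)
    k3 = f(rho + 0.5 * dt * k2)
    k4 = f(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Master equation (96) for the two-level atom with decay (parameter values are ours)
Omega, E, Gamma = 1.0, 1.0, 0.1
H = np.array([[0.0, Omega], [Omega, E]], dtype=complex)
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)      # sigma^-

def rhs(rho):
    smsm = sm.conj().T @ sm
    return (-1j * (H @ rho - rho @ H)
            + Gamma * (sm @ rho @ sm.conj().T - 0.5 * (smsm @ rho + rho @ smsm)))

rho = np.diag([0.0, 1.0]).astype(complex)     # initial state: excited, no coherences
dt, steps = 0.01, 1000
for _ in range(steps):
    rho = rk4_step(rhs, rho, dt)
print(np.round(rho, 4), np.trace(rho).real)   # state at t = 10, trace ~ 1
```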

The steady state of a system can be obtained by evolving it for a long time t. Unfortunately, this method presents two difficulties. First, if the dimension of the system is big, the computing time is huge; this means that for systems beyond a few qubits, it takes too long to reach the steady state. Even worse is the problem of stability of the algorithms for integrating differential equations. Due to small errors in the calculation of derivatives by the use of finite differences, the trace of the density matrix may not remain equal to one. This error accumulates during the propagation of the state, giving nonphysical results after a finite time. One solution to this problem is the use of algorithms specifically designed to preserve the trace, such as the Crank-Nicolson algorithm.31 The problem with this kind of algorithm is that it consumes more computational power than Runge-Kutta, and therefore it is not useful to calculate the long-time behavior of big systems. An analysis of different methods and their advantages and disadvantages can be found in Ref. 32.

Example 7: Time dependency of the two-level system with decay.

In this section, we show some results of solving Eq. (96) and calculating the density matrix as a function of time. A Mathematica notebook solving this problem can be found in Ref. 20. To illustrate the time behavior of this system, we calculate the evolution for different parameters. In all cases, we start with an initial state in which the system is excited, ρ11 = 1, with no coherence between the states, meaning ρ01 = ρ10 = 0. If the decay parameter Γ is equal to zero, the problem reduces to solving the von Neumann equation, and the result is displayed in Fig. 2. The other extreme case would be a system with no coherent dynamics (Ω = 0) but with decay. In this case, as shown in Fig. 4, we observe an exponential decay of the population of the excited state.

FIG. 4.

Population dynamics under a pure incoherent dynamics (Γ = 0.1, n = 1, Ω = 0, and E = 1). The blue line represents ρ11 and the orange one ρ00.

Finally, we can calculate the dynamics of a system with both coherent driving and decay. In this case, both behaviors coexist and there are oscillations and decay. This is displayed in Fig. 5. End of example.

FIG. 5.

Population dynamics under a pure incoherent dynamics (Γ = 0.2, n = 1, Ω = 0, and E = 1). The blue line represents ρ11 and the orange one ρ00.

As we have discussed before, in the Fock-Liouville space the Liouvillian corresponds to a matrix (in general complex, non-Hermitian, and nonsymmetric). By diagonalizing it, we can calculate both the time-dependent state and the steady state of the density matrix. For most purposes, in the short-time regime, integrating the differential equations may be more efficient than diagonalizing. This is due to the high dimensionality of the Liouvillian, which makes the diagonalization process very costly in computing power. On the other hand, to calculate the steady state, diagonalization is the most commonly used method due to the problems of integrating the equations of motion discussed in Sec. VI A.

Let us see first how we use diagonalization to calculate the time evolution of a system. As the Liouvillian matrix is non-Hermitian, we cannot apply the spectral theorem to it, and it may have different left and right eigenvectors. For a specific eigenvalue $\Lambda_i$, we can obtain the right and left eigenvectors $|\Lambda_i^R\rangle\rangle$ and $\langle\langle\Lambda_i^L|$ s.t.

$$\tilde{\mathcal{L}}\,|\Lambda_i^R\rangle\rangle = \Lambda_i\,|\Lambda_i^R\rangle\rangle,\qquad \langle\langle\Lambda_i^L|\,\tilde{\mathcal{L}} = \Lambda_i\,\langle\langle\Lambda_i^L|.$$
(99)

An arbitrary state can be expanded in the eigenbasis of $\tilde{\mathcal{L}}$ as1,33

$$|\rho(0)\rangle\rangle = \sum_i|\Lambda_i^R\rangle\rangle\,\langle\langle\Lambda_i^L|\rho(0)\rangle\rangle.$$
(100)

Therefore, the state of the system at a time t can be calculated in the form

$$|\rho(t)\rangle\rangle = \sum_i e^{\Lambda_i t}\,|\Lambda_i^R\rangle\rangle\,\langle\langle\Lambda_i^L|\rho(0)\rangle\rangle.$$
(101)

Note that in this case, to calculate the state at a time t, we do not need to integrate over the interval [0, t], as we would have to do with a numerical solution of the set of differential equations. This is an advantage when we want to calculate the long-time behavior. Furthermore, to calculate the steady state of a system, we can look at the eigenvector with zero eigenvalue, as this is the only one that survives when t → ∞.
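A sketch of this spectral method (our own Python illustration, assuming the Liouvillian is diagonalizable so that left and right eigenvectors can be bi-orthonormalized):

```python
import numpy as np
from scipy.linalg import eig

def propagate_by_diagonalization(L_tilde, rho0_vec, t):
    """Eq. (101): |rho(t)>> = sum_i e^{Lambda_i t} |Lambda_i^R>> <<Lambda_i^L|rho(0)>>."""
    evals, vl, vr = eig(L_tilde, left=True, right=True)
    rho_t = np.zeros_like(rho0_vec, dtype=complex)
    for i in range(len(evals)):
        right, left = vr[:, i], vl[:, i]
        coeff = (left.conj() @ rho0_vec) / (left.conj() @ right)   # bi-orthonormalization
        rho_t += np.exp(evals[i] * t) * coeff * right
    return rho_t

# Usage with the Liouvillian of the decaying two-level atom, built by vectorization
# (parameter values are illustrative assumptions).
Omega, E, Gamma = 1.0, 1.0, 0.1
H = np.array([[0, Omega], [Omega, E]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
I2, smsm = np.eye(2), sm.conj().T @ sm
L_tilde = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
           + Gamma * (np.kron(sm, sm.conj())
                      - 0.5 * np.kron(smsm, I2) - 0.5 * np.kron(I2, smsm.T)))

rho0 = np.array([0, 0, 0, 1], dtype=complex)          # start in |1><1|
print(np.round(propagate_by_diagonalization(L_tilde, rho0, 10.0).reshape(2, 2), 4))
```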

For any finite system, Evans' theorem ensures the existence of at least one zero eigenvalue of the Liouvillian matrix.34,35 The eigenvector corresponding to this zero eigenvalue is the steady state of the system. In exceptional cases, a Liouvillian can present more than one zero eigenvalue due to the presence of symmetries in the system.26,27,36 This is a nongeneric case, and for most purposes, we can assume the existence of a unique fixed point of the dynamics. Therefore, diagonalization can be used to calculate the steady state without calculating the full evolution of the system. This can be done even analytically for small systems, and when numerical approaches are required, this technique gives better precision than integrating the equations of motion. The spectrum of Liouvillian superoperators has been analyzed in several recent papers.33,37

Example 8: Spectrum analysis of the Liouvillian for the two-level system with decay.

Here, we diagonalize (97) and obtain its steady state. A Mathematica notebook solving this problem can be downloaded from Ref. 20. This specific case is straightforward to diagonalize, as the dimension of the system is very low. We obtain four different eigenvalues: two of them are real, while the other two form a complex-conjugate pair. Figure 6 displays the spectrum of the superoperator $\mathcal{L}$ given in (97).

FIG. 6.

Spectrum of the Liouvillian matrix given by (97) for the general case of both coherent and incoherent dynamics (Γ = 0.2, n = 1, Ω = 0, E = 1).

As there is only one zero eigenvalue, we can conclude that there is only one steady state, and any initial density matrix will evolve to it after an infinite-time evolution. By selecting the right eigenvector corresponding to the zero eigenvalue and normalizing it to unit trace, we obtain the steady-state density matrix. This can be done even analytically. The result is the matrix

$$\rho^{SS} = \frac{1}{(1+2n)\left[4E^2+(\Gamma+2n\Gamma)^2+8\Omega^2\right]}\begin{pmatrix} (1+n)\left[4E^2+(\Gamma+2n\Gamma)^2\right]+4(1+2n)\Omega^2 & 2\left(2E-i(\Gamma+2n\Gamma)\right)\Omega \\ 2\left(2E+i(\Gamma+2n\Gamma)\right)\Omega & n\left[4E^2+(\Gamma+2n\Gamma)^2\right]+4(1+2n)\Omega^2 \end{pmatrix}.$$
(102)

End of example.
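Numerically, the steady state is obtained by picking the eigenvector of the Liouvillian whose eigenvalue is (numerically) zero and normalizing it to unit trace, as in the following sketch (our own NumPy illustration with arbitrary parameter values; the populations can be compared with the diagonal of Eq. (102) in the limit n = 0).

```python
import numpy as np

# Liouvillian of Eq. (97) built by vectorization (illustrative parameter values)
Omega, E, Gamma = 1.0, 1.0, 0.1
H = np.array([[0, Omega], [Omega, E]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
I2, smsm = np.eye(2), sm.conj().T @ sm
L_tilde = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
           + Gamma * (np.kron(sm, sm.conj())
                      - 0.5 * np.kron(smsm, I2) - 0.5 * np.kron(I2, smsm.T)))

# Steady state: right eigenvector of the (numerically) zero eigenvalue
evals, evecs = np.linalg.eig(L_tilde)
idx = np.argmin(np.abs(evals))                 # Evans' theorem: a zero eigenvalue exists
rho_ss = evecs[:, idx].reshape(2, 2)
rho_ss = rho_ss / np.trace(rho_ss)             # normalize to unit trace
print(np.round(rho_ss, 4))
```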

The author wants to acknowledge the Spanish Ministry and the Agencia Española de Investigación (AEI) for financial support under Grant No. FIS2017-84256-P (FEDER funds).

In this Appendix, we prove that under the Lindblad dynamics given by Eq. (93), the purity of a density matrix fulfills $\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right]\leq 0$ if all the jump operators of the Lindblad dynamics are Hermitian.

We start just by interchanging the trace and the derivative. As the trace is a linear operation, it commutes with the derivation, and we have

$$\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right] = \mathrm{Tr}\left[\frac{d\rho^2}{dt}\right] = \mathrm{Tr}\left[2\rho\dot{\rho}\right],$$
(A1)

where we have used the cyclic property of the trace operator. (This property is used throughout the proof without explicitly mentioning it.) By inserting the Lindblad equation (93) into the rhs of (A1), we obtain

$$\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right] = -i\,\mathrm{Tr}\left[2\left(\rho H\rho - \rho\rho H\right)\right] + 2\sum_k\Gamma_k\,\mathrm{Tr}\left[\rho L_k\rho L_k^{\dagger}\right] - 2\sum_k\Gamma_k\,\mathrm{Tr}\left[\rho^2 L_k^{\dagger}L_k\right].$$
(A2)

The first term is zero. Therefore, the inequality we want to prove becomes equivalent to

$$\sum_k\Gamma_k\,\mathrm{Tr}\left[\rho L_k\rho L_k^{\dagger}\right] \leq \sum_k\Gamma_k\,\mathrm{Tr}\left[\rho^2 L_k^{\dagger}L_k\right].$$
(A3)

As the density matrix is Hermitian, we can diagonalize it to obtain its eigenvalues ($\Lambda_i\in\mathbb{R}$) and its corresponding eigenvectors ($|\Lambda_i\rangle$). The density matrix is diagonal in its own eigenbasis and can be expressed as (this eigenbasis changes with time, of course, but the proof is valid as the inequality should be fulfilled at any time)

$$\rho \to \tilde{\rho} = \sum_i\Lambda_i\,|\Lambda_i\rangle\langle\Lambda_i|,$$
(A4)

where we assume an ordering of the eigenvalues in the form Λ0 ≥ Λ1 ≥ ⋯ ≥Λd.

We denote the jump operators expressed in this basis by $\tilde{L}_k$. Expanding each term of the inequality (A3) in this basis, we obtain

$$\sum_k\Gamma_k\,\mathrm{Tr}\left[\rho L_k\rho L_k^{\dagger}\right] = \sum_k\Gamma_k\,\mathrm{Tr}\left[\sum_i\Lambda_i|\Lambda_i\rangle\langle\Lambda_i|\,\tilde{L}_k\sum_j\Lambda_j|\Lambda_j\rangle\langle\Lambda_j|\,\tilde{L}_k^{\dagger}\right] = \sum_k\Gamma_k\sum_{i,j}\Lambda_i\Lambda_j\left|\langle\Lambda_i|\tilde{L}_k|\Lambda_j\rangle\right|^2 = \sum_k\Gamma_k\sum_{i,j}\Lambda_i\Lambda_j\,x_{ij}^{(k)},$$
(A5)

where we have introduced the coefficients $x_{ij}^{(k)}\equiv\left|\langle\Lambda_i|\tilde{L}_k|\Lambda_j\rangle\right|^2$. As the operators $L_k$ are Hermitian, these coefficients fulfill $x_{ij}^{(k)} = x_{ji}^{(k)}$.

The second term is expanded as

$$\sum_k\Gamma_k\,\mathrm{Tr}\left[\rho^2 L_k^{\dagger}L_k\right] = \sum_k\Gamma_k\,\mathrm{Tr}\left[\sum_i\Lambda_i|\Lambda_i\rangle\langle\Lambda_i|\sum_j\Lambda_j|\Lambda_j\rangle\langle\Lambda_j|\,\tilde{L}_k^{\dagger}\tilde{L}_k\right] = \sum_k\Gamma_k\sum_i\Lambda_i^2\,\langle\Lambda_i|\tilde{L}_k^{\dagger}\tilde{L}_k|\Lambda_i\rangle = \sum_k\Gamma_k\sum_i\Lambda_i^2\,\langle\Lambda_i|\tilde{L}_k^{\dagger}\left(\sum_j|\Lambda_j\rangle\langle\Lambda_j|\right)\tilde{L}_k|\Lambda_i\rangle = \sum_k\Gamma_k\sum_{i,j}\Lambda_i^2\left|\langle\Lambda_j|\tilde{L}_k|\Lambda_i\rangle\right|^2 = \sum_k\Gamma_k\sum_{i,j}\Lambda_i^2\,x_{ij}^{(k)},$$
(A6)

where we have used the closure relation in the density matrix eigenbasis, $\mathbb{1}_{\mathcal{H}} = \sum_j|\Lambda_j\rangle\langle\Lambda_j|$, and the symmetry $x_{ji}^{(k)} = x_{ij}^{(k)}$. The inequality can now be written as

$$\sum_k\Gamma_k\sum_{i,j}\Lambda_i\Lambda_j\,x_{ij}^{(k)} \leq \sum_k\Gamma_k\sum_{i,j}\Lambda_i^2\,x_{ij}^{(k)}.$$
(A7)

As xij = xji, we can reorder the ij sum in the following way:

$$\sum_k\Gamma_k\sum_i\left(\sum_{j<i}2\Lambda_i\Lambda_j\,x_{ij}^{(k)} + \Lambda_i^2\,x_{ii}^{(k)}\right) \leq \sum_k\Gamma_k\sum_i\left(\sum_{j<i}\left(\Lambda_i^2+\Lambda_j^2\right)x_{ij}^{(k)} + \Lambda_i^2\,x_{ii}^{(k)}\right).$$
(A8)

Therefore, we can reduce the proof of this inequality to the proof of a set of inequalities,

$$2\Lambda_i\Lambda_j\,x_{ij}^{(k)} \leq \left(\Lambda_i^2+\Lambda_j^2\right)x_{ij}^{(k)},\qquad \forall\, k,i,j.$$
(A9)

It is obvious that (A9) implies (A8) (but not the other way around). The inequalities (A9) are easily proved by taking into account that $x_{ij}^{(k)}\geq 0$ and that $2\Lambda_i\Lambda_j\leq\Lambda_i^2+\Lambda_j^2$, which follows from $\left(\Lambda_i-\Lambda_j\right)^2\geq 0$.

1. C. W. Gardiner and P. Zoller, Quantum Noise (Springer, Berlin, 2000).
2. H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).
3. A. Rivas and S. Huelga, Open Quantum Systems. An Introduction (Springer, New York, 2012).
4. G. Lindblad, "On the generators of quantum dynamical semigroups," Commun. Math. Phys. 48, 119 (1976).
5. V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, "Completely positive dynamical semigroups of N-level systems," J. Math. Phys. 17, 821 (1976).
6. D. Manzano and E. Kyoseva, "An atomic symmetry-controlled thermal switch," Sci. Rep. 6, 31161 (2016).
7. T. Prosen, "Open XXZ spin chain: Nonequilibrium steady state and a strict bound on ballistic transport," Phys. Rev. Lett. 106, 217206 (2011).
8. D. Manzano, M. Tiersch, A. Asadian, and H. J. Briegel, "Quantum transport efficiency and Fourier's law," Phys. Rev. E 86, 061118 (2012).
9. D. Manzano, C. Chuang, and J. Cao, "Quantum transport in d-dimensional lattices," New J. Phys. 18, 043044 (2015).
10. B. Olmos, I. Lesanovsky, and J. P. Garrahan, "Facilitated spin models of dissipative quantum glasses," Phys. Rev. Lett. 109, 020403 (2012).
11. J. Metz, M. Trupke, and A. Beige, "Robust entanglement through macroscopic quantum jumps," Phys. Rev. Lett. 97, 040503 (2006).
12. R. Jones, J. A. Needham, I. Lesanovsky, F. Intravaia, and B. Olmos, "Modified dipole-dipole interaction and dissipation in an atomic ensemble near surfaces," Phys. Rev. A 97, 053841 (2018).
13. D. A. Lidar, I. L. Chuang, and K. B. Whaley, "Decoherence-free subspaces for quantum computation," Phys. Rev. Lett. 81(12), 2594 (1998).
14. B. Kraus, H. P. Büchler, S. Diehl, A. Kantian, A. Micheli, and P. Zoller, "Preparation of entangled states by quantum Markov processes," Phys. Rev. A 78, 042307 (2008).
15. T. A. Brun, "Continuous measurements, quantum trajectories, and decoherent histories," Phys. Rev. A 61, 042107 (2000).
16. M. Schlosshauer, Decoherence and the Quantum-to-Classical Transition (Springer, New York, 2007).
17. M. Plenio and S. Huelga, "Dephasing-assisted transport: Quantum networks and biomolecules," New J. Phys. 10, 113019 (2008).
18. M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik, "Environment-assisted quantum walks in photosynthetic energy transfer," J. Chem. Phys. 129, 174106 (2008).
19. D. Manzano, "Quantum transport in quantum networks and photosynthetic complexes at the steady state," PLoS One 8(2), e57041 (2013).
20. See https://ic1.ugr.es/manzano/Descargas/Lindblad/Lindblad_Manzano.zip for downloading the notebooks used to generate all the data of this paper.
21. L. Debnath and P. Mikusiński, Introduction to Hilbert Spaces with Applications (Elsevier Academic Press, 2005).
22. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
23. J. J. Sakurai, Modern Quantum Mechanics (Addison-Wesley Publishing Co., 1994).
24. A. Galindo and P. Pascual, Quantum Mechanics I (Springer, Berlin, 1990).
25. A. Peres, Quantum Theory: Concepts and Methods (Kluwer Academic Publishers, 1995).
26. D. Manzano and P. I. Hurtado, "Harnessing symmetry to control quantum transport," Adv. Phys. 67(1), 1 (2018).
27. D. Manzano and P. I. Hurtado, "Symmetry and the thermodynamics of currents in open quantum systems," Phys. Rev. B 90, 125138 (2014).
28. M. M. Wilde, Quantum Information Theory (Cambridge University Press, Cambridge, 2017).
29. H. J. Briegel and B. G. Englert, "Quantum optical master equation: The use of damping bases," Phys. Rev. A 47, 3311 (1993).
30. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes (Cambridge University Press, Cambridge, 2007).
31. A. Goldberg, H. Schey, and J. L. Schwartz, "Computer-generated motion pictures of one-dimensional quantum-mechanical transmission and reflection phenomena," Am. J. Phys. 35, 177 (1967).
32. M. Riesch and C. Jirauschek, "Analyzing the positivity preservation of numerical methods for the Liouville-von Neumann equation," J. Comput. Phys. 390, 290 (2019).
33. J. Thingna, D. Manzano, and J. Cao, "Dynamical signatures of molecular symmetries in nonequilibrium quantum transport," Sci. Rep. 6, 28027 (2016).
34. D. E. Evans, "Irreducible quantum dynamical semigroups," Commun. Math. Phys. 54, 293 (1977).
35. D. E. Evans and H. Hanche-Olsen, "The generators of positive semigroups," J. Funct. Anal. 32, 207 (1979).
36. B. Buča and T. Prosen, "A note on symmetry reductions of the Lindblad equation: Transport in constrained open spin chains," New J. Phys. 14, 073007 (2012).
37. V. V. Albert and L. Jiang, "Symmetries and conserved quantities in Lindblad master equations," Phys. Rev. A 89, 022118 (2014).
38. For this discussion, we use a bipartite system as an example. The extension to a general multipartite system is straightforward.
39. This is clear for finite systems because in this case scalar space and Hilbert space are the same things. It also happens to be true for infinite spaces.
40. From now on, we will not write the identity operators of the Hamiltonian parts explicitly when they can be inferred from the context.
41. A. G. Redfield, Advances in Magnetic and Optical Resonance (Elsevier, 1965), Vol. 1, p. 1.