The theory of open quantum systems is one of the most essential tools for the development of quantum technologies. Furthermore, the Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad) master equation plays a key role, as it is the most general generator of Markovian dynamics in quantum systems. In this paper, we present this equation together with its derivation and methods for solving it. The presentation aims to be as self-contained and straightforward as possible, so as to be useful to readers with no previous knowledge of this field.
Open quantum system techniques are vital for many studies in quantum mechanics.1–3 This is because closed quantum systems are just an idealization of real systems (as are closed classical systems), since in Nature nothing can be completely isolated. In practical problems, the interaction of the system of interest with the environment cannot be avoided, and we require an approach in which the environment can be effectively removed from the equations of motion.
The general problem addressed by open quantum theory is sketched in Fig. 1. In the most general picture, we have a total system that constitutes a closed quantum system by itself. We are mostly interested in a subsystem of the total one (we call it just “system” instead of “total system”). Therefore, the whole system is divided into our system of interest and an environment. The goal of open quantum theory is to infer the equations of motion of the reduced system from the equations of motion of the total system. For practical purposes, the reduced equations of motion should be easier to solve than the full dynamics of the system. Because of this requirement, several approximations are usually made in the derivation of the reduced dynamics.
One particularly interesting case of study is the dynamics of a system connected to several baths modeled by a Markovian interaction. In this case, the most general quantum dynamics is generated by the Lindblad equation (also called the Gorini-Kossakowski-Sudarshan-Lindblad equation).4,5 It is difficult to overemphasize the importance of this master equation. It plays an important role in fields such as quantum optics,1,6 condensed matter,7–10 atomic physics,11,12 quantum information,13,14 decoherence,15,16 and quantum biology.17–19
The purpose of this paper is to provide basic knowledge about the Lindblad master equation. In Sec. II, the mathematical requirements are introduced, while in Sec. III, there is a brief review of quantum mechanical concepts that are required to understand the paper. Section IV includes a description of a mathematical framework, the Fock-Liouville space (FLS), that is especially useful for this problem. In Sec. V, we define the concept of completely positive and trace-preserving maps (CPT-maps), we derive the Lindblad master equation from two different approaches, and we discuss several properties of the equation. Finally, Sec. VI is devoted to the resolution of the master equation using different methods. To illustrate the techniques for solving the Lindblad equation, an example consisting of a two-level system with decay is analyzed throughout, illustrating the content of every section. The problems proposed are solved by the use of Mathematica notebooks that can be found in Ref. 20.
II. MATHEMATICAL BASIS
The primary mathematical tool in quantum mechanics is the theory of Hilbert spaces. This mathematical framework allows many results to be extended from finite linear vector spaces to infinite ones. In any case, this tutorial deals only with finite systems and, therefore, the expressions “Hilbert space” and “linear space” are equivalent. We assume that the reader is skilled in operating in Hilbert spaces. For a deeper treatment of Hilbert spaces, we recommend the book by Debnath and Mikusiński.21 If the reader needs a brief review of the main concepts required for understanding this paper, we recommend Nielsen and Chuang’s quantum computing book.22 Some basic knowledge of calculus is also required, such as integration, differentiation, and the resolution of simple differential equations. To help the readers, we have made a glossary of the most used mathematical terms. It can also be used as a checklist of terms that the reader should be familiar with.
ℋ represents a Hilbert space, usually the space of pure states of a system.
|ψ⟩ represents a vector of the Hilbert space ℋ (a column vector).
⟨ψ| represents a vector of the dual Hilbert space of ℋ (a row vector).
⟨ψ|ϕ⟩ is the scalar product of vectors |ψ⟩ and |ϕ⟩.
‖|ψ⟩‖ is the norm of vector |ψ⟩, ‖|ψ⟩‖ ≡ √⟨ψ|ψ⟩.
B(ℋ) represents the space of bounded operators acting on the Hilbert space ℋ.
𝟙 is the identity operator of the Hilbert space, s.t. 𝟙|ψ⟩ = |ψ⟩.
|ψ⟩⟨ϕ| is the operator such that (|ψ⟩⟨ϕ|)|χ⟩ = ⟨ϕ|χ⟩ |ψ⟩.
A† is the Hermitian conjugate of the operator A.
U is a unitary operator iff UU† = U†U = 𝟙.
H is a Hermitian operator iff H = H†.
A is a positive operator iff ⟨ϕ|A|ϕ⟩ ≥ 0, ∀|ϕ⟩ ∈ ℋ.
P is a projector iff PP = P.
Tr[B] represents the trace of operator B.
𝒟(ℋ) represents the space of density matrices, meaning the space of bounded operators acting on ℋ that are positive and have trace 1.
|ρ⟩⟩ is a vector in the Fock-Liouville space.
⟨⟨ϕ|ρ⟩⟩ ≡ Tr[ϕ†ρ] is the scalar product of operators ϕ and ρ in the Fock-Liouville space.
𝓛̃ is the matrix representation of a superoperator in the Fock-Liouville space.
III. (VERY SHORT) INTRODUCTION TO QUANTUM MECHANICS
The purpose of this chapter is to refresh the main concepts of quantum mechanics necessary to understand the Lindblad master equation. Of course, this is NOT a full quantum mechanics course. If a reader has no background in this field, just reading this chapter would be insufficient to understand the remainder of this tutorial. Therefore, if the reader is unsure of their capacities, we recommend first going through a quantum mechanics course or reading an introductory book carefully. There are many great quantum mechanics books on the market. For beginners, we recommend Sakurai’s book23 or Nielsen and Chuang’s quantum computing book.22 For more advanced students looking for a solid mathematical description of quantum mechanical methods, we recommend Galindo and Pascual.24 Finally, for a more philosophical discussion, you should go to Peres’ book.25
We start stating the quantum mechanics postulates that we need to understand the derivation and application of the Lindblad master equation. The first postulate is related to the concept of a quantum state.
Associated with any isolated physical system, there is a complex Hilbert space ℋ, known as the state space of the system. The state of the system is entirely described by a state vector, which is a unit vector of the Hilbert space ℋ.
As quantum mechanics is a general theory (or a set of theories), it does not tell us which is the proper Hilbert space for each system. This is usually determined system by system. A natural question to ask is whether there is a one-to-one correspondence between unit vectors and physical states, meaning whether every unit vector corresponds to a physical state. This is resolved by the following corollary, which is a primary ingredient of quantum computation theory (see Ref. 22, Chapter 7).
All unit vectors of a finite Hilbert space correspond to possible physical states of a system.
A density matrix ρ has unit trace, Tr[ρ] = 1.
A density matrix is a positive matrix ρ > 0.
If we are given a density matrix, it is easy to verify whether it corresponds to a pure or a mixed state. For pure states, and only for them, Tr[ρ²] = Tr[ρ] = 1. Therefore, if Tr[ρ²] < 1, the state is mixed. The quantity Tr[ρ²] is called the purity of the state, and it fulfills the bounds 1/d ≤ Tr[ρ²] ≤ 1, where d is the dimension of the Hilbert space.
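The notebooks accompanying this tutorial are written in Mathematica,20 but the purity bounds can be checked in a few lines of any numerical language. The following Python sketch (all function and variable names are our own illustration, not part of the paper) evaluates Tr[ρ²] for a pure qubit state and for the maximally mixed state, which saturate the upper and lower bounds for d = 2:

```python
import numpy as np

def purity(rho):
    """Tr[rho^2] for a density matrix given as a NumPy array."""
    return np.real(np.trace(rho @ rho))

# Pure state |+> = (|0> + |1>)/sqrt(2): purity is exactly 1.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# Maximally mixed state in d = 2: purity reaches the lower bound 1/d = 0.5.
rho_mixed = np.eye(2) / 2

print(purity(rho_pure))   # 1.0
print(purity(rho_mixed))  # 0.5
```

Any valid density matrix of a qubit gives a value between these two extremes.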
The Hilbert space of a two-level system is just the two-dimensional linear space ℂ². Examples of this kind of system are spin-1/2 particles and two-level atoms. We can define a basis of it by the orthonormal vectors {|0⟩, |1⟩}. A pure state of the system would be any unit vector of ℂ². It can always be expressed as |ψ⟩ = a|0⟩ + b|1⟩ with a, b ∈ ℂ s.t. |a|² + |b|² = 1.
Once we know the state of a system, it is natural to ask about the possible outcomes of experiments (see Ref. 23, Sec. 1.4).
This postulate allows us to calculate the possible outcomes of a measurement, the probability of each outcome, and the after-measurement state. A measurement usually changes the state, as the state can only remain unchanged if it was already in an eigenstate of the observable.
If we have a pure state |ψ⟩ = a|0⟩ + b|1⟩, the probability of measuring the energy E0 would be P(E0) = |⟨0|ψ⟩|2 = |a|2. The probability of finding E1 would be P(E1) = |⟨1|ψ⟩|2 = |b|2. The expected value of the measurement is ⟨H⟩ = E0|a|2 + E1|b|2.
In the more general case of having a mixed state ρ = ρ00|0⟩⟨0| + ρ01|0⟩⟨1| + ρ10|1⟩⟨0| + ρ11|1⟩⟨1|, the probability of finding the ground state energy is P(E0) = ρ00, and the expected value of the energy would be ⟨H⟩ = E0ρ00 + E1ρ11. End of example.
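These expressions are just special cases of the Born rule P(Ei) = Tr[Pi ρ] and ⟨H⟩ = Tr[Hρ]. A minimal Python sketch (the numerical values and names below are our own hypothetical illustration) confirms them for a mixed qubit state:

```python
import numpy as np

E0, E1 = 0.0, 1.0            # hypothetical energy levels of the two states
H = np.diag([E0, E1])        # Hamiltonian diagonal in the {|0>, |1>} basis

# A mixed state: 70% ground, 30% excited population, with some coherence.
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

P0 = np.real(rho[0, 0])               # probability of measuring E0 is rho_00
mean_H = np.real(np.trace(H @ rho))   # expected energy <H> = Tr[H rho]
print(P0, mean_H)  # 0.7 0.3
```

Note that the coherences ρ01, ρ10 do not enter the energy statistics here because H is diagonal in this basis.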
Another natural question to ask is how quantum systems evolve. The time-evolution of a pure state of a closed quantum system is given by the Schrödinger equation (see Ref. 24, Sec. 2.9).
The Hamiltonian of a system is the operator corresponding to its energy, and obtaining it for a given system can be a nontrivial task.
It is easy to prove that unitary operators preserve the norm of vectors and, therefore, transform pure states into pure states. As we did with the state of a system, it is reasonable to wonder if any unitary operator corresponds to the evolution of a real physical system. The answer is yes.
All unitary evolutions of a state belonging to a finite Hilbert space can be constructed in several physical realizations such as photons and cold atoms.
The proof of this lemma can be found in Ref. 22.
By using the von Neumann equation (13), we can calculate the populations as a function of time. The system is then driven between the states, and the populations present Rabi oscillations, as shown in Fig. 2. End of example.
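The closed-system evolution behind these Rabi oscillations can be sketched numerically. The following Python fragment (an illustrative sketch; the paper's own notebooks20 are in Mathematica, and the names here are ours) propagates a two-level system under a coupling Hamiltonian H = Ωσx and shows the excited-state population oscillating as cos²(Ωt):

```python
import numpy as np
from scipy.linalg import expm

Omega = 1.0  # coupling strength (our choice of units)
H = Omega * np.array([[0, 1], [1, 0]], dtype=complex)   # H = Omega * sigma_x

rho0 = np.diag([0.0, 1.0]).astype(complex)  # start in the excited state |1>

def evolve(rho, t):
    """Closed-system evolution rho(t) = U rho U^dagger with U = exp(-iHt)."""
    U = expm(-1j * H * t)
    return U @ rho @ U.conj().T

# Population of |1> follows cos^2(Omega t): full inversion at t = pi/(2 Omega),
# back to the initial state after a complete Rabi cycle at t = pi/Omega.
for t in [0.0, np.pi / (2 * Omega), np.pi / Omega]:
    print(np.real(evolve(rho0, t)[1, 1]))
```

Replacing `expm` by a direct integration of dρ/dt = −i[H, ρ] gives the same curves.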
Finally, as we are interested in composite quantum systems, we need to postulate how to work with them.
The state space of a composite physical system, composed of N subsystems, is the tensor product of the state spaces of its components, ℋ = ℋ1 ⊗ ℋ2 ⊗ ⋯ ⊗ ℋN. The state of the composite physical system is given by a unit vector of ℋ. Moreover, if each subsystem belonging to ℋi is prepared in the state |ψi⟩, the total state is given by |ψ⟩ = |ψ1⟩ ⊗ |ψ2⟩ ⊗⋯⊗ |ψN⟩.
The symbol ⊗ represents the tensor product of Hilbert spaces, vectors, and operators. If we have a composite mixed state where each component is prepared in the state ρi, the total state is given by ρ = ρ1 ⊗ ρ2 ⊗⋯⊗ ρN.
States that can be expressed in the simple form |ψ⟩ = |ψ1⟩ ⊗ |ψ2⟩, in a specific basis, are very particular, and they are called separable states.38 In general, an arbitrary state must be described as |ψ⟩ = ∑i,j ci,j |ψi⟩ ⊗ |ψj⟩ (or ρ = ∑i,j pi,j ρi ⊗ ρj for mixed states). Nonseparable states are called entangled states.
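A practical way to test separability of a pure bipartite state is the Schmidt decomposition: reshaping the coefficients ci,j into a matrix, the state is separable iff that matrix has rank one. A short Python sketch (names and example states are our own illustration) compares a product state with a Bell state:

```python
import numpy as np

def schmidt_coeffs(psi, dA, dB):
    """Schmidt coefficients of a pure bipartite state: singular values
    of the coefficient matrix c_ij in |psi> = sum_ij c_ij |i>|j>."""
    return np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)

product = np.kron([1.0, 0.0], [1.0, 1.0] / np.sqrt(2))  # |0> (x) |+>
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)

# A state is separable iff only one Schmidt coefficient is nonzero.
print(np.sum(schmidt_coeffs(product, 2, 2) > 1e-12))  # 1 -> separable
print(np.sum(schmidt_coeffs(bell, 2, 2) > 1e-12))     # 2 -> entangled
```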
IV. THE FOCK-LIOUVILLE HILBERT SPACE. THE LIOUVILLE SUPEROPERATOR
In this section, we revise a useful framework for both analytical and numerical calculations. It is clear that some linear combinations of density matrices are valid density matrices (as long as they preserve positivity and unit trace). Because of that, we can create a Hilbert space of density matrices just by defining a scalar product.39 This allows us to define a linear space of matrices, effectively converting the matrices into vectors (ρ → |ρ⟩⟩). This space is called the Fock-Liouville space (FLS). The usual scalar product of matrices ϕ and ρ is defined as ⟨⟨ϕ|ρ⟩⟩ ≡ Tr[ϕ†ρ]. The Liouville superoperator from Eq. (13) is now an operator acting on the Hilbert space of density matrices. The main utility of the FLS is that it allows a matrix representation of the evolution operator.

Example 5: Time evolution of a two-level atom.
The density matrix of our system (25) can be expressed in the FLS as
The time evolution of a mixed state is given by the von Neumann equation (13). The Liouvillian superoperator can now be expressed as a matrix,
where each row is calculated just by observing the output of the operation in the computational basis of the density matrix space. The time evolution of the system now corresponds to the matrix equation d|ρ(t)⟩⟩/dt = 𝓛̃ |ρ(t)⟩⟩, which in matrix notation would be
End of example.
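The vectorization behind the FLS is easy to reproduce numerically. The following Python sketch (an illustration with our own naming; it uses the row-major identity vec(AXB) = (A ⊗ Bᵀ) vec(X)) builds the Liouvillian matrix for the coherent part, 𝓛̃ = −i(H ⊗ 𝟙 − 𝟙 ⊗ Hᵀ), and checks that its action on |ρ⟩⟩ reproduces the commutator −i[H, ρ]:

```python
import numpy as np

Omega = 1.0
H = Omega * np.array([[0, 1], [1, 0]], dtype=complex)

# Row-major vectorization rho -> |rho>>, using vec(A X B) = (A kron B^T) vec(X):
# the von Neumann commutator becomes the matrix below acting on vec(rho).
L = -1j * (np.kron(H, np.eye(2)) - np.kron(np.eye(2), H.T))

rho = np.array([[0.25, 0.1j],
                [-0.1j, 0.75]], dtype=complex)

lhs = (L @ rho.flatten()).reshape(2, 2)   # Liouvillian acting in the FLS
rhs = -1j * (H @ rho - rho @ H)           # von Neumann commutator directly
print(np.allclose(lhs, rhs))  # True
```

Dissipative terms can be vectorized with the same identity, as we do later for the steady-state calculation.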
V. CPT-MAPS AND THE LINDBLAD MASTER EQUATION
A. Completely positive maps
The problem we want to study is to find the most general Markovian transformation between density matrices. Until now, we have seen that quantum systems can evolve in two ways: by a coherent evolution (postulate 3) and by collapsing after a measurement (postulate 2). Many efforts have been made to unify these two ways of evolving,16 without a definite answer so far. It is reasonable to ask what is the most general transformation that can be performed on a quantum system and what is the dynamical equation that describes this transformation.
We are looking for maps that transform density matrices into density matrices. We define 𝒟(ℋ) as the space of all density matrices in the Hilbert space ℋ. Therefore, we are looking for a map of this space onto itself, 𝒱 : 𝒟(ℋ) → 𝒟(ℋ). To ensure that the output of the map is a density matrix, it should fulfill the following properties:
Trace preserving. Tr[𝒱(A)] = Tr[A], ∀A ∈ B(ℋ).
Completely positive (see below).
Any map that fulfills these two properties is called a completely positive and trace-preserving map (CPT-map). The first property is quite apparent, and it does not require more thinking. The second one is a little more complicated, and it requires an intermediate definition.
A map 𝒱 is positive iff ∀A ∈ B(ℋ) s.t. A ≥ 0, 𝒱(A) ≥ 0.
This definition is based on the idea that, as density matrices are positive, any physical map should transform positive matrices into positive matrices. One could naively think that this condition would be sufficient to guarantee the physical validity of a map; it is not. As we know, there exist composite systems, and our density matrix could be the partial trace of a more complicated state. Because of that, we need to impose a more general condition.
A map 𝒱 is completely positive iff, ∀n ∈ ℕ, the extended map 𝒱 ⊗ 𝟙n is positive.
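A standard numerical test of complete positivity, anticipating Choi's theorem discussed in Sec. V C, is to build the Choi matrix C = ∑ij |i⟩⟨j| ⊗ 𝒱(|i⟩⟨j|) and check whether it is positive. The textbook example of a positive but not completely positive map is the transpose. A Python sketch (our own illustration; function names are ours):

```python
import numpy as np

d = 2

def choi(channel, d):
    """Choi matrix of a map: C = sum_ij |i><j| (x) channel(|i><j|)."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            C += np.kron(E, channel(E))
    return C

transpose_map = lambda X: X.T   # positive, but NOT completely positive
identity_map = lambda X: X      # trivially completely positive

# The transpose map has a negative Choi eigenvalue; the identity does not.
print(np.linalg.eigvalsh(choi(transpose_map, d)).min() < 0)   # True
print(np.linalg.eigvalsh(choi(identity_map, d)).min() >= -1e-12)  # True
```

The negative eigenvalue signals that applying the transpose to one half of an entangled pair can produce a nonpositive "density matrix," which is exactly what the definition above forbids.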
B. Derivation of the Lindblad equation from microscopic dynamics
The most common derivation of the Lindblad master equation is based on open quantum theory. The Lindblad equation is then an effective equation of motion for a subsystem that belongs to a more complicated system. This derivation can be found in several textbooks, such as Breuer and Petruccione’s2 as well as Gardiner and Zoller’s.1 Here, we follow the derivation presented in Ref. 26. Our initial point is displayed in Fig. 3. A total system belonging to a Hilbert space ℋT is divided into our system of interest, belonging to a Hilbert space ℋS, and the environment, living in ℋE.
The evolution of the total system is given by the von Neumann equation (13),

dρT(t)/dt = −i[HT, ρT(t)].
As we are interested in the dynamics of the system without the environment, we trace over the environment degrees of freedom to obtain the reduced density matrix of the system, ρ(t) = TrE[ρT]. To separate the effect of the total Hamiltonian on the system and the environment, we divide it in the form HT = HS + HE + αHI, with HS ∈ B(ℋS), HE ∈ B(ℋE), and HI ∈ B(ℋT), being α a measure of the strength of the system-environment interaction. Therefore, we have a part acting on the system, a part acting on the environment, and the interaction term. Without losing any generality, the interaction term can be decomposed in the following way:
HI = ∑i Si ⊗ Ei,  (34)

with Si ∈ B(ℋS) and Ei ∈ B(ℋE).40
To better describe the dynamics of the system, it is useful to work in the interaction picture (see Ref. 24 for a detailed explanation of the Schrödinger, Heisenberg, and interaction pictures). In the interaction picture, density matrices evolve with time due to the interaction Hamiltonian, while operators evolve with the system and environment Hamiltonians. An arbitrary operator O is represented in this picture by the time-dependent operator Ô(t), and its time evolution is

Ô(t) ≡ e^{i(HS+HE)t} O e^{−i(HS+HE)t}.  (35)
The time evolution of the total density matrix is given in this picture by

dρ̂T(t)/dt = −iα [ĤI(t), ρ̂T(t)].  (36)
This equation can be easily integrated to give

ρ̂T(t) = ρ̂T(0) − iα ∫0^t ds [ĤI(s), ρ̂T(s)].  (37)
This formula is exact, but it still has the complication of calculating an integral in the total Hilbert space. It is also troublesome that the state at time t depends on the integration of the density matrix over all previous times. To make progress, we can introduce Eq. (37) into Eq. (36), giving

dρ̂T(t)/dt = −iα [ĤI(t), ρ̂T(0)] − α² ∫0^t ds [ĤI(t), [ĤI(s), ρ̂T(s)]].  (38)
By applying this method one more time, we obtain

dρ̂T(t)/dt = −iα [ĤI(t), ρ̂T(0)] − α² ∫0^t ds [ĤI(t), [ĤI(s), ρ̂T(0)]] + O(α³).  (39)
After this substitution, the integration of the previous states of the system is included only in the terms that are O(α³) or higher. At this moment, we perform our first approximation by considering that the strength of the interaction between the system and the environment is small. Therefore, we can neglect the high-order terms in Eq. (39). Under this approximation, we have

dρ̂T(t)/dt = −iα [ĤI(t), ρ̂T(0)] − α² ∫0^t ds [ĤI(t), [ĤI(s), ρ̂T(0)]].  (40)
We are interested in finding an equation of motion for ρ, so we trace over the environment degrees of freedom,

dρ̂(t)/dt = −iα TrE[ĤI(t), ρ̂T(0)] − α² ∫0^t ds TrE[ĤI(t), [ĤI(s), ρ̂T(0)]].  (41)
This is not a closed time-evolution equation for ρ̂(t) because the time derivative still depends on the full density matrix ρ̂T. To proceed, we need to make two more assumptions. First, we assume that at t = 0, the system and the environment are in a separable state of the form ρT(0) = ρ(0) ⊗ ρE(0). This means that there are no correlations between the system and the environment. This may be the case if the system and the environment have not interacted at previous times or if the correlations between them are short-lived. Second, we assume that the initial state of the environment is thermal, meaning that it is described by a density matrix of the form ρE(0) = e^{−HE/T} / Tr[e^{−HE/T}], being T the temperature and taking the Boltzmann constant as kB = 1. By using these assumptions, and the expansion of HI (34), we can calculate an expression for the first element of the rhs of Eq. (41),

TrE[ĤI(t), ρ̂T(0)] = ∑i [Ŝi(t), ρ̂(0)] Tr[Êi(t) ρE(0)].  (42)
To calculate the explicit value of this term, we may use that ⟨Ei⟩ ≡ Tr[Ei ρE(0)] = 0 for all values of i. This looks like a strong assumption, but it is not. If our total Hamiltonian does not fulfill it, we can always rewrite it as HT = HS′ + HE + αHI′, with HS′ ≡ HS + α ∑i ⟨Ei⟩ Si and HI′ ≡ ∑i Si ⊗ (Ei − ⟨Ei⟩𝟙). It is clear that now ⟨Ei′⟩ = 0, with Ei′ ≡ Ei − ⟨Ei⟩𝟙, and the system Hamiltonian is changed just by the addition of an energy shift that does not affect the system dynamics. Because of that, we can assume ⟨Ei⟩ = 0 for all i. Using the cyclic property of the trace, it is easy to prove that the term of Eq. (42) is equal to zero, and the equation of motion (41) reduces to

dρ̂(t)/dt = −α² ∫0^t ds TrE[ĤI(t), [ĤI(s), ρ̂T(0)]].  (43)
This equation still includes the entire state of the system and environment. To unravel the system from the environment, we have to make a more restrictive assumption. As we are working in the weak coupling regime, we may suppose that the system and the environment are uncorrelated during the whole time evolution. Of course, this is only an approximation: due to the interaction Hamiltonian, some correlations between system and environment are expected to appear. On the other hand, we may assume that the time scales of correlation (τcorr) and relaxation of the environment (τrel) are much smaller than the typical system time scale (τsys), as the coupling strength is very small (α ≪ 1). Under this strong assumption, we can consider that the environment state is always thermal and decoupled from the system state, ρ̂T(t) ≈ ρ̂(t) ⊗ ρE(0). Equation (43) then transforms into

dρ̂(t)/dt = −α² ∫0^t ds TrE[ĤI(t), [ĤI(s), ρ̂(t) ⊗ ρE(0)]].  (44)
The equation of motion is now closed for the system and local in time. It is still non-Markovian, as it depends on the initial state preparation of the system. We can obtain a Markovian equation by realizing that the kernel in the integration decays fast enough that we can extend the upper limit of the integration to infinity with no real change in the outcome. By doing so, and by changing the integration variable to s → t − s, we obtain the famous Redfield equation41

dρ̂(t)/dt = −α² ∫0^∞ ds TrE[ĤI(t), [ĤI(t − s), ρ̂(t) ⊗ ρE(0)]].  (45)
It is known that this equation does not guarantee the positivity of the map, and it sometimes gives rise to density matrices that are nonpositive. To ensure complete positivity, we need to perform one further approximation, the rotating wave approximation. To do so, we use the spectrum of the superoperator 𝓗S, defined by 𝓗S[A] ≡ [HS, A]. The eigenvectors of this superoperator form a complete basis of the space B(ℋS), and, therefore, we can expand the system operators from Eq. (34) in this basis,

Sk = ∑ω Sk(ω),  (46)
where the operators Sk(ω) fulfill

[HS, Sk(ω)] = −ω Sk(ω),  (47)
being ω the eigenvalues of 𝓗S. It is easy to obtain also the Hermitian conjugate,

[HS, Sk†(ω)] = +ω Sk†(ω).  (48)
To apply this decomposition, we need to change back to the Schrödinger picture for the part of the interaction Hamiltonian acting on the system’s Hilbert space. This is done by the expression Ŝk(t) = e^{iHS t} Sk e^{−iHS t}. By using the eigenexpansion (46), we arrive at

Ŝk(t) = ∑ω e^{−iωt} Sk(ω).  (49)
To combine this decomposition with the Redfield equation (45), we first may expand the commutators,

TrE[ĤI(t), [ĤI(t − s), ρ̂(t) ⊗ ρE(0)]] = TrE[ĤI(t) ĤI(t − s) ρ̂(t) ⊗ ρE(0) − ĤI(t) ρ̂(t) ⊗ ρE(0) ĤI(t − s)] + H.c.  (50)
We now apply the eigenvalue decomposition in terms of Sk(ω) for ĤI(t − s) and in terms of Sk†(ω′) for ĤI(t). By using the permutation property of the trace and the fact that [HE, ρE(0)] = 0, and after some nontrivial algebra, we obtain

dρ̂(t)/dt = ∑ω,ω′ ∑k,l e^{i(ω′−ω)t} Γkl(ω) [Sl(ω) ρ̂(t) Sk†(ω′) − Sk†(ω′) Sl(ω) ρ̂(t)] + H.c.,  (51)
where the effect of the environment has been absorbed into the factors

Γkl(ω) ≡ ∫0^∞ ds e^{iωs} Tr[Êk†(s) Êl(0) ρE(0)],  (52)
where we are writing the environment operators of the interaction Hamiltonian in the interaction picture, Êk(t) ≡ e^{iHE t} Ek e^{−iHE t}. At this point, we can already perform the rotating wave approximation. By considering the time dependency in Eq. (51), we conclude that the terms with ω′ ≠ ω oscillate much faster than the typical time scale of the system evolution. Therefore, they do not contribute to the evolution of the system. In the low-coupling regime (α → 0), we can consider that only the resonant terms, ω = ω′, contribute to the dynamics and remove all the others. By applying this approximation, Eq. (51) reduces to

dρ̂(t)/dt = ∑ω ∑k,l Γkl(ω) [Sl(ω) ρ̂(t) Sk†(ω) − Sk†(ω) Sl(ω) ρ̂(t)] + H.c.  (53)
To divide the dynamics into Hamiltonian and non-Hamiltonian parts, we now decompose the operators Γkl into Hermitian and non-Hermitian parts, Γkl(ω) = (1/2) γkl(ω) + i πkl(ω), with

γkl(ω) ≡ Γkl(ω) + Γlk*(ω),
πkl(ω) ≡ (1/2i) [Γkl(ω) − Γlk*(ω)].  (54)
By these definitions, we can separate the Hermitian and non-Hermitian parts of the dynamics, and we can transform back to the Schrödinger picture,

dρ(t)/dt = −i[HS + HLS, ρ(t)] + ∑ω ∑k,l γkl(ω) [Sl(ω) ρ(t) Sk†(ω) − (1/2){Sk†(ω) Sl(ω), ρ(t)}].  (55)
The Hamiltonian dynamics is now influenced by a term HLS = ∑ω ∑k,l πkl(ω) Sk†(ω) Sl(ω). This is usually called the Lamb shift Hamiltonian, and its role is to renormalize the system energy levels due to the interaction with the environment. Equation (55) is the first version of the Markovian master equation, but it is not in the Lindblad form yet.
It can be proved that the matrix formed by the coefficients γkl(ω) is positive, as they are the Fourier transform of a positive function. Therefore, this matrix can be diagonalized. This means that we can find a unitary operator O s.t.

O γ(ω) O† = diag(γ1(ω), …, γN(ω)).  (56)
We can now write the master equation in a diagonal form,

dρ(t)/dt = −i[HS + HLS, ρ(t)] + ∑ω ∑k γk(ω) [Lk(ω) ρ(t) Lk†(ω) − (1/2){Lk†(ω) Lk(ω), ρ(t)}],  (57)

where the new operators Lk(ω) are the linear combinations of the Sl(ω) given by the unitary O.
This is the celebrated Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad) master equation. In the simplest case, there will be only one relevant frequency ω, and the equation can be further simplified to

dρ(t)/dt = −i[H, ρ(t)] + ∑k γk [Lk ρ(t) Lk† − (1/2){Lk† Lk, ρ(t)}].  (58)
The operators Li are usually referred to as jump operators.
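The diagonal form above translates directly into code. The following Python sketch (an illustration with our own naming, not the paper's Mathematica notebooks20) evaluates the right-hand side of the Lindblad equation for arbitrary jump operators and checks its trace-preserving property, Tr[dρ/dt] = 0:

```python
import numpy as np

def lindblad_rhs(rho, H, jump_ops, rates):
    """Right-hand side of the diagonal Lindblad equation:
    drho/dt = -i[H, rho] + sum_k g_k (L rho L^dag - 1/2 {L^dag L, rho})."""
    drho = -1j * (H @ rho - rho @ H)
    for g, L in zip(rates, jump_ops):
        LdL = L.conj().T @ L
        drho += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return drho

# Sanity check with a decaying, driven qubit: the Lindblad form preserves
# the trace, so Tr[drho/dt] must vanish for any state.
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^- = |0><1|
H = np.array([[0, 1], [1, 0]], dtype=complex)
rho = np.array([[0.2, 0.1], [0.1, 0.8]], dtype=complex)
print(abs(np.trace(lindblad_rhs(rho, H, [sm], [0.5]))) < 1e-12)  # True
```

The same function also preserves Hermiticity of ρ, which is easy to verify by comparing the output with its conjugate transpose.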
C. Derivation of the Lindblad equation as a CPT generator
The second way of deriving the Lindblad equation comes from the following question: What is the most general (Markovian) way of mapping density matrices onto density matrices? This is usually the approach of quantum information researchers, who look for general transformations of quantum systems. We analyze this problem following mainly Ref. 28.
To start, we need to know the general form of a CPT-map.
Any map that can be written in the form 𝒱(ρ) = ∑i Vi ρ Vi†, with Vi ∈ B(ℋ), is positive.
The proof of the lemma requires a little algebra and a known property of normal matrices.
For any |ψ⟩ ∈ ℋ, we have ⟨ψ|Vi ρ Vi†|ψ⟩ = ⟨ψ̃i|ρ|ψ̃i⟩, with |ψ̃i⟩ ≡ Vi†|ψ⟩. Therefore, if ρ is positive, ⟨ψ|𝒱(ρ)|ψ⟩ = ∑i ⟨ψ̃i|ρ|ψ̃i⟩ ≥ 0, and the output of the map is also positive. End of the proof.
This is a sufficient condition for the positivity of a map, but it is not necessary: there could be positive maps that cannot be written in this form. To go further, we need a more general condition, and it comes in the form of the following theorem (Choi’s theorem): a linear map 𝒱 is completely positive iff it can be expressed as 𝒱(ρ) = ∑i Vi ρ Vi†, with Vi ∈ B(ℋ).
The proof of this theorem requires some algebra.
The “if” implication is a trivial consequence of Lemma 2. To prove the converse, we need to extend the dimension of our system by the use of an auxiliary system. If d is the dimension of the Hilbert space of pure states ℋ, we define a new Hilbert space ℋ′ of the same dimension.
Thanks to Choi’s theorem, we know the general form of CP-maps, but there is still an issue to address. As density matrices should have trace one, we need to require any physical map to be also trace-preserving. This requirement gives us a new constraint that completely defines all CPT-maps: a CP-map 𝒱(ρ) = ∑i Vi ρ Vi† is also trace-preserving iff ∑i Vi† Vi = 𝟙.  (71)
Operators Vi of a map fulfilling condition (71) are called Kraus operators. Because of that, CPT-maps are sometimes also called Kraus maps, especially when they are presented as a collection of Kraus operators. Both concepts are ubiquitous in quantum information science. Kraus operators can also be time-dependent, as long as they fulfill relation (71) at all times.
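As a concrete illustration (a standard textbook example, not taken from this paper; the decay probability `p` below is our own choice), the amplitude-damping channel is a CPT-map whose two Kraus operators satisfy condition (71). The sketch also applies the map and shows population being transferred from the excited to the ground state, which is the discrete-time analog of the decay example studied below:

```python
import numpy as np

# Amplitude-damping channel: a standard example of a Kraus (CPT) map.
p = 0.3  # probability of decay during the map (illustrative value)
V0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]])
V1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
kraus = [V0, V1]

# Trace preservation, condition (71): sum_i Vi^dag Vi = identity.
print(np.allclose(sum(V.conj().T @ V for V in kraus), np.eye(2)))  # True

# Applying the map to the excited state |1><1| moves population p to |0><0|.
rho = np.diag([0.0, 1.0])
out = sum(V @ rho @ V.conj().T for V in kraus)
print(np.real(out[0, 0]))  # 0.3
```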
D. Properties of the Lindblad master equation
Some interesting properties of the Lindblad equation are the following:
Under Lindblad dynamics, if all the jump operators are Hermitian, the purity of the system fulfills (d/dt) Tr[ρ²] ≤ 0. The proof is given in the Appendix.
The Lindblad master equation is invariant under unitary transformations of the jump operators,

√γi Li → √γi′ Li′ = ∑j vij √γj Lj,
with v representing a unitary matrix. It is also invariant under inhomogeneous transformations in the form

Li → Li′ = Li + ai 𝟙,
H → H′ = H + (1/2i) ∑j γj (aj* Lj − aj Lj†) + b 𝟙,

where ai ∈ ℂ and b ∈ ℝ.
Thanks to the previous properties, it is possible to find traceless jump operators without loss of generality.
Example 6: A master equation for a two-level system with decay.
Continuing our example of a two-level atom, we can make it more realistic by including the possibility of atom decay by the emission of a photon. This emission happens due to the interaction of the atom with the surrounding vacuum state. (This is why atoms decay.) The complete quantum system would be in this case the “atom + vacuum” system, and its time evolution would be given by the von Neumann equation (13), where H represents the total “atom + vacuum” Hamiltonian. This system belongs to an infinite-dimensional Hilbert space, as the radiation field has infinitely many modes. If we are interested only in the time dependence of the state of the atom, we can derive a Markovian master equation for the reduced density matrix of the atom (see, for instance, Refs. 1 and 2). The master equation we will study is

dρ(t)/dt = −i[H, ρ(t)] + Γ [σ⁻ ρ(t) σ⁺ − (1/2){σ⁺σ⁻, ρ(t)}],  (96)
where Γ is the coupling between the atom and the vacuum, and σ⁻ ≡ |0⟩⟨1| and σ⁺ ≡ |1⟩⟨0| are the atomic lowering and raising operators.
Expressing explicitly the set of differential equations, we obtain

dρ00/dt = iΩ(ρ01 − ρ10) + Γρ11,
dρ01/dt = iΩ(ρ00 − ρ11) − (Γ/2)ρ01,
dρ10/dt = −iΩ(ρ00 − ρ11) − (Γ/2)ρ10,
dρ11/dt = −iΩ(ρ01 − ρ10) − Γρ11.  (97)
End of example.
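The master equation of this example is easy to integrate numerically. As a rough Python counterpart of the Mathematica notebooks in Ref. 20 (all names and parameter values below are our own illustration), the following sketch propagates the pure-decay case Ω = 0 and recovers the expected exponential decay of the excited-state population, ρ11(t) = e^{−Γt}:

```python
import numpy as np
from scipy.integrate import solve_ivp

Gamma, Omega = 1.0, 0.0   # pure decay, no coherent driving
H = Omega * np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^- = |0><1|
LdL = sm.conj().T @ sm                           # sigma^+ sigma^- = |1><1|

def rhs(t, y):
    """Lindblad right-hand side acting on the flattened density matrix."""
    rho = y.reshape(2, 2)
    d = -1j * (H @ rho - rho @ H) \
        + Gamma * (sm @ rho @ sm.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return d.flatten()

rho0 = np.diag([0.0, 1.0]).astype(complex)  # start excited, no coherences
sol = solve_ivp(rhs, (0.0, 3.0), rho0.flatten(),
                t_eval=[3.0], rtol=1e-10, atol=1e-12)
rho11 = np.real(sol.y[:, -1].reshape(2, 2)[1, 1])
print(np.isclose(rho11, np.exp(-Gamma * 3.0), atol=1e-6))  # True
```

Setting Ω ≠ 0 in the same code produces the damped Rabi oscillations of the driven-dissipative case.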
VI. RESOLUTION OF THE LINDBLAD MASTER EQUATION
To calculate the time evolution of a system determined by a master equation of the form (96), we need to solve a set of equations with as many variables as elements of the density matrix. In our example, this means solving a set of four coupled equations, but the dimension of the problem increases exponentially with the system size. Because of this, for bigger systems, techniques of dimension reduction are required.
There are several canonical algorithms to solve systems of differential equations. This can be done analytically only for a few simple systems, often by using sophisticated techniques such as damping bases.29 In most cases, we have to rely on approximate numerical methods. One of the most popular approaches is the fourth-order Runge-Kutta algorithm (see, for instance, Ref. 30 for an explanation of the algorithm). By integrating the equations of motion, we can calculate the density matrix at any time t.
The steady state of a system can be obtained by evolving it for a long time (t → ∞). Unfortunately, this method presents two difficulties. First, if the dimension of the system is big, the computing time is huge. This means that for systems beyond a few qubits, it will take too long to reach the steady state. Even worse is the problem of stability of the algorithms for integrating differential equations. Due to small errors in the calculation of derivatives by the use of finite differences, the trace of the density matrix may not remain equal to one. This error accumulates during the propagation of the state, giving nonphysical results after a finite time. One solution to this problem is the use of algorithms specifically designed to preserve the trace, such as the Crank-Nicolson algorithm.31 The problem with this kind of algorithm is that it consumes more computational power than Runge-Kutta, and therefore it is not useful to calculate the long-time behavior of big systems. An analysis of different methods and their advantages and disadvantages can be found in Ref. 32.

Example 7: Time dependency of the two-level system with decay.
In this section, we show some results of solving Eq. (96) and calculating the density matrix as a function of time. A Mathematica notebook solving this problem can be found in Ref. 20. To illustrate the time behavior of this system, we calculate the evolution for different parameter values. In all cases, we start with the atom in the excited state, ρ11 = 1, with no coherence between the states, meaning ρ01 = ρ10 = 0. If the decay parameter Γ is equal to zero, the problem reduces to solving the von Neumann equation, and the result is displayed in Fig. 2. The other extreme case is a system with no coherent dynamics (Ω = 0) but with decay. In this case, as shown in Fig. 4, we observe an exponential decay of the population of the excited state.
Finally, we can calculate the dynamics of a system with both coherent driving and decay. In this case, both behaviors coexist, and there are oscillations and decay. This is displayed in Fig. 5. End of example.
As we have discussed before, in the Fock-Liouville space, the Liouvillian corresponds to a matrix (in general complex, non-Hermitian, and nonsymmetric). By diagonalizing it, we can calculate both the time-dependent and the steady-state density matrices. For most purposes, in the short-time regime, integrating the differential equations may be more efficient than diagonalizing, due to the high dimensionality of the Liouvillian, which makes the diagonalization process very costly in computing power. On the other hand, to calculate the steady state, diagonalization is the most used method, due to the problems of integrating the equations of motion discussed in Sec. VI A.
Let us see first how we use diagonalization to calculate the time evolution of a system. As the Liouvillian matrix is non-Hermitian, we cannot apply the spectral theorem to it, and it may have different left and right eigenvectors. For a specific eigenvalue Λi, we can obtain the right and left eigenvectors |ΛiR⟩⟩ and ⟨⟨ΛiL| s.t.

𝓛̃ |ΛiR⟩⟩ = Λi |ΛiR⟩⟩,
⟨⟨ΛiL| 𝓛̃ = Λi ⟨⟨ΛiL|.
An arbitrary state can be expanded in the eigenbasis of 𝓛̃ as1,33

|ρ(0)⟩⟩ = ∑i ci |ΛiR⟩⟩, with ci = ⟨⟨ΛiL|ρ(0)⟩⟩ / ⟨⟨ΛiL|ΛiR⟩⟩.
Therefore, the state of the system at a time t can be calculated in the form

|ρ(t)⟩⟩ = ∑i ci e^{Λi t} |ΛiR⟩⟩.
Note that in this case, to calculate the state at a time t, we do not need to integrate over the interval [0, t], as we have to do if we use a numerical solution of the differential set of equations. This is an advantage when we want to calculate the long-time behavior. Furthermore, to calculate the steady state of a system, we can look at the eigenvector that has zero eigenvalue, as this is the only one that survives when t → ∞.
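This procedure can be sketched numerically for the two-level system with decay. The Python fragment below (our own illustrative counterpart of the Mathematica notebooks;20 the parameter values and the row-major vectorization convention are our choices) builds the Liouvillian in the Fock-Liouville space, picks the eigenvector of the (numerically) zero eigenvalue, and normalizes it to unit trace to obtain the steady state:

```python
import numpy as np

Gamma, Omega = 1.0, 0.5
H = Omega * np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^-
I2 = np.eye(2)

# Liouvillian in the FLS with row-major vec: vec(A X B) = (A kron B^T) vec(X).
def left(A):  return np.kron(A, I2)    # A acting from the left
def right(A): return np.kron(I2, A.T)  # A acting from the right
LdL = sm.conj().T @ sm
Liou = -1j * (left(H) - right(H)) \
       + Gamma * (np.kron(sm, sm.conj()) - 0.5 * (left(LdL) + right(LdL)))

# The steady state is the right eigenvector with zero eigenvalue.
vals, vecs = np.linalg.eig(Liou)
k = np.argmin(np.abs(vals))
rho_ss = vecs[:, k].reshape(2, 2)
rho_ss /= np.trace(rho_ss)             # fix the normalization Tr[rho] = 1

print(np.max(np.abs(Liou @ rho_ss.flatten())) < 1e-10)  # stationary: True
print(np.real(rho_ss[1, 1]))           # steady excited-state population
```

For these parameter values, the steady excited-state population agrees with the analytic expression 4Ω²/(Γ² + 8Ω²) that follows from setting the equations of motion to zero.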
For any finite system, Evans’ theorem ensures the existence of at least one zero eigenvalue of the Liouvillian matrix.34,35 The eigenvector corresponding to this zero eigenvalue is the steady state of the system. In exceptional cases, a Liouvillian can present more than one zero eigenvalue due to the presence of symmetries in the system.26,27,36 This is a nongeneric case, and for most purposes, we can assume the existence of a unique fixed point of the dynamics of the system. Therefore, diagonalization can be used to calculate the steady state without calculating the full evolution of the system. This can be done even analytically for small systems, and when numerical approaches are required, this technique gives better precision than integrating the equations of motion. The spectrum of Liouvillian superoperators has been analyzed in several recent papers.33,37

Example 8: Spectrum analysis of the Liouvillian for the two-level system with decay.
Here, we diagonalize (97) and obtain its steady state. A Mathematica notebook solving this problem can be downloaded from Ref. 20. This specific case is straightforward to diagonalize, as the dimension of the system is very low. We obtain 4 different eigenvalues: two of them are real, while the other two form a complex-conjugate pair. Figure 6 displays the spectrum of the superoperator given in (97).
As there is only one zero eigenvalue, we can conclude that there is only one steady state, and any initial density matrix will evolve to it after an infinite-time evolution. By selecting the right eigenvector corresponding to the zero eigenvalue and normalizing it to unit trace, we obtain the steady-state density matrix. This can be done even analytically. The result is the matrix

ρss = 1/(Γ² + 8Ω²) ( Γ² + 4Ω²   2iΩΓ
                     −2iΩΓ      4Ω² ).
End of example.
The author wants to acknowledge the Spanish Ministry and the Agencia Española de Investigación (AEI) for financial support under Grant No. FIS2017-84256-P (FEDER funds).
APPENDIX: PROOF OF dTr[ρ²]/dt ≤ 0 FOR HERMITIAN JUMP OPERATORS
In this Appendix, we prove that under the Lindblad dynamics given by Eq. (93), the purity of a density matrix fulfills dTr[ρ²]/dt ≤ 0 if all the jump operators of the Lindblad dynamics are Hermitian.
We start just by interchanging the trace and the derivative. As the trace is a linear operation, it commutes with the derivative, and we have

dTr[ρ²]/dt = Tr[dρ²/dt] = 2 Tr[ρ (dρ/dt)],  (A1)
where we have used the cyclic property of the trace operator. (This property is used throughout the demonstration without explicitly mentioning it.) By inserting the Lindblad Eq. (93) into the rhs of (A1), we obtain

dTr[ρ²]/dt = 2 Tr[ρ (−i[H, ρ])] + 2 ∑k γk (Tr[ρ Lk ρ Lk] − Tr[ρ² Lk²]).  (A2)
The first term is zero, as Tr[ρ[H, ρ]] = Tr[ρHρ] − Tr[ρ²H] = 0. Therefore, since γk ≥ 0, the inequality we want to prove becomes equivalent to

Tr[ρ Lk ρ Lk] ≤ Tr[ρ² Lk²], for each k.  (A3)
As the density matrix is Hermitian, we can diagonalize it to obtain its eigenvalues (Λi) and its corresponding eigenvectors (|Λi⟩). The density matrix is diagonal in its own eigenbasis and can be expressed as ρ = ∑i Λi |Λi⟩⟨Λi| (this eigenbasis changes with time, of course, but the proof is valid because the inequality should be fulfilled at any time),
where we assume an ordering of the eigenvalues in the form Λ0 ≥ Λ1 ≥ ⋯ ≥Λd.
We denote the matrix elements of the jump operators in this basis as Lijᵏ ≡ ⟨Λi|Lk|Λj⟩. Expanding the first term of the inequality (A3) in this basis, we obtain

Tr[ρ Lk ρ Lk] = ∑i,j Λi Λj xij,  (A4)
where we have introduced the coefficients xij ≡ |Lijᵏ|². As the operators Lk are Hermitian, these coefficients fulfill xij = xji ≥ 0.
The second term is expanded as

Tr[ρ² Lk²] = ∑i,j Λi² xij,  (A5)
where we have used the closure relation in the density matrix eigenbasis, ∑i |Λi⟩⟨Λi| = 𝟙. The inequality (A3) can now be written as

∑i,j Λi Λj xij ≤ ∑i,j Λi² xij.  (A6)
As xij = xji, we can reorder the sum over i, j in the following way (the diagonal terms i = j cancel on both sides):

∑i<j (2 Λi Λj) xij ≤ ∑i<j (Λi² + Λj²) xij.  (A7)
Therefore, we can reduce the proof of this inequality to the proof of a set of inequalities, one for each pair (i, j),

2 Λi Λj xij ≤ (Λi² + Λj²) xij,

which hold trivially because xij ≥ 0 and Λi² + Λj² − 2 Λi Λj = (Λi − Λj)² ≥ 0. This completes the proof.
For this discussion, we use a bipartite system as an example. The extension to a general multipartite system is straightforward.
This is clear for finite systems because, in this case, a linear space equipped with a scalar product is already a Hilbert space. It also happens to be true for infinite spaces.
From now on, we will not write the identity operators of the Hamiltonian parts explicitly when they can be inferred from the context.