Classical data analysis requires computational efforts that become intractable in the age of Big Data. An essential task in time series analysis is the extraction of physically meaningful information from a noisy time series. One algorithm devised for this purpose is singular spectrum decomposition (SSD), an adaptive method that allows for the extraction of narrow-banded components from non-stationary and non-linear time series. The main computational bottleneck of this algorithm is the singular value decomposition (SVD). Quantum computing could provide a speedup in this domain through superior scaling laws. We propose quantum SSD by assigning the SVD subroutine to a quantum computer. The viability and performance of implementing this hybrid algorithm on a near-term hybrid quantum computer are investigated. In this work, we show that by employing randomized SVD, we can impose a qubit limit on one of the circuits to improve scalability. Using this, we efficiently perform quantum SSD on simulations of local field potentials recorded in brain tissue, as well as on GW150914, the first detected gravitational wave event.

A recently proposed approach to classical time series analysis is singular spectrum decomposition (SSD).1 SSD aims to decompose non-linear and non-stationary time series into physically meaningful components. In stark contrast to a Fourier decomposition, which employs a basis set of harmonic functions, SSD prepares an adaptive basis set of functions whose elements depend uniquely on the time series. Applications include, but are not limited to, sleep apnea detection from an unprocessed electrocardiogram (ECG)2 and the processing and analysis of brain waves.3,4 In this work, we also cover the application to gravitational wave analysis. The latter is interesting because real-time data analysis poses a challenge there and will be vital for next generations of gravitational wave detection technology. The main bottleneck for scaling this algorithm to larger time series is the singular value decomposition (SVD) that is performed in each subroutine. For general m × n matrices, the classical SVD algorithm has a computational complexity of O(mn min(m, n)).5

Certain matrix operations are more efficiently performed on a quantum computer, raising the question whether a quantum speedup can be obtained for SVD applications. Currently, quantum computing is in the Noisy Intermediate Scale Quantum (NISQ) era.6 While some forms of error mitigation have been proposed, noise still limits the number of qubits and the fidelity of gate operations, so that current devices fall short of full-fledged quantum computers running quantum error correction codes.7 Despite these shortcomings, hybrid quantum computers running variational quantum algorithms provide a useful platform for classically taxing calculations, one major example being the variational quantum eigensolver (VQE) for quantum chemistry.8–10 Hybrid algorithms complement the strengths of both processors: classically efficient tasks are performed on a classical computer, while unitary calculations such as quantum state preparation are handed to a quantum computer. Because of the iterative nature of variational algorithms, noise can be mitigated at every step, increasing the robustness of the algorithm.

The SVD subroutine can be performed on a quantum computer through the variational quantum singular value decomposition (VQSVD) algorithm.11 Beyond its application to SSD, SVD is a compelling application of hybrid quantum computing in general because of its versatile utility. In a similar fashion to VQE, one can train two parameterizable quantum circuits (PQCs) to learn the left and right singular vectors of a matrix and sample the singular values through Hadamard tests. In this work, we explore both a gate-based paradigm,9 where the parameterization is facilitated through rotational quantum gates, and a pulse-based paradigm,12 where we optimize laser pulse profiles on a neutral atom quantum computing platform. The latter is derived from quantum optimal control (QOC) theory and is thought to be more NISQ-efficient. Other quantum algorithms for SVD have been proposed,13,14 but they require fault-tolerant resources such as quantum random access memory or oracles.

Our aim is to convert SSD into a variational hybrid quantum-classical algorithm called quantum singular spectrum decomposition (QSSD). Here, we pose the question whether quantum computing can provide an inherent computational speedup for SVD by surpassing the classical complexity. We employ randomized SVD to improve scalability of the algorithm.15 The classical theory behind SSD is introduced in Sec. II. The implementation of QSSD on a hybrid quantum computer is elaborated on in Sec. III, using a gate-based approach, while in Sec. IV we discuss a pulse-based approach. Simulation results for three different time series, among which the first observation of a gravitational wave, are presented in Sec. V. We conclude with a summary and discussion about the viability of QSSD in Sec. VI.

The SSD algorithm is based on the singular spectrum analysis (SSA) method, whose basic ideas are presented in Appendix A.1,16 SSA is designed to explore the presence of specific patterns within a time series, while SSD is designed to provide a decomposition into its constituent components. Let x(n) = {x_1, x_2, …, x_{N−1}, x_N} be an N-dimensional string of numbers, sampled at a uniform sampling frequency F_S from some time series. We are interested in extracting (physically) meaningful components from the signal. By embedding the signal in a matrix, correlation functions between pairs of sample points can be extracted through a singular value decomposition of said matrix. Several types of components can be found, such as trends, oscillations, and noise. In SSD, this identification is completely automated by focusing on extracting narrow frequency band components.

The time series is first embedded in a trajectory matrix; however, in comparison with SSA, the number of columns is extended to N and the time series is wrapped around, yielding
(1)
It is to be understood that every subscript is mod N. The singular value decomposition (SVD) of such a matrix yields
X = Σ_i σ_i u_i v_i^†,  (2)
where σ_i is the ith singular value, and u_i and v_i are the corresponding left and right singular vectors. As explained above, SSD aims to find physically meaningful components g^{(j)}(n) through a data-driven and iterative approach. Starting from the original signal x(n), at each iteration a component g^{(j)}(n) is extracted, such that a residue
(3)
remains, where x(n) ≝ v^{(0)}(n).

For iteration j = 1:

The power spectral density (PSD) of the signal is calculated. The frequency f_max associated with the dominant peak in the PSD is then estimated. If the ratio criterion (f_max/F_S) < δ is met, a trend is said to characterize the spectral contents of the residual. Reference 1 sets δ = 10^{-3}. In such a case, the embedding dimension is set to M = ⌈N/3⌉, as described in Ref. 17. The first component g^{(1)}(n) is then extracted from Hankelization of the matrix X_1 = σ_1 u_1 v_1^†. Since most of the energy must be contained within the trend component, only one singular value is required for its reconstruction. If the ratio criterion is not met, the window length is defined by M = ⌈1.2 (F_S/f_max)⌉.1
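As a concrete illustration of the first iteration described above, the following minimal Python sketch (not the authors' implementation; the function names are ours, and the ceiling rounding of M is an assumption) builds the wrap-around trajectory matrix of Eq. (1) and applies the trend criterion to choose the embedding dimension.

```python
# Minimal sketch of the j = 1 step: wrap-around embedding and trend criterion.
import numpy as np
from scipy.signal import periodogram

def wraparound_trajectory(x, M):
    """Build the M x N trajectory matrix with all indices taken mod N (Eq. (1))."""
    N = len(x)
    return np.array([[x[(i + j) % N] for j in range(N)] for i in range(M)])

def choose_embedding(x, fs, delta=1e-3):
    """Estimate the dominant frequency and pick the embedding dimension M."""
    freqs, psd = periodogram(x, fs=fs)
    f_max = freqs[np.argmax(psd[1:]) + 1]        # dominant peak, skipping the DC bin
    if f_max / fs < delta:                        # ratio criterion: a trend dominates
        M = int(np.ceil(len(x) / 3))
    else:
        M = int(np.ceil(1.2 * fs / f_max))
    return f_max, M

fs = 256
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)                     # illustrative 5 Hz test tone
f_max, M = choose_embedding(x, fs)
X = wraparound_trajectory(x, M)
```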

For iterations j ≥ 2:

For all iterations j ≥ 2, the embedding dimension is always set equal to M = ⌈1.2 (F_S/f_max)⌉. To extract meaningful components, only principal components with a dominant frequency in the band [f_max − δf, f_max + δf] are selected. In order to estimate the value of the peak width δf, the Levenberg–Marquardt (LM) algorithm is used. First, the PSD of the residual is approximated with a sum of three Gaussian functions
(4)
where A is the vector of amplitudes, σ is the vector of Gaussian peak widths, and μi are the peak locations, given by
(5)
where f2 is the frequency at which the second most dominant peak is situated. Starting the algorithm from the initial state vector components
(6)
(7)
the algorithm will eventually converge toward the optimal A and σ. The peak width is then given by δf = 2.5 σ_1. Finally, the component g^{(j)}(n) is rescaled with a factor a, so that g̃^{(j)}(n) = a g^{(j)}(n). Such a rescaling ensures that the variance of the residue is minimized. Solving for
(8)
yields a = [g^{(j)}, v^{(j−1)}] / [g^{(j)}, g^{(j)}].1 The decomposition stops when the energy of the remainder has been reduced below a predefined threshold (by default 1% of the original signal energy), ensuring proper convergence of SSD. This corresponds to the criterion
(9)
so that the reconstructed signal reads
(10)
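A minimal sketch of the rescaling and stopping steps described above, assuming the inner products in Eq. (8) are ordinary Euclidean dot products and that the 1% threshold of Eq. (9) is taken relative to the original signal energy:

```python
# Per-iteration rescaling a = [g, v]/[g, g] and the residual-energy stopping test.
import numpy as np

def rescale_component(g, v_prev):
    """Scale g such that the variance of the residue v_prev - a*g is minimized."""
    a = np.dot(g, v_prev) / np.dot(g, g)
    return a * g

def ssd_converged(v_res, x, threshold=0.01):
    """Stop once the residual carries less than `threshold` of the signal energy."""
    return np.sum(v_res ** 2) / np.sum(x ** 2) < threshold
```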
The singular value decomposition (SVD) of a general m × n matrix X ∈ ℂ^{m×n} is given by
X = U Σ V†,  (11)
where U ∈ ℂ^{m×m} and V ∈ ℂ^{n×n} are unitary square matrices, and Σ = diag(σ_1, σ_2, …, σ_r) ∈ ℝ^{m×n} is a diagonal matrix with all r = rank(X) singular values ordered from highest to lowest on the diagonal. Through a suitable loss function, Wang et al.11 have shown that this decomposition can be implemented on a quantum computer by training two quantum circuits to learn the unitary matrices U(α) and V(β), parameterized by two sets of trainable parameters α and β. The flowchart diagram for performing the SVD in a hybrid way is given in Fig. 1. For real matrices, U and V will be orthogonal, which can be efficiently prepared on a quantum computer since SO(N) ⊂ SU(N). We focus on this particular problem since data matrices are real. Through inversion of Eq. (11), one finds that the singular values can be extracted from
σ_i = ⟨u_i| X |v_i⟩,  (12)
where ⟨u_i| is the Hermitian conjugate of the ith column of U, and |v_i⟩ is the ith column of V. The singular values and their associated left and right orthonormal singular vectors (together called eigentriples) allow for a reconstruction of the original matrix X through
X = Σ_{i=1}^{r} σ_i |u_i⟩⟨v_i|.  (13)
It is desirable to keep only the first T eigentriples such that they approximate the matrix X to precision ϵ. Suppose that X^{(T)} = Σ_{i=1}^{T} σ_i |u_i⟩⟨v_i|; then we find through the Cauchy–Schwarz inequality that the Frobenius norm of the difference is bounded by the sum of the squared neglected singular values according to
(14)
In particular, the Eckart–Young theorem states that this truncation minimizes the error norm and is, therefore, the optimal rank-T approximation.18
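The Eckart–Young statement is easy to verify numerically; the following sketch (our own check on an arbitrary random matrix) confirms that the Frobenius error of the rank-T truncation equals the root of the sum of the squared neglected singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 64))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

T = 8
X_T = (U[:, :T] * s[:T]) @ Vt[:T, :]              # keep the first T eigentriples
err = np.linalg.norm(X - X_T, ord='fro')
bound = np.sqrt(np.sum(s[T:] ** 2))               # neglected singular values
print(err, bound)                                  # agree up to floating-point error
```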
In order to prepare inner products of the form (12) on a quantum computer, we must perform a unitary decomposition, known as a linear combination of unitaries (LCU), of the trajectory matrix,
(15)
for K unitaries e_i. Because of the non-square nature of X, we delay the discussion of the implementation of this decomposition to Sec. III F. This set of unitaries is closely related to Pauli operator strings of the form σ_{i_1} ⊗ σ_{i_2} ⊗ ⋯, but different choices are possible. Throughout this paper, we adopt the notation i_j ∈ {0, 1, 2, 3} = {0, x, y, z} and
(16)
Since m-qubit states are elements of a 2^m-dimensional Hilbert space ℋ_m = span{|0⟩, |1⟩}^{⊗m} ≅ ℂ^{2^m}, the dimensions of the trajectory matrix must be forced to equal powers of 2. The optimal dimensions of the trajectory matrix X, as dictated by SSD, do not necessarily satisfy these constraints, so we slightly alter the trajectory matrix dimensions for a VQSVD implementation. First, the optimal embedding dimension is rounded to the nearest power of two. Then, the number of columns is increased to the next power of two through periodic continuation, where the signal is artificially extended through a wrap-around. Defining
(17)
the qubit numbers of the U(α) and V(β) circuits are given by q_m and q_n = ⌈log_2(N)⌉, respectively. The trajectory matrix then takes the form
(18)

Again, it is to be understood that every index is mod N. While not obeying the optimal SSD dimensions, this definition makes the most use of the space allowed by the dimensionality restrictions. Additionally, this definition reduces the number of unitaries U_i required for the LCU in Eq. (15), compared to embedding the trajectory matrix in a large null matrix.
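A short sketch of the dimension adjustment just described (our own helper names; the rounding conventions are assumptions consistent with the text): the row dimension is rounded to the nearest power of two and the columns are extended to the next power of two by wrapping the signal around.

```python
import numpy as np

def power_of_two_dims(M, N):
    qm = int(round(np.log2(M)))        # nearest power of two for the rows (rounded in the exponent)
    qn = int(np.ceil(np.log2(N)))      # next power of two for the columns
    return 2 ** qm, 2 ** qn, qm, qn

def padded_trajectory(x, M):
    """Trajectory matrix of Eq. (18): 2^qm x 2^qn, every index taken mod N."""
    N = len(x)
    M2, N2, qm, qn = power_of_two_dims(M, N)
    X = np.array([[x[(i + j) % N] for j in range(N2)] for i in range(M2)])
    return X, qm, qn
```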

To obtain a more economic SVD routine, we approximate the SVD of the m × n trajectory matrix X in a low-rank fashion, such that we maintain only the first k singular values,
(19)
where Σ_k is the singular value matrix, with σ_p = 0 for all p > k. The computational complexity is then O(mnk), which is lower than the conventional O(mn min(m, n)) for k ≪ n. By performing this low-rank approximation, we hope to improve the scaling laws of the QSSD algorithm and mitigate the two major problems that persist within the theory laid out by Ref. 11: the input problem and the output problem. The former refers to the fact that the decomposition of a matrix in a Pauli basis requires a basis set that grows exponentially in the number of qubits. The latter refers to the read-out of the quantum circuits to obtain the matrices U and V, which is also an operation whose computation time is exponential in the number of qubits.
First, we aim to find an optimal basis Q with as few columns as possible, such that19 
(20)
This optimal basis can be approximated through a randomized matrix decomposition. Multiplication of the trajectory matrix by a random matrix
(21)
where the vectors ω_i are drawn from the standard normal distribution N(μ = 0, σ = 1), yields Y = XΩ. This allows for a random sampling of range(X). Performing a QR-decomposition on Y, and trimming the Q-matrix to a size Q ∈ ℝ^{max(m,n) × k}, gives the optimal basis Q. This matrix encodes a basis transformation that lifts X to a lower-dimensional space where the SVD is approximated up to the kth singular value. Constructing this transformation gives B = Q†X, whose SVD yields
(22)

We can establish a natural qubit cutoff number that defines a low-dimensional space onto which our trajectory matrix is projected. Because we focus on narrow frequency bands in the PSD, we can anticipate that we do not need many singular values to reconstruct a signal. Every dominant frequency component yields 2 singular values, and simulations have shown that to obtain a faithful reconstruction of the corresponding signal, we need at most on the order of 10–12 singular values. Therefore, we establish a dimensional cutoff k ≤ k_Λ = 16 with associated qubit cutoff q = 4. Henceforth, we shall denote the reduced trajectory matrix as X.
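For reference, a compact randomized-SVD sketch in the spirit of Ref. 15 is given below, using the cutoff k_Λ = 16 discussed above; the helper name and the fixed random seed are illustrative.

```python
import numpy as np

def randomized_svd(X, k=16, seed=0):
    """Rank-k randomized SVD: sample range(X), project, and decompose the small matrix."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k))        # random test matrix (Eq. (21))
    Y = X @ Omega                              # random sampling of range(X)
    Q, _ = np.linalg.qr(Y)                     # orthonormal basis, shape m x k
    B = Q.conj().T @ X                         # k x n projection of X
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_small, s, Vt                  # lift the left singular vectors back
```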

Fig. 1.

Flowchart representation of the hybrid VQSVD algorithm. As an input, we choose the trajectory matrix X, decomposed into a non-square basis provided in Sec. III F. Two parameterizable quantum circuits U ( α ) and V ( β ) are initiated and subsequently trained to learn the set of left and right singular vectors (blue block). To achieve this, a hybrid loop is initiated (orange block): on a quantum processing unit, Hadamard tests approximate the singular values, while a classical loss function optimization routine updates the variational parameters. Once the optimization procedure has hit a threshold convergence condition, all relevant singular vectors are read out by sending in computational basis states through U ( α ) and V ( β ) to extract all columns, and all estimated singular values σj are given by the final set of Hadamard tests. In Sec. IV, we employ a pulse-based approach to approximate the left and right singular matrices (blue block).

We prepare two quantum circuits that simultaneously learn to approximate the matrices of left [U(α)] and right [V(β)] singular vectors of the randomized trajectory matrix X. Let N and U(θ) represent the qubit number and the unitary circuit representation of either circuit, agnostic as to which one we refer to. For both circuits, we initialize the qubits in the vacuum state |ψ_0⟩ = |vac⟩ ≝ |0⟩^{⊗N} and employ the hardware efficient ansatz (HEA),
(23)
where D represents the depth of the circuit, U_ent ∈ SU(2^N) is a general entanglement operator over at most N qubits, and R_Y(θ) is a rotation operator generated by the σ_y Pauli operator,
R_Y(θ) = e^{−iθσ_y/2}.  (24)
We opt for this ansatz rather than a set of Euler rotations around the Bloch sphere, since this approach generates real amplitudes, which are sufficient for the construction of orthogonal matrices, while requiring a lower-dimensional parameter space. We choose a CNOT chain as our entanglement operation,
(25)
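To make the ansatz concrete, the following plain-numpy sketch builds one possible layer ordering of the hardware efficient ansatz with R_Y rotations and a CNOT chain; it is an illustration rather than the exact circuit of Eq. (23), and the layer convention (a rotation layer before each entangler plus a final rotation layer) is an assumption.

```python
import numpy as np

def ry(theta):
    """R_Y(theta) = exp(-i * theta * sigma_y / 2); real-valued, as discussed above."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot_chain(n):
    """Chain of CNOTs on qubit pairs (0,1), (1,2), ..., (n-2, n-1)."""
    dim = 2 ** n
    U = np.eye(dim)
    for ctrl in range(n - 1):
        C = np.zeros((dim, dim))
        for b in range(dim):
            bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
            if bits[ctrl] == 1:
                bits[ctrl + 1] ^= 1
            C[sum(bit << (n - 1 - q) for q, bit in enumerate(bits)), b] = 1.0
        U = C @ U
    return U

def hea(thetas, n, depth):
    """thetas has shape (depth + 1, n): rotation layers separated by CNOT-chain entanglers."""
    U = kron_all([ry(t) for t in thetas[0]])
    ent = cnot_chain(n)
    for d in range(1, depth + 1):
        U = kron_all([ry(t) for t in thetas[d]]) @ ent @ U
    return U
```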
Knowing the columns of U and V, the maximum singular value is retrieved from
(26)
where S_i, i ∈ {m, n}, is the set of normalized vectors of length m and n, respectively. Other singular values are retrieved in a similar fashion, where the vectors |u⟩ and |v⟩ are restricted to be orthogonal to the previous ones. This yields
(27)
where
(28)
Through the Ky Fan theorem,20 we arrive at a suitable loss function for VQSVD, given by a linear combination of the measured singular values,
(29)
where w_1 > w_2 > ⋯ > w_T > 0 form a set of ordered real-valued weights, and {|ψ_j^{(U)}⟩}_{j=1}^{T} and {|ψ_j^{(V)}⟩}_{j=1}^{T} are sets of orthonormal vectors that select the appropriate columns of U and V, respectively. Here, Wang et al.'s linear parameterization w_j = T + 1 − j is adopted.11 Naturally, optimization aims to find the optimal α, β such that
(30)
The terms ℜe{⟨ψ_j^{(U)}| U†(α) X V(β) |ψ_j^{(V)}⟩} can be measured on a quantum computer through a Hadamard test, provided the matrix X is decomposed into a basis of unitary operators {e_i}_i such that X = Σ_i x_i e_i.
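Since the simulations in this work assume full access to the relevant matrix elements, the Hadamard-test outcomes can be emulated classically by evaluating these real parts directly. A minimal sketch of the resulting Ky Fan loss, with the sign chosen so that a smaller value is better (an assumption about the optimizer convention):

```python
import numpy as np

def vqsvd_loss(U, V, X, T):
    """Negative weighted sum of the measured values Re<u_j| X |v_j>, with w_j = T + 1 - j."""
    weights = np.array([T + 1 - j for j in range(1, T + 1)])
    m_vals = np.array([np.real(U[:, j].conj() @ X @ V[:, j]) for j in range(T)])
    return -np.sum(weights * m_vals)
```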
Quantum computing usually deals with square matrices, since these are the natural dimensions of operators on a Hilbert space. Decomposing the trajectory matrix, however, requires us to find a decomposition in a non-square basis. We can do so if we relax the condition of unitarity of U to
(31)
so that we find a non-square pseudo-unitary basis decomposition. Because we adopted new optimal dimensions for the trajectory matrix to conform to quantum computing, we can construct a pseudo-Pauli basis,
(32)
where P_i is the set of all possible Pauli operators of string length q_m, and O is the 2^{q_m} × 2^{q_m} null matrix. It is readily shown that every e_i is pseudo-unitary and that, together, they form a complete basis for any 2^{q_m} × 2^{q_n} trajectory matrix, so that we can decompose X according to
(33)
where the expansion coefficients read
(34)
Implementation of these basis elements in a Hadamard test still requires square matrices. We employ pseudo-unitary continuation to implement these elements by extending them to square matrices that have a simple tensor decomposition in a square Pauli basis, a protocol provided in Appendix B. Because the pseudo-Hadamard tests employ matrices U and V of different sizes, operating on different numbers of qubits, we must also alter the controlled-circuit subroutine of the Hadamard test. This is displayed in Fig. 2. U must be altered through
(35)
because it acts only on the last q_m qubits.
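The pseudo-Pauli decomposition can be emulated classically as follows; this sketch places each q_m-qubit Pauli string in one column block and zeros elsewhere, and uses a Hilbert–Schmidt normalization that we assume is consistent with Eq. (34).

```python
import itertools
import numpy as np

PAULIS = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def pauli_string(indices):
    P = np.array([[1.0 + 0j]])
    for i in indices:
        P = np.kron(P, PAULIS[i])
    return P

def pseudo_pauli_decompose(X, qm, qn):
    """Expansion coefficients of a 2^qm x 2^qn matrix in the pseudo-Pauli basis."""
    coeffs = {}
    for idx in itertools.product(range(4), repeat=qm):
        P = pauli_string(idx)
        for block in range(2 ** (qn - qm)):
            e = np.zeros((2 ** qm, 2 ** qn), dtype=complex)
            e[:, block * 2 ** qm:(block + 1) * 2 ** qm] = P
            coeffs[(idx, block)] = np.trace(e.conj().T @ X) / 2 ** qm
    return coeffs
```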
Fig. 2.

The circuit to perform a Hadamard test as given in Ref. 11 (a) versus the Hadamard test circuit we employed for non-square matrix decompositions (b). H represents the Hadamard gate, and the number of qubits in the bottom register equals q_n. e_j^cont is the pseudo-unitary continuation of the basis element e_j, as given in Appendix B.

For the classical optimization routine, we employ an optimizer based on analytical gradients with adaptive Armijo conditions. Such optimizers circumvent finite-difference inaccuracies by sampling the loss function directly at shifted points in parameter space (the parameter-shift rule).21 The gradient of the loss function L is given by its entries
(36)
(37)
where e_μ and e_ν are unit vectors along the α_μ and β_ν directions in parameter space. A brief proof is provided in Appendix C. At every iteration k, we update the variational parameters along the gradient. The step size η ∈ (0, 1) is determined at every iteration and adapts to the loss function evaluation as well as to past iterations. Generally, the parameter update reads
(38)
At every iteration, we check for the first Wolfe condition, given by
(39)
for some control parameter μ ∈ (0, 1). If the condition is satisfied, we keep the step size equal to η; otherwise, the step size is halved and the condition is checked again. Additionally, we introduce a tracker mechanism τ that can rescale the step size depending on past steps. Initially set to 0, the tracker is updated through τ → τ + 1 if the Wolfe condition is met; otherwise, we set τ → τ − 1. If the tracker hits a threshold ±τ*, it is reset to 0, and the step size is doubled or halved, respectively. Throughout our work, we adopt an initial step size η_0 = 10^{-6}, a control parameter μ = 10^{-4}, and a threshold τ* = 3.
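A sketch of one step of this adaptive Armijo scheme is given below (our own pseudocode-style helper; the gradient is assumed to come from the parameter-shift expressions (36) and (37), and a maximum number of halvings is added as a safeguard).

```python
import numpy as np

def armijo_step(loss, grad, theta, eta, tau, mu=1e-4, tau_star=3, max_halvings=30):
    """One parameter update with the first Wolfe condition and the step-size tracker."""
    g = grad(theta)
    L0 = loss(theta)
    for _ in range(max_halvings):
        theta_new = theta - eta * g
        if loss(theta_new) <= L0 - mu * eta * np.dot(g, g):   # first Wolfe condition (Eq. (39))
            tau += 1
            break
        eta *= 0.5                                            # halve and try again
        tau -= 1
    else:
        theta_new = theta                                     # no acceptable step found
    if abs(tau) >= tau_star:                                  # tracker threshold reached
        eta = 2 * eta if tau > 0 else 0.5 * eta
        tau = 0
    return theta_new, eta, tau
```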
In order to reconstruct Hankel matrices from Σ_i σ_i |u_i⟩⟨v_i|, we must perform qubit state tomography over both circuits. Each column of the circuit matrix representation can be read out by sending the orthonormal set {|ψ_i⟩}_i through the circuits and sampling all Pauli moments. Such an approach is known to require a number of samples that is exponential in the number of qubits. Matrix elements of the form (12), however, are not protected against global phase invariance. In contrast, the expectation value of a Hermitian operator H,
⟨H⟩ = ⟨ψ| H |ψ⟩,  (40)
is invariant under |ψ⟩ → e^{iφ}|ψ⟩ for arbitrary phases φ. The columns of U and V allow relative phase differences of k·π for k ∈ ℤ, so that VQSVD finds local maxima where the singular values can be measured to be
(41)
If the ith singular value is measured to be negative, the sign needs to be flipped, and the ith column of U gains an additional minus sign as well.

In addition to the gate paradigm, variational quantum optimal control (VQOC)12 offers an alternative approach to searching through a Hilbert space ℋ (see Ref. 22 for a review of QOC in general). Rather than optimizing parameterized gates, we directly optimize the physical controls through which quantum operations are implemented. It is believed that for highly entangled systems, VQOC is able to outperform VQE in terms of accuracy when resources are comparable. This is a result of the higher expressibility of optimal control, which one can demonstrate by showing that a set of randomly initialized pulses produces unitary operations that resemble the Haar distribution more closely than a set of randomly initialized gates in the hardware efficient ansatz.23

In this paper, we consider application to a neutral atom quantum computer realized with a lattice of trapped Rydberg atoms, whose electronic state manifold offers an embedding of the qubit states |0⟩ and |1⟩. Consider two linear arrays of, respectively, q_m and q_n atoms, which are treated as two-level systems where the |0⟩-state is mapped to a hyperfine electronic ground state and the |1⟩-state is mapped to a Rydberg state. Qubit states are addressed by monochromatic laser pulses, which we aim to optimize in order to approximate the target operations through the propagators U(t) ∈ L(ℋ_{q_m}) and V(t) ∈ L(ℋ_{q_n}). For a total evolution time T and t ∈ (0, T), the circuit unitaries are subject to the propagator Schrödinger equations,
(42)
(43)
Here, H_i, i ∈ {U, V}, are the total Hamiltonians that drive the evolution of the qubit states. On a neutral atom quantum computing platform, the native qubit Hamiltonian reads
(44)
where Hd is the drift Hamiltonian, which represents “always-on” interactions in the system, and H c [ Z ( t ) ] is the control Hamiltonian, parameterized by the controls Z(t) we impose on our system. For a linear array of N qubits and L controls, we find the van der Waals interaction
(45)
and the control Hamiltonian
(46)
Here, C_6 characterizes the Rydberg–Rydberg interaction strength, R is the interatomic distance between neighboring qubits, and Q_l are the control operators associated with our control parameters. In this work, we control the Rabi frequency and laser phase |Ω(t)|e^{iφ(t)}, and the detuning Δ of the |1⟩-state, so that the controls are given by
(47)
These controls parameterize both circuits by modeling the pulses as a set of T/δt time-discretized control pulses of duration δt. The physical implementation results from smoothing out the piecewise-constant equidistant pulses at a minor loss of fidelity. We aim to optimize a new loss function,
(48)
by adjusting (29) to penalize strong controls for λ > 0. We enforce the dynamical constraints by adding time-dependent Lagrange multiplier terms to L_QOC,
(49)
(50)
where the multipliers are called adjoints, the inner product is given by
(51)
and Ω_N = L²([0, T], ℋ_N). To ensure a realistic implementation, we restrict our controls to the space Z_ad of admissible pulses,
(52)
In Appendix D, we show that it is possible to optimize this Lagrangian under these constraints by sampling the necessary quantities on a quantum computer, omitting the proof of existence and uniqueness of the solutions. Our update scheme at iteration k becomes
(53)
for the U-controls, and likewise for the V-controls. For optimization, we adopt the same Armijo optimizer protocol, checking the first Wolfe condition at every step. In Appendix D, we also explicitly solve for the adjoints. In conclusion, the VQOC paradigm requires solving the propagator Schrödinger equations (42) and (43), the adjoint equations (D14) and (D15), and the analytical gradient (53).
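As a rough illustration of the forward problem in Eqs. (42)–(46), the following QuTiP sketch propagates one register under piecewise-constant Rabi, phase, and detuning controls. The parameter values and the exact form of the drive term are illustrative assumptions in natural units (ℏ = C_6 = 1); the admissible-pulse constraints of Eq. (52) are not enforced here.

```python
import numpy as np
import qutip as qt

N, R, C6 = 2, 1.0, 1.0                          # register size and lattice parameters (assumed)
n_op = (qt.qeye(2) - qt.sigmaz()) / 2            # |1><1| projector

def embed(op, site, N):
    ops = [qt.qeye(2)] * N
    ops[site] = op
    return qt.tensor(ops)

# Drift: nearest-neighbour van der Waals interaction between Rydberg states
H_d = sum(C6 / R**6 * embed(n_op, i, N) * embed(n_op, i + 1, N) for i in range(N - 1))

def control_hamiltonian(omega, phi, delta):
    """Rabi drive with phase and detuning on every qubit (one possible reading of Eq. (46))."""
    H = 0
    for i in range(N):
        sm = embed(qt.destroy(2), i, N)                      # |0><1|
        H += 0.5 * omega * (np.exp(1j * phi) * sm.dag() + np.exp(-1j * phi) * sm)
        H -= delta * embed(n_op, i, N)
    return H

T, steps = 10.0, 100
dt = T / steps
pulses = np.random.default_rng(0).uniform(0, 1, size=(steps, 3))   # columns: |Omega|, phi, Delta

U = qt.qeye([2] * N)
for omega, phi, delta in pulses:
    U = (-1j * (H_d + control_hamiltonian(omega, phi, delta)) * dt).expm() * U
```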

To benchmark the performance of QSSD, several simulations have been run using a classical emulator of a noise-free quantum circuit. For efficiency, we assume full access to all relevant matrix elements, without resorting to sampling amplitudes through measurements. We compare gate-based and pulse-based results to the retrieved classical SSD components for a variety of applications, though we stress that, because of the inherent inefficiency of the algorithm, we have not attempted to make a fair comparison between the two implementations. To draw a fair contrast, both implementations would need to be run on equivalent evolution circuits with equivalent resources: a similar circuit evolution time T, equivalent controls (such as Bloch sphere rotational control only), and a number of quantum evaluations #QE_VQOC that scales linearly in the quantum resources compared to #QE_VQE.

In this work, we adopted the hardware efficient ansatz for the gate-based circuits, with depth D = 10 and 1000 iterations, and we assumed full control for the pulse-based circuits, in natural units where ℏ = 1 and C_6 = 1, for a total evolution time T = 10, a time discretization T/δt = 100, and 150 iterations. We aim to show that, with general resources, one can approximate the function space of the SVD reasonably well on a quantum computer, without employing equivalent evolution circuits. Note that the VQOC parameters are not fine-tuned for any implementation in particular. For a more realistic implementation on a neutral atom quantum computer, the parameters should lie within experimental bounds, and the space Z_ad should be adjusted accordingly.

We first apply QSSD to a simple toy example to showcase that properties of the SSD algorithm, such as pinpointing non-stationarity of a signal, carry over to the quantum algorithm. Then we apply QSSD to simulated cortical local field potentials (LFPs), electric potentials originating from the extracellular space in brain tissue, as well as GW150914 data, resulting from the first observation of a gravitational wave event. As a figure of merit, we introduce the error measure
(54)
where p(ω) is the PSD of the simulated individual band component, and p_SSD(ω) is the PSD of the SSD-recovered component. Λ_ω is the relevant frequency band of the PSD over which we integrate, and |Λ_ω| is its measure. This error focuses more closely on the frequency content of the retrieved components than an l²-norm would and is not cumulative over time due to small displacements.
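One plausible numerical reading of this error measure (the exact estimator and normalization of Eq. (54) are not reproduced here) uses Welch periodograms and a trapezoidal integral over the band Λ_ω:

```python
import numpy as np
from scipy.signal import welch

def ssd_error(component_true, component_rec, fs, band):
    """Band-averaged squared PSD mismatch between a reference and a recovered component."""
    f, p_true = welch(component_true, fs=fs, nperseg=256)
    _, p_rec = welch(component_rec, fs=fs, nperseg=256)
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz((p_true[mask] - p_rec[mask]) ** 2, f[mask]) / (band[1] - band[0])
```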
As a first proof of principle, we applied SSD to the following non-stationary toy signal, composed of two harmonic sinusoidal functions,
(55)
where θ(·) is the Heaviside function, for t ∈ [0, 1]. The classical SSD and QSSD results are presented and compared in Fig. 3. The figure shows a clear separation of the harmonic functions, while also pinpointing the moment when the non-stationary harmonic function begins. In Fig. 6, the error ϵ is graphed for both components, for all SSD variations.
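The exact frequencies and switch-on time of Eq. (55) are not reproduced here; a signal of the same type can be generated as follows, with the numerical values chosen purely for illustration.

```python
import numpy as np

fs = 256
t = np.arange(0, 1, 1 / fs)
# Stationary 10 Hz tone plus a 40 Hz tone switched on at t = 0.5 s via the Heaviside function
s = np.sin(2 * np.pi * 10 * t) + np.heaviside(t - 0.5, 1.0) * np.sin(2 * np.pi * 40 * t)
```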
Fig. 3.

An example of a non-stationary function s(t) composed of two sinusoidal components, in arbitrary units (a.u.), for t ∈ [0, 1]. The two original components are shown in (a), and the SSD components are found using classical SSD (b), gate-based QSSD (c), and pulse-based QSSD (d). A sampling frequency F_S = 256 Hz was chosen. The non-stationary character of the function is picked up by the algorithm in all variations of SSD.


Next, we applied SSD to simulated local field potentials (LFPs), which represent the relatively slowly varying temporal components of the neural signal, picked up within a few hundred microns of a recording electrode. Brain waves can be separated into characteristic frequency bands called alpha [8–12 Hz], beta [13–30 Hz], gamma [30–90 Hz], and theta [4–7 Hz] bands, corresponding to the frequency range in which their dominant frequency lies.1 We simulated these LFPs by superimposing four components, one in each of the aforementioned bands.24 Because the alpha and theta components have overlapping frequency content, we grouped them together, as SSD cannot distinguish them. In Fig. 4, we show that SSD is capable of disentangling the superimposed signal into its characteristic components within reasonable accuracy bounds. The corresponding errors are graphed in Fig. 6 for all SSD variations.

Fig. 4.

(a) The simulated LFP data, where the potential field strength (in arbitrary units) is plotted as a function of time for t ∈ [0, 1], at a sampling frequency of F_S = 256 Hz. The signal consists of four components, where the alpha and theta components are grouped together because their frequency ranges show strong overlap. The beta, gamma, and alpha–theta components are shown in (b)–(d). Every column is color coded to represent a different frequency component: beta (blue), gamma (red), and alpha–theta (green). The SSD recovered components are shown for classical SSD (e)–(g), gate-based QSSD (h)–(j), and pulse-based QSSD (k)–(m). The field strength units are arbitrary in (b)–(m), and time runs from t = 0 to t = 1 s, omitted for clarity.


Black holes and gravitational waves are characteristic predictions of general relativity and are therefore favorable probes of the theory. The waveforms of such gravitational waves are constrained by the underlying mechanisms, such as their generation and propagation. The analysis of such waves is crucial to the understanding of relativistic gravitational physics and may provide useful insights into black hole physics, the structure of neutron stars, supernovae, and the cosmic gravitational wave background. Gravitational wave analysis will likely become classically intractable in the age of Big Data and third-generation detectors such as the Einstein Telescope,25 and could greatly benefit from quantum data analysis algorithms. We applied SSD to gravitational wave analysis since it serves as an interesting addition to the classical gravitational wave data analysis pipeline: not only is it capable of providing waveforms of crisper quality, it is also able to filter out glitches, lowering the risk of a wrong signal-to-noise ratio (SNR) threshold exceedance.

To showcase the applicability of SSD, we applied it to the first measured gravitational wave event, GW150914, originating from a binary black hole merger consisting of a pair of black holes with inferred source masses M_1 = 29.1^{+3.7}_{−4.4} M_⊙ and M_2 = 36.2^{+5.2}_{−3.8} M_⊙.26,27 We have taken the raw data measured by the LIGO Hanford detector around the merger time, to which we applied noise whitening, band-passing, and notching to obtain an approximate gravitational waveform.28 Through SSD, we hope to separate the relevant information about the binary merger from any unwanted components or noise, thus obtaining a subset of SSD components that provides a crisper waveform. The results of SSD filtering are presented in Fig. 5, together with the respective time–frequency spectrograms. Since we are analyzing a real signal rather than a simulated one, we cannot fairly compare the quality of a component to an original benchmark, because we do not know a priori what the true gravitational wave signal should look like, given the right physical parameters. For this reason, we can only provide a figure of merit of the QSSD components with respect to the classical SSD components. The errors are given in Fig. 6, where the comparison with classical SSD is omitted for GW150914 for this reason.

Fig. 5.

(a) The GW150914 event gravitational wave strain amplitude, measured at the LIGO Hanford Observatory, after noise whitening, band-passing and notching (left), as described in Ref. 28, presented with its corresponding time–frequency spectrogram (right). Time is relative to the event time, on September 14, 2015 at 09:50:45 UTC. (c)–(g) The extracted gravitational wave components (left column) with their respective spectrograms (right column), obtained through classical SSD (c) and (d), gate-based QSSD (e) and (f), and pulse-based QSSD (g) and (h). For SSD calculations, the detector sampling frequency FS = 4096 Hz was chosen. The spectrograms were obtained by shifting Blackman windows of size 256 with a mutual overlap of 240 points. Results show that through SSD, we can obtain a signal with a higher quality chirp. The spectrogram amplitude is normalized with respect to the strongest amplitude present.

Fig. 6.

The error ϵ, given by Eq. (54) of the retrieved SSD components for all application examples. The error of the components of the non-stationary signal and the LFPs are with respect to the simulated signal. For GW150914, the error is taken with respect to the classical SSD components. Results are shown for classical SSD (light green), gate-based QSSD (green) and pulse-based QSSD (dark green). For general problem-agnostic quantum circuit resources, gate-based QSSD seems to provide more accurate signals than pulse-based QSSD, when the component closely resembles a simple harmonic function.


In this work, we have established a unified framework, quantum singular spectrum decomposition (QSSD), that combines the promising method of singular spectrum decomposition for classical time series analysis with the variational quantum singular value decomposition algorithm. In the NISQ era, variational algorithms are favorable over other types, because the quantum constituent is more robust against errors. The singular value decomposition subroutine is designed to be performed on quantum hardware, while the retrieval of the SSD components is done classically. Using analytical-gradient optimizers, we circumvent finite-difference errors and guarantee sufficient decrease of the loss at every iteration. SSD itself enhances protection against errors accumulating in the quantum algorithm. QSSD could find interesting applications in near-term gravitational wave analysis because of its variational formulation. Here, we demonstrated the algorithm by providing a crisper waveform for matched filtering.

In contrast to the classical algorithm, the quantum formulation does not have efficient access to all matrix elements of U and V. Unpacking the dense trajectory matrix into its unitary basis elements and reading out all the columns of U and V are two processes that require a number of quantum circuit runs exponential in the number of qubits. An efficient quantum algorithm would rely on sampling a few parameters that are readily measured on a quantum device. To mitigate the effects of these exponential scaling laws, we have adopted randomized SVD. While rendering QSSD more economic, the same trick can be applied to the classical algorithm, so it provides no inherent quantum speedup. An efficient rendition of QSSD would circumvent measuring |u_i⟩ and |v_i⟩ and instead use a different pipeline to read out the SSD components. For further research, we pose the question whether this is possible within this variational framework.

The quality of SSD is related to the quality of the singular vectors, not the singular values, as dictated by Eq. (8), since a rescaling is performed after every iteration to ensure optimal energy extraction. It is, therefore, crucial for the convergence of the quantum routine to select a circuit ansatz that can approximate the solution space well. For the gate-based approach, we have adopted the hardware efficient ansatz, and for the pulse-based approach, we have assumed full control. Quantum computing has the advantage of computing within an exponentially large Hilbert space, but the variational approach of NISQ-type algorithms gives rise to weak convergence in the absence of an effective model that leads to the required target states. Despite the HEA often being invoked as a standard method for quantum state preparation, there is a lack of understanding of its capabilities.29 Since generic data analysis problems do not come with an effective ansatz design or model, we pose the question whether an effective model can be established for specific time series applications.

In conclusion, it remains an open problem whether a true quantum speedup can be achieved through the method of QSSD. The necessity to use dense classical matrices in a quantum algorithm does not allow for polynomial scaling laws in the number of qubits, and requires measuring exponentially many quantum amplitudes. Other methods have been proposed that claim a quantum speedup for singular value related algorithms.13,14 The application of either of these algorithms would not circumvent the core issue of having to sample exponentially many quantum amplitudes, since they provide singular vectors as quantum states that must be read out. In general, this issue affects all quantum algorithms that require the readout of a dense matrix.

At the same time, as mentioned above, several aspects of the theory can be improved upon, and these improvements are not necessarily restricted to QSSD applications only. For quantum state preparation purposes, it is interesting to investigate whether one could apply quantum natural gradient (QNG) theory to the optimizer routine to improve convergence,30 or quantify the geometry of the search space to find better initial states.29 Overall, the results from this preliminary attempt to translate SSD to a quantum computing framework show that this should be achievable with proper improvements in the theory and methods employed. In turn, this may pave the way to a true quantum speedup of QSSD over classical SSD. We leave the adaptation of all these ideas to future studies of QSSD and alternative approaches to time series analysis on a quantum computer.

The authors thank Robert de Keijzer, Madhav Mohan, and Menica Dibenedetto for discussions. This research is financially supported by the Dutch Ministry of Economic Affairs and Climate Policy (EZK), as part of the Quantum Delta NL programme, and by the Netherlands Organisation for Scientific Research (NWO) under Grant No. 680.92.18.05.

The authors have no conflicts to disclose.

Jasper Postema: Formal analysis (equal); Methodology (lead); Software (equal); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). Pietro Bonizzi: Formal analysis (equal); Software (equal); Supervision (equal); Writing – review & editing (equal). Gideon Koekoek: Supervision (equal); Writing – review & editing (equal). Ronald Leonard Westra: Supervision (equal); Writing  – review & editing (equal). Servaas Kokkelmans: Funding acquisition (lead); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon request. The GW150914 gravitational wave event data that support the findings of this study are openly available in GWOSC at https://www.gw-openscience.org/events/GW150914/, Ref. 27. Simulations were performed using QuTiP.31 

Suppose a time series is sampled N times at a steady rate F_S to obtain the string x(n) = {x_1, x_2, …, x_N}. Then, the SSA algorithm works as follows:

  1. The time series is turned into an M × K Hankel matrix, where M is the embedding dimension and K = N − M + 1. This gives the trajectory matrix
    (A1)

    Such a matrix encapsulates correlations between the data points. By tuning the embedding dimension just right, SSA can faithfully extract sub-components. If M is too low, not enough correlation can be captured through the trajectory matrix. If M is too large, however, ghost oscillations called spurious components will be captured.

  2. An SVD is performed on the trajectory matrix, yielding X = U Σ V†, with all relevant notation defined in Eq. (11). One can further decompose a rectangular matrix X of rank r into a sum of rank-1 matrices according to
    (A2)

    The set { σ i , u i , v i } is referred to as the ith eigentriple for fixed i.

  3. The index set {1, …, r} is partitioned into r′ ≤ r disjoint subsets {I_1, …, I_{r′}}. Each element I_i contains a set of principal components that capture a specific component in a time series (like trend, oscillations, noise, etc.). The original trajectory matrix is then decomposed into
    (A3)

    where each matrix on the right-hand side represents a specific component series g c ( n ).

  4. Sub-components are reconstructed by Hankelization of each X_{I_i}, where the matrix elements along each cross-diagonal are averaged. Reverse engineering those Hankel matrices gives the embedded sub-signals g_c(n); a minimal numerical sketch of steps 2–4 is given below.
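A minimal numpy sketch of the grouping and Hankelization steps above (our own helper names; the classical M × K Hankel embedding of step 1 is assumed):

```python
import numpy as np

def hankelize(Xc):
    """Average each cross-diagonal of an M x K matrix into a length N = M + K - 1 series."""
    M, K = Xc.shape
    g = np.zeros(M + K - 1)
    counts = np.zeros(M + K - 1)
    for i in range(M):
        for j in range(K):
            g[i + j] += Xc[i, j]
            counts[i + j] += 1
    return g / counts

def ssa_component(X, indices):
    """Reconstruct the sub-signal g_c(n) associated with the eigentriples in `indices`."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xc = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in indices)
    return hankelize(Xc)
```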

In this Appendix, we provide a protocol for the most economic extension of a pseudo-unitary matrix to a square unitary matrix that can be readily prepared for a Hadamard test. Let
(B1)
be a 2^{q_m} × 2^{q_n} pseudo-unitary, with q_n > q_m. The location of the P operator inside the string, running from 0 to 2^{q_n−q_m} − 1, can be turned into a binary string i_0 i_1 ⋯ i_{q_n−q_m−2} i_{q_n−q_m−1}. Using the definition of the Pauli operators (16), we can find the continuation of the pseudo-unitaries as
(B2)
for σ_0 = I and σ_1 = σ_x. Such q_n-qubit operators are readily prepared on a quantum computer while preserving the structure of the original pseudo-unitary. The protocol works equally well for transposed structures. The evaluated singular values then yield
(B3)
Because the constituent gates of the quantum circuits are generated by Hermitian generators, analytical gradients are readily calculated. The loss function is given by
(C1)
and both circuits are parameterized by the hardware efficient ansatz
(C2)
Loss function gradients can, therefore, directly be related to derivatives of the circuit representation with respect to the variational parameters according to
(C3)
Because the entanglement operations are unparameterized, we can relate this derivative to a shift in the α_μ parameter,
(C4)
In a similar spirit, we find
(C5)
This completes the proof.
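For completeness, a generic parameter-shift gradient consistent with the result above can be sketched as follows; it assumes gates generated by σ_y/2 (shift ±π/2) and a loss function that is callable per parameter vector.

```python
import numpy as np

def parameter_shift_gradient(loss, theta, shift=np.pi / 2):
    """Analytical gradient from loss evaluations at theta_mu +/- pi/2."""
    grad = np.zeros_like(theta, dtype=float)
    for mu in range(len(theta)):
        e_mu = np.zeros_like(theta, dtype=float)
        e_mu[mu] = shift
        grad[mu] = 0.5 * (loss(theta + e_mu) - loss(theta - e_mu))
    return grad
```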
In this Appendix, we show that QOC finds a suitable application in performing SVD on a quantum computer, by demonstrating that the quantities required for optimization can be sampled on a quantum computer. Additionally, we formally lay out some of the QOC theory of Ref. 12 and tweak it to describe an SVD application. The Lagrangian L_QOC for SVD applications is given by
(D1)
where U and V are the unitary representations of the pulses on circuit U and V, respectively, ZU and ZV represent the controls over the circuits, and ηU and ηV are the adjoints. The first term represents the loss function we want to minimize regardless of optimal control,
(D2)
where we defined the artificial density matrix
(D3)
Furthermore, we include the regularization of the pulse norms
(D4)
for some regularization parameter λ, which we fix at λ = 10^{-4}. We also add two Lagrange multiplier terms that ensure that the unitaries generated by the pulses satisfy the Schrödinger equation. These are given by
(D5)
(D6)
where H_{U/V}(t) = H_d + Σ_{l=1}^{L} (Z_{l,U/V}(t) Q_l + Z̄_{l,U/V}(t) Q_l^†) denotes the Hamiltonian that drives circuit U and V, respectively, and where the inner product ⟨·,·⟩_{L²([0,T]; ℂ^{2^m × 2^m})} is given by
(D7)
First, we show how the adjoints are calculated. This follows from the Fréchet derivatives ∂/∂S of the Lagrangian with respect to the quantity S, be it the circuit unitary representation, the pulses, or the adjoints. If we vary a Lagrangian J with respect to a certain quantity S by ϵδS with |ϵ| ≪ 1, and if we obtain
(D8)
then f(S) is said to be the Fréchet derivative. In the next calculation, we omit ϵ by making it implicit in the variation, and we loosely write ∂J/∂S as ⟨f(S), δS⟩_{L²([0,T]; ℂ^{2^m × 2^m})}. We find
(D9)
(D10)
(D11)
(D12)
Stationarity of the action under the Lagrangian L_QOC requires all its Fréchet derivatives to vanish. This gives us
(D13)
where we have used the property ℜe{Tr[AB]} = ℜe{Tr[BA]}. This gives us the propagator equation for the adjoint
(D14)
In a similar fashion, we find
(D15)
The derivatives with respect to the pulse controls yield
(D16)
(D17)
If QOC is to be implemented for a quantum SVD application, we need to be able to measure quantities Q defined by
(D18)
on a quantum computer. Defining Υ_{V,l} = V†(t) Q_l V(t), we find that we need to measure quantities of the form
(D19)
which we can sample through Hadamard tests. A similar calculation for the other circuit yields the same conclusion. Therefore, a QOC implementation of quantum SVD is possible. With this implementation, we can also calculate gradients analytically to find the update for the pulses at iteration k,
(D20)
We find that
(D21)
(D22)
Because the U-adjoints depend on V(t) and vice versa, this allows for a coupled gradient calculation on a quantum computer for SVD applications.
1. P. Bonizzi, J. M. Karel, O. Meste, and R. L. Peeters, Adv. Adapt. Data Anal. 06, 1450011 (2014).
2. P. Bonizzi, J. Karel, S. Zeemering, and R. Peeters, in Computing in Cardiology Conference (CinC) (IEEE, Piscataway, NJ, 2015), pp. 309–312.
3. E. Lowet, M. J. Roberts, A. Peter, B. Gips, and P. De Weerd, eLife 6, e26642 (2017).
4. E. Lowet, M. J. Roberts, P. Bonizzi, J. Karel, and P. De Weerd, PLoS One 11, e0146443 (2016).
5. A. K. Cline and I. S. Dhillon, Handbook of Linear Algebra (CRC Press, 2006).
7. A. Kandala, K. Temme, A. D. Corcoles, A. Mezzacapo, J. M. Chow, and J. M. Gambetta, arXiv:1805.04492 (2018).
8. P. J. J. O'Malley, R. Babbush, I. D. Kivlichan, J. Romero, J. R. McClean, R. Barends, J. Kelly, P. Roushan, A. Tranter et al., Phys. Rev. X 6, 031007 (2016).
9. A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien, Nat. Commun. 5(1), 4213 (2014).
10. J. I. Colless, V. V. Ramasesh, D. Dahlen, M. S. Blok, M. E. Kimchi-Schwartz, J. R. McClean, J. Carter, W. A. de Jong, and I. Siddiqi, Phys. Rev. X 8, 011021 (2018).
11. X. Wang, Z. Song, and Y. Wang, Quantum 5, 483 (2021).
12. R. de Keijzer, O. Tse, and S. Kokkelmans, Quantum 7, 908 (2023).
13. P. Rebentrost, A. Steffens, I. Marvian, and S. Lloyd, Phys. Rev. A 97, 012327 (2018).
14. A. Gilyén, Y. Su, G. H. Low, and N. Wiebe, in Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (University of Amsterdam, 2019), pp. 193–204.
15. N. Halko, P.-G. Martinsson, and J. A. Tropp, SIAM Rev. 53, 217 (2011).
16. R. Vautard and M. Ghil, Physica D 35, 395 (1989).
17. R. Vautard, P. Yiou, and M. Ghil, Physica D 58, 95 (1992).
18. C. Eckart and G. Young, Psychometrika 1, 211 (1936).
19. N. B. Erichson, S. Voronin, S. L. Brunton, and J. N. Kutz, arXiv:1608.02148 (2016).
20. K. Fan, Proc. Natl. Acad. Sci. U. S. A. 37, 760 (1951).
21. M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, and N. Killoran, Phys. Rev. A 99, 032331 (2019).
22. C. P. Koch, U. Boscain, T. Calarco, G. Dirr, S. Filipp, S. J. Glaser, R. Kosloff, S. Montangero, T. Schulte-Herbrüggen et al., arXiv:2205.12110 (2022).
23. T. Hubregtsen, J. Pichlmeier, P. Stecher, and K. Bertels, Quantum Mach. Intell. 3, 9 (2021).
24. H. Liang, S. L. Bressler, E. A. Buffalo, R. Desimone, and P. Fries, Biol. Cybern. 92, 380 (2005).
25. M. Punturo, M. Abernathy, F. Acernese, B. Allen, N. Andersson, K. Arun, F. Barone, B. Barr, M. Barsuglia et al., Classical Quantum Gravity 27, 194002 (2010).
26.
27. LIGO-Virgo Collaboration, "Data release for event GW150914," GWOSC (2019), https://www.gw-openscience.org/events/GW150914/
28. GWOS Center, see https://www.gw-openscience.org/s/events/GW150914/GW150914_tutorial.html for "Signal processing with GW150914 open data" (2017).
29. A. Katabarwa, S. Sim, D. E. Koh, and P.-L. Dallaire-Demers, Quantum 6, 782 (2022).
30. J. Stokes, J. Izaac, N. Killoran, and G. Carleo, Quantum 4, 269 (2020).
31. J. R. Johansson, P. D. Nation, and F. Nori, Comput. Phys. Commun. 183, 1760 (2012).