Classical data analysis requires computational efforts that become intractable in the age of Big Data. An essential task in time series analysis is the extraction of physically meaningful information from a noisy time series. One algorithm devised for this purpose is singular spectrum decomposition (SSD), an adaptive method that allows for the extraction of narrow-banded components from nonstationary and nonlinear time series. The main computational bottleneck of this algorithm is the singular value decomposition (SVD). Quantum computing could facilitate a speedup in this domain through superior scaling laws. We propose quantum SSD by assigning the SVD subroutine to a quantum computer. The viability and performance of this hybrid algorithm on a near-term hybrid quantum computer are investigated. In this work, we show that by employing randomized SVD, we can impose a qubit limit on one of the circuits to improve scalability. Using this, we efficiently perform quantum SSD on simulations of local field potentials recorded in brain tissue, as well as on GW150914, the first detected gravitational-wave event.
I. INTRODUCTION
A recently proposed approach to classical time series analysis is singular spectrum decomposition (SSD).^{1} SSD aims to decompose nonlinear and nonstationary time series into physically meaningful components. In stark contrast to a Fourier decomposition, which employs a basis set of harmonic functions, SSD prepares an adaptive basis set of functions whose elements depend uniquely on the time series. Applications include, but are not limited to, sleep apnea detection from an unprocessed electrocardiogram (ECG)^{2} and the processing and analysis of brain waves.^{3,4} In this work, we also cover an application to gravitational-wave analysis. The latter is interesting because real-time data analysis poses a challenge and will be vital for next generations of gravitational-wave detection technology. The main bottleneck for scaling this algorithm up to larger time series is the singular value decomposition (SVD) that is performed in each subroutine. For general $m \times n$ matrices, the classical SVD algorithm has a computational complexity of $O(mn\min(m,n))$.^{5}
Certain matrix operations are more efficiently performed on a quantum computer, raising the question of whether a quantum speedup can be acquired for SVD applications. Currently, quantum computing is in the Noisy Intermediate-Scale Quantum (NISQ) era.^{6} While some forms of error mitigation have been proposed, noise still limits the number of qubits and the fidelity of gate operations, so that current devices fall short of full-fledged quantum computers running quantum error-correction codes.^{7} Despite these shortcomings, hybrid quantum computers running variational quantum algorithms provide a useful platform for classically taxing calculations, one major example being the variational quantum eigensolver (VQE) for quantum chemistry.^{8–10} Hybrid algorithms complement the strengths of both processors: classically efficient tasks are performed on a classical computer, while unitary calculations such as quantum state preparation are handed to a quantum computer. Because of the iterative nature of variational algorithms, noise can be mitigated at every step, increasing the robustness of the algorithm.
The SVD subroutine can be performed on a quantum computer through the variational quantum singular value decomposition (VQSVD) algorithm.^{11} In addition to its application to SSD, SVD forms a compelling application of hybrid quantum computing in general because of its versatile utility. In a similar fashion to VQE, one can train two parameterizable quantum circuits (PQCs) to learn the left and right singular vectors of a matrix, and sample the singular values through Hadamard tests. In this work, we explore both a gate-based paradigm,^{9} where the parameterization is facilitated through rotational quantum gates, and a pulse-based paradigm,^{12} where we optimize laser pulse profiles on a neutral-atom quantum computing platform. The latter is derived from quantum optimal control (QOC) theory and is thought to be more NISQ efficient. Other quantum algorithms for SVD have been proposed,^{13,14} but they require fault-tolerant resources such as quantum random access memory or oracles.
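The variational structure behind VQSVD can be illustrated with a small classical emulation: two parameterized orthogonal matrices stand in for the left and right PQCs, and the weighted sum of diagonal overlaps, which on hardware would be sampled through Hadamard tests, is maximized with an off-the-shelf optimizer. This is a minimal sketch under our own assumptions (matrix size, weights, skew-symmetric parameterization, restart count), not the circuits used in this work.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k = 4, 2                        # matrix dimension; singular values sought
p = n * (n - 1) // 2               # parameters per orthogonal matrix
M = rng.standard_normal((n, n))    # matrix to decompose
q = np.arange(k, 0, -1.0)          # positive, strictly decreasing weights

def orth(params):
    """Map n(n-1)/2 angles to an orthogonal matrix via the exponential of a
    skew-symmetric generator (a stand-in for a parameterized circuit)."""
    A = np.zeros((n, n))
    A[np.triu_indices(n, 1)] = params
    return expm(A - A.T)

def loss(theta):
    """Negative VQSVD-style objective: -sum_i q_i <u_i| M |v_i>."""
    U, V = orth(theta[:p]), orth(theta[p:])
    return -np.sum(q * np.diag(U.T @ M @ V)[:k])

best = None
for _ in range(5):                 # a few restarts guard against local optima
    res = minimize(loss, rng.standard_normal(2 * p), method="BFGS")
    if best is None or res.fun < best.fun:
        best = res
U, V = orth(best.x[:p]), orth(best.x[p:])
approx_sv = np.diag(U.T @ M @ V)[:k]   # estimates of the top-k singular values
```

By the von Neumann trace inequality, the maximum of the weighted objective is attained when the columns of U and V align with the leading singular vectors, so the diagonal entries approximate the largest singular values.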
Our aim is to convert SSD into a variational hybrid quantum-classical algorithm called quantum singular spectrum decomposition (QSSD). Here, we pose the question of whether quantum computing can provide an inherent computational speedup for SVD by surpassing the classical complexity. We employ randomized SVD to improve the scalability of the algorithm.^{15} The classical theory behind SSD is introduced in Sec. II. The implementation of QSSD on a hybrid quantum computer is elaborated on in Sec. III using a gate-based approach, while in Sec. IV we discuss a pulse-based approach. Simulation results for three different time series, among which the first observation of a gravitational wave, are presented in Sec. V. We conclude with a summary and a discussion of the viability of QSSD in Sec. VI.
II. SINGULAR SPECTRUM DECOMPOSITION
The SSD algorithm is based on the singular spectrum analysis (SSA) method, whose basic ideas are presented in Appendix A.^{1,16} SSA is designed to explore the presence of specific patterns within a time series, while SSD is designed to provide a decomposition into its constituent components. Let $x(n) = \{x_1, x_2, \ldots, x_{N-1}, x_N\}$ be an N-dimensional string of numbers, sampled at a uniform sampling frequency F_{S} from some time series. We are interested in extracting (physically) meaningful components from the signal. By embedding the signal in a matrix, correlation functions between pairs of sample points can be extracted through a singular value decomposition of said matrix. Several types of components can be found, such as trends, oscillations, and noise. In SSD, their identification is completely automated by focusing on extracting narrow frequency band components.
For iteration j = 1:
The power spectral density (PSD) of the signal is calculated. The frequency $f_{\max}$ associated with the dominant peak in the PSD is then estimated. If the ratio criterion $(f_{\max}/F_S) < \delta$ is met, a trend is said to characterize the spectral contents of the residual. Reference 1 sets $\delta = 10^{-3}$. In such a case, the embedding dimension is set to $M = \lfloor N/3 \rfloor$, as described in Ref. 17. The first component $g^{(1)}(n)$ is then extracted from Hankelization of the matrix $X_1 = \sigma_1 u_1 v_1^\top$. Since most of the energy must be contained within the trend component, only one singular value is required for its reconstruction. If the ratio criterion is not met, the window length is defined by $M = \lfloor 1.2\,(F_S/f_{\max}) \rfloor$.^{1}
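The window-length rule of the first iteration can be sketched as follows, using `scipy`'s periodogram as the PSD estimate; the helper name is our own and the default `delta` follows Ref. 1 as quoted above:

```python
import numpy as np
from scipy.signal import periodogram

def choose_window_length(x, F_S, delta=1e-3):
    """Pick the SSD embedding dimension M from the dominant PSD peak.

    Hypothetical helper implementing the rules quoted in the text:
    if f_max/F_S < delta, the residual is treated as a trend and
    M = floor(N/3); otherwise M = floor(1.2 * F_S / f_max).
    Returns (M, is_trend)."""
    N = len(x)
    freqs, psd = periodogram(x, fs=F_S)
    f_max = freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
    if f_max / F_S < delta:
        return N // 3, True                 # trend: one singular value suffices
    return int(np.floor(1.2 * F_S / f_max)), False
```

For example, a 5 Hz sinusoid sampled at 100 Hz yields a window length of floor(1.2 · 100/5) = 24 samples.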
For iterations $j \geq 2$:
III. HYBRID GATEBASED SINGULAR VALUE DECOMPOSITION
A. SVD quantum algorithm
B. Trajectory matrix implementation
Again, it is to be understood that every index is taken mod N. While not obeying optimal SSD dimensions, this definition makes the best use of the space allowed by the dimensionality restrictions. Additionally, it reduces the number of unitaries U_{i} required for the LCU in Eq. (15), compared to embedding the trajectory matrix in a large null matrix.
C. Randomized QSSD
We can establish a natural qubit cutoff number that defines a low-dimensional space onto which our trajectory matrix is projected. Because we focus on narrow frequency bands in the PSD, we expect that only a few singular values are needed to reconstruct a signal. Every dominant frequency component yields two singular values, and simulations have shown that a faithful reconstruction of the corresponding signal requires at most on the order of 10–12 singular values. Therefore, we establish a dimensional cutoff $k \leq k_\Lambda = 16$ with associated qubit cutoff q = 4. Henceforth, we shall denote the reduced trajectory matrix as X.
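The low-rank projection with the cutoff $k_\Lambda = 16$ can be sketched with the standard randomized range-finder construction of Ref. 15; the function name and oversampling amount below are illustrative choices:

```python
import numpy as np

def randomized_svd(X, k=16, oversample=8, seed=0):
    """Randomized SVD sketch (range finder of Ref. 15): project X onto a
    random low-dimensional subspace before the exact SVD of a small matrix.
    With the qubit cutoff q = 4 from the text, k <= 16 suffices."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((X.shape[1], k + oversample))
    Q, _ = np.linalg.qr(X @ Omega)      # orthonormal basis for the range of X
    B = Q.T @ X                          # small (k + oversample) x n matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :k], s[:k], Vt[:k]
```

For a matrix whose true rank is below the sketch size, the projection captures the range exactly (almost surely), so the truncated factors reproduce the matrix.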
D. The parameterizable quantum circuits
E. The loss function
F. Pseudounitary Hadamard tests
G. Analytical gradient optimizer
H. Orthonormal matrix reconstruction
IV. PULSEBASED IMPLEMENTATION FOR QSSD
In addition to the gate paradigm, variational quantum optimal control (VQOC)^{12} offers an alternative approach to searching through a Hilbert space $H$ (see Ref. 22 for a review of QOC in general). Rather than optimizing parameterized gates, we directly optimize the physical controls through which quantum operations are implemented. It is believed that for highly entangled systems, VQOC is able to outperform VQE in terms of accuracy when resources are comparable. This is a result of the higher expressibility of optimal control, which one can prove by showing that a set of randomly initialized pulses produces unitary operations that resemble the Haar distribution more closely than a set of randomly initialized gates in the hardware-efficient ansatz.^{23}
V. ANALYSES AND RESULTS
To benchmark the performance of QSSD, several simulations have been run using a classical emulator of a noise-free quantum circuit. For efficiency purposes, we assume full access to all relevant matrix elements, without resorting to sampling amplitudes through measurements. We compare gate-based and pulse-based results to the retrieved classical SSD components for a variety of applications, though we stress that, because of the inherent inefficiency of the algorithm, we have not attempted to make a fair comparison between the two implementations. In order to draw a fair contrast, either implementation needs to be run on equivalent evolution circuits, where the gate-based and pulse-based circuits have equivalent resources. This entails a similar circuit evolution time T, equivalent controls, such as Bloch sphere rotational control only, and a number of quantum evaluations $\#\mathrm{QE}_{\mathrm{VQOC}}$ that scales linearly in the quantum resources compared to the number of quantum evaluations $\#\mathrm{QE}_{\mathrm{VQE}}$.
In this work, we adopted the hardware-efficient ansatz for the gate-based circuits, with depth D = 10 and an iteration number it = 1000, and we assumed full control for the pulse-based circuits, in natural units where $\hbar = 1$ and $C_6 = 1$, for a total evolution time T = 10, a step-size discretization $T/\delta t = 100$, and an iteration number it = 150. We aim to show that with general resources, one can approximate the function space of the SVD reasonably well on a quantum computer, without employing equivalent evolution circuits. Note that the VQOC parameters are not fine-tuned for any implementation in particular. For a more realistic implementation on a neutral-atom quantum computer, the parameters should lie within experimental bounds, and the space $Z_{\mathrm{ad}}$ should be adjusted accordingly.
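For concreteness, a dense-matrix emulation of a hardware-efficient ansatz layer structure (single-qubit Ry/Rz rotations followed by a ring of CZ entanglers) can be sketched as follows; the gate set and layer layout are typical assumptions, not necessarily the exact circuit used in our simulations:

```python
import numpy as np

def ry(t):
    """Single-qubit rotation about the y axis."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def rz(t):
    """Single-qubit rotation about the z axis."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def cz_ring(n):
    """Diagonal unitary applying CZ between neighboring qubits on a ring."""
    phases = np.ones(2 ** n)
    for b in range(2 ** n):
        bits = [(b >> q) & 1 for q in range(n)]
        for q in range(n):
            if bits[q] and bits[(q + 1) % n]:
                phases[b] *= -1
    return np.diag(phases).astype(complex)

def hea_unitary(params, n, depth):
    """Full ansatz unitary for a parameter tensor of shape (depth, n, 2)."""
    U = np.eye(2 ** n, dtype=complex)
    entangler = cz_ring(n)
    for d in range(depth):
        layer = np.array([[1.0 + 0j]])
        for q in range(n):
            layer = np.kron(layer, ry(params[d, q, 0]) @ rz(params[d, q, 1]))
        U = entangler @ layer @ U
    return U
```

In the gate-based optimization, the 2·n·D rotation angles are the variational parameters trained against the VQSVD loss.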
A. Nonstationary signal
B. Local field potential signal
Next, we applied SSD to simulated local field potentials (LFPs), which represent the relatively slowly varying temporal components of the neural signal, picked up from within a few hundred microns of a recording electrode. Brain waves can be separated into characteristic frequency bands called alpha [8–12 Hz], beta [13–30 Hz], gamma [30–90 Hz], and theta [4–7 Hz] bands, corresponding to the frequency range in which their dominant frequency lies.^{1} We simulated these LFPs by superimposing four components, one in each of the aforementioned bands.^{24} Because the alpha and theta components have overlapping frequency contents, we grouped them together, as SSD cannot distinguish them. In Fig. 4, we show that SSD is capable of disentangling the superimposed signal into its characteristic components within reasonable accuracy bounds. The corresponding errors are graphed in Fig. 6 for all SSD variations.
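A four-band test signal of this kind can be sketched as below; the specific frequencies, sampling rate, and duration are assumptions chosen inside the quoted band limits, not the exact values of Ref. 24:

```python
import numpy as np

# Assumed sampling rate and duration; one sinusoidal component per band.
F_S = 500                           # Hz (assumed)
t = np.arange(0, 2, 1 / F_S)        # 2 s of signal
bands = {"theta": 6.0, "alpha": 10.0, "beta": 20.0, "gamma": 60.0}  # Hz
components = {name: np.sin(2 * np.pi * f * t) for name, f in bands.items()}
lfp = sum(components.values())      # superimposed signal handed to SSD
```

SSD is then tasked with recovering the four narrow-band constituents from `lfp` alone.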
C. Gravitational wave event GW150914
Black holes and gravitational waves are characteristic predictions of general relativity and are therefore favorable probes of the theory. The waveforms of such gravitational waves are constrained by the underlying mechanisms, such as their generation and propagation. The analysis of such waves is crucial to the understanding of relativistic gravitational physics and may provide useful insights into black hole physics, the structure of neutron stars, supernovae, and the cosmic gravitational-wave background. Gravitational-wave analysis will likely become classically intractable in the age of Big Data and third-generation detectors such as the Einstein Telescope,^{25} and can greatly benefit from quantum data analysis algorithms. We applied SSD to gravitational-wave analysis since it serves as an interesting addition to the classical gravitational-wave data analysis pipeline. Not only is it capable of providing waveforms of crisper quality, it is also able to filter out glitches, lowering the risk of a wrong signal-to-noise ratio (SNR) threshold exceedance.
To showcase the applicability of SSD, we applied it to the first measured gravitational-wave event GW150914, originating from a binary black hole merger consisting of a pair of black holes with inferred source masses $M_1 = 29.1^{+3.7}_{-4.4}\,M_\odot$ and $M_2 = 36.2^{+5.2}_{-3.8}\,M_\odot$.^{26,27} We have taken the raw data as measured by the LIGO Hanford detector around the merger time, after which we applied noise whitening, band-passing, and notching to obtain an approximate gravitational waveform.^{28} Through SSD, we hope to separate the relevant information about the binary merger from any other unwanted component or noise, thus obtaining a subset of SSD components that can provide a crisper waveform. The results of SSD filtering are presented in Fig. 5, as well as their respective time–frequency spectrograms. Since we are analyzing a real signal rather than a simulated one, we cannot fairly compare the quality of the components to any original benchmark, because we do not know a priori what the true gravitational-wave signal should look like, given the right physical parameters. For this reason, we can only provide a figure of merit of the QSSD components with respect to the classical SSD components. The errors are given in Fig. 6, where the comparison with classical SSD is missing for GW150914 because of the aforementioned argument.
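The conditioning steps (whitening, band-passing, notching) can be sketched as follows; the corner frequencies, notch list, and filter orders below are typical LIGO-style assumptions, not the exact settings of the pipeline in Ref. 28:

```python
import numpy as np
from scipy.signal import butter, iirnotch, sosfiltfilt, tf2sos, welch

def condition_strain(strain, fs=4096, band=(35.0, 350.0), notches=(60.0,)):
    """Whiten, band-pass, and notch a strain segment, mirroring the
    conditioning steps described in the text (illustrative sketch)."""
    # Whitening: divide the spectrum by an estimate of the noise ASD.
    freqs, psd = welch(strain, fs=fs, nperseg=fs)
    spec = np.fft.rfft(strain)
    f = np.fft.rfftfreq(len(strain), 1 / fs)
    asd = np.sqrt(np.interp(f, freqs, psd))
    white = np.fft.irfft(spec / asd, n=len(strain))
    # Band-pass around the chirp, then notch out mains-line frequencies.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    out = sosfiltfilt(sos, white)
    for f0 in notches:
        b, a = iirnotch(f0, Q=30, fs=fs)
        out = sosfiltfilt(tf2sos(b, a), out)
    return out
```

The conditioned segment is then passed to SSD to isolate the chirp component from residual noise.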
VI. CONCLUSION AND DISCUSSION
In this work, we have established a unified framework that combines the promising method of singular spectrum decomposition for classical time series analysis with the variational quantum singular value decomposition algorithm into quantum singular spectrum decomposition (QSSD). In the NISQ era, variational algorithms are favorable over other types because the quantum constituent is more robust against errors. The singular value decomposition subroutine is designed to be performed on quantum hardware, while the retrieval of the SSD components is done classically. Using analytical gradient optimizers, we circumvent finite-difference errors and guarantee optimization at every iteration. SSD enhances protection against errors accumulating in the quantum algorithm. QSSD could find interesting applications in near-term gravitational-wave analysis because of its variational formulation. Here, we demonstrated this algorithm for providing a crisper waveform for matched filtering.
In contrast to the classical algorithm, the quantum formulation does not have efficient access to all matrix elements of U and V. Unpacking the dense trajectory matrix into its unitary basis elements and reading out all the columns of U and V are two processes that require a number of quantum circuit runs that is exponential in the number of qubits. An efficient quantum algorithm would rely on sampling a few parameters that are readily measured on a quantum device. To mitigate the effects of the exponential scaling laws, we have adopted randomized SVD. While rendering QSSD more economical, the same trick can be applied to the classical algorithm, providing no inherent quantum speedup. An efficient rendition of SSD would circumvent measuring $|u_i\rangle$ and $|v_i\rangle$ and instead use a different pipeline to read out SSD components. For further research, we pose the question of whether this is possible within this variational framework.
The quality of SSD is related to the quality of the singular vectors, not the singular values, as dictated by Eq. (8), since a rescaling is performed after every iteration to ensure optimal energy extraction. It is, therefore, crucial for the convergence of the quantum routine to select a circuit ansatz that can approximate the solution space well. For the gate-based approach, we have adopted the hardware-efficient ansatz (HEA), and for the pulse-based approach, we have assumed full control. Quantum computing has the advantage of computing within an exponentially large Hilbert space, but the variational approach of NISQ-type algorithms gives rise to weak convergence in the absence of an effective model that leads to the required target states. Although the HEA is often invoked as a standard method for quantum state preparation, there is a lack of understanding of its capabilities.^{29} Since data analysis does not come with an effective ansatz design or model, we pose the question of whether an effective model can be established for specific time series applications.
In conclusion, it remains an open problem whether a true quantum speedup can be achieved through the method of QSSD. The necessity to use dense classical matrices in a quantum algorithm does not allow for polynomial scaling laws in the number of qubits, and requires measuring exponentially many quantum amplitudes. Other methods have been proposed that claim a quantum speedup for singular value related algorithms.^{13,14} The application of either of these algorithms would not circumvent the core issue of having to sample exponentially many quantum amplitudes, since they provide singular vectors as quantum states that must be read out. In general, this issue affects all quantum algorithms that require the readout of a dense matrix.
At the same time, as mentioned above, several aspects of the theory can be improved upon, and these improvements are not necessarily restricted to QSSD applications only. For quantum state preparation purposes, it is interesting to investigate whether one could apply quantum natural gradient (QNG) theory to the optimizer routine to improve convergence,^{30} or quantify the geometry of the search space to find better initial states.^{29} Overall, the results from this preliminary attempt to translate SSD to a quantum computing framework show that this is likely achievable with proper improvements in the theory and methods employed. In turn, this may pave the way to a true quantum speedup of QSSD over classical SSD. We leave the adaptations of all these ideas to future studies of QSSD and alternative approaches to time series analysis on a quantum computer.
ACKNOWLEDGEMENTS
The authors thank Robert de Keijzer, Madhav Mohan, and Menica Dibenedetto for discussions. This research is financially supported by the Dutch Ministry of Economic Affairs and Climate Policy (EZK), as part of the Quantum Delta NL programme, and by the Netherlands Organisation for Scientific Research (NWO) under Grant No. 680.92.18.05.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Jasper Postema: Formal analysis (equal); Methodology (lead); Software (equal); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). Pietro Bonizzi: Formal analysis (equal); Software (equal); Supervision (equal); Writing – review & editing (equal). Gideon Koekoek: Supervision (equal); Writing – review & editing (equal). Ronald Leonard Westra: Supervision (equal); Writing – review & editing (equal). Servaas Kokkelmans: Funding acquisition (lead); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (equal).
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon request. The GW150914 gravitational wave event data that support the findings of this study are openly available in GWOSC at https://www.gwopenscience.org/events/GW150914/, Ref. 27. Simulations were performed using QuTiP.^{31}
APPENDIX A: PRELIMINARIES—SINGULAR SPECTRUM ANALYSIS
Suppose a time series is sampled N times at a steady rate F_{S} to obtain the string $x(n) = \{x_1, x_2, \ldots, x_N\}$. Then, the SSA algorithm works as follows:
 The time series is turned into an M × K Hankel matrix, where M is the embedding dimension and $K = N + 1 - M$. This gives the trajectory matrix
$$X = \begin{bmatrix} x_1 & x_2 & \cdots & x_K \\ x_2 & x_3 & \cdots & x_{K+1} \\ \vdots & \vdots & \ddots & \vdots \\ x_M & x_{M+1} & \cdots & x_N \end{bmatrix}.$$
Such a matrix encapsulates correlations between the data points. By tuning the embedding dimension just right, SSA can faithfully extract subcomponents. If M is too low, not enough correlation can be captured through the trajectory matrix. If M is too large, however, ghost oscillations called spurious components will be captured.
 An SVD is performed on the trajectory matrix, yielding $X = U \Sigma V^\top$, with all relevant notation defined in Eq. (11). One can further decompose a rectangular matrix X of rank r into a sum of rank-1 matrices according to
$$X = \sum_{i=1}^{r} X_i \stackrel{\mathrm{def}}{=} \sum_{i=1}^{r} \sigma_i u_i v_i^\top.$$
The set $\{\sigma_i, u_i, v_i\}$ is referred to as the ith eigentriple for fixed i.
 The index set $\{1, \ldots, r\}$ is partitioned into $r' \leq r$ disjoint subsets $\{I_1, \ldots, I_{r'}\}$. Each subset I_{i} contains a set of principal components that capture a specific component of the time series (like trend, oscillations, noise, etc.). The original trajectory matrix is then decomposed into
$$X = \sum_{i=1}^{r'} X_{I_i},$$
where each matrix on the right-hand side represents a specific component series $g_c(n)$.
Subcomponents are reconstructed by Hankelization of each $X_{I_i}$, where the matrix components along each anti-diagonal are averaged. Reversing the embedding of those Hankel matrices gives the embedded subsignals $g_c(n)$.
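The full SSA round trip described above (embedding, SVD, grouping, Hankelization) can be sketched for a single sinusoid, whose trajectory matrix has exactly rank 2, so the leading eigentriple pair reconstructs the signal; all names and the test parameters are illustrative:

```python
import numpy as np

def hankelize(Xi):
    """Average each anti-diagonal of Xi, inverting the trajectory-matrix
    embedding to recover a length-(M + K - 1) subsignal."""
    M_dim, K_dim = Xi.shape
    g = np.zeros(M_dim + K_dim - 1)
    counts = np.zeros_like(g)
    for i in range(M_dim):
        for j in range(K_dim):
            g[i + j] += Xi[i, j]
            counts[i + j] += 1
    return g / counts

# One SSA pass on a pure 5 Hz sinusoid sampled at 100 Hz.
F_S, N, M = 100, 200, 50
x = np.sin(2 * np.pi * 5 * np.arange(N) / F_S)
K = N + 1 - M
X = np.array([x[i:i + K] for i in range(M)])     # M x K Hankel embedding
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pair = s[0] * np.outer(U[:, 0], Vt[0]) + s[1] * np.outer(U[:, 1], Vt[1])
g = hankelize(pair)                              # leading eigentriple pair
```

Because every oscillatory component contributes a pair of singular values, grouping the two leading eigentriples recovers the sinusoid essentially exactly.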