Classical data analysis requires computational efforts that become intractable in the age of Big Data. An essential task in time series analysis is the extraction of physically meaningful information from a noisy time series. One algorithm devised for this very purpose is singular spectrum decomposition (SSD), an adaptive method that allows for the extraction of narrow-banded components from non-stationary and non-linear time series. The main computational bottleneck of this algorithm is the singular value decomposition (SVD). Quantum computing could facilitate a speedup in this domain through superior scaling laws. We propose quantum SSD (QSSD) by assigning the SVD subroutine to a quantum computer. The viability and performance of this hybrid algorithm on a near-term hybrid quantum computer are investigated. We show that by employing randomized SVD, we can impose a qubit limit on one of the circuits to improve scalability. Using this, we efficiently perform quantum SSD on simulations of local field potentials recorded in brain tissue, as well as on GW150914, the first detected gravitational wave event.
I. INTRODUCTION
A recently proposed approach to classical time series analysis is singular spectrum decomposition (SSD).1 SSD aims to decompose non-linear and non-stationary time series into physically meaningful components. In stark contrast to a Fourier decomposition, which employs a basis set of harmonic functions, SSD prepares an adaptive basis set of functions whose elements depend uniquely on the time series. Applications include, but are not limited to, sleep apnea detection from an unprocessed electrocardiogram (ECG)2 and the processing and analysis of brain waves.3,4 In this work, we also cover the application to gravitational wave analysis. The latter is interesting because real-time data analysis poses a challenge, and will be vital for next generations of gravitational wave detection technology. The main bottleneck for scaling this algorithm up to larger time series is the singular value decomposition (SVD) that is performed in each subroutine. For general $m \times n$ matrices, the classical SVD algorithm has a computational complexity of $\mathcal{O}(\min(mn^2, m^2n))$.5
Certain matrix operations are more efficiently performed on a quantum computer, raising the question whether a quantum speedup can be obtained for SVD applications. Currently, quantum computing is in the Noisy Intermediate Scale Quantum (NISQ) era.6 While some forms of error mitigation have been proposed, noise still limits the number of qubits and the fidelity of gate operations, leaving both short of what full-fledged quantum computers running quantum error correction codes require.7 Despite these shortcomings, hybrid quantum computers running variational quantum algorithms provide a useful platform for classically taxing calculations, one major example being the variational quantum eigensolver (VQE) for quantum chemistry.8–10 Hybrid algorithms complement the strengths of both processors: classically efficient tasks are performed on a classical computer, while unitary calculations such as quantum state preparation are handed to a quantum computer. Because of the iterative nature of variational algorithms, noise can be mitigated at every step, increasing the robustness of the algorithm.
The SVD subroutine can be performed on a quantum computer through the variational quantum singular value decomposition (VQSVD) algorithm.11 In addition to its application to SSD, SVD forms a compelling application of hybrid quantum computing in general because of its versatile utility. In a similar fashion to VQE, one can train two parameterizable quantum circuits (PQCs) to learn the left and right singular vectors of a matrix, and sample the singular values through Hadamard tests. In this work, we explore both a gate-based paradigm,9 where the parameterization is facilitated through rotational quantum gates, and a pulse-based paradigm,12 where we optimize laser pulse profiles on a neutral atom quantum computing platform. The latter is derived from quantum optimal control (QOC) theory and is thought to be more NISQ-efficient. Other quantum algorithms for SVD have been proposed,13,14 but they require fault-tolerant resources such as quantum random access memory or oracles.
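To make the objective concrete, the following minimal numpy sketch evaluates the VQSVD loss of Ref. 11 for a toy matrix, with the Hadamard-test expectation values replaced by direct matrix arithmetic and random unitaries standing in for the trained PQCs U(θ) and V(φ); the weights, problem size, and random ansatz are illustrative assumptions, not this work's implementation.

```python
import numpy as np
from scipy.stats import unitary_group

def vqsvd_objective(M, U, V, weights):
    """VQSVD objective (sketch): sum_i q_i * Re <psi_i| U^dag M V |psi_i>,
    maximized over the circuit parameters. On hardware each term comes
    from a Hadamard test; here we read it off the matrix product."""
    m = np.real(np.diag(U.conj().T @ M @ V))[: len(weights)]
    return float(weights @ m), m   # loss value, singular-value estimates

dim, k = 8, 4                       # 3-qubit toy problem, 4 singular values
M = np.random.randn(dim, dim)
weights = np.arange(k, 0, -1.0)     # q_1 > q_2 > ... enforces ordering
U, V = unitary_group.rvs(dim), unitary_group.rvs(dim)
loss, estimates = vqsvd_objective(M, U, V, weights)
print(loss, estimates, np.linalg.svd(M, compute_uv=False)[:k])
```

At the optimum, the leading diagonal entries of $U^\dagger M V$ converge to the singular values of M in descending order, which the decreasing weights enforce.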
Our aim is to convert SSD into a variational hybrid quantum-classical algorithm called quantum singular spectrum decomposition (QSSD). Here, we pose the question whether quantum computing can provide an inherent computational speedup for SVD by surpassing the classical complexity. We employ randomized SVD to improve scalability of the algorithm.15 The classical theory behind SSD is introduced in Sec. II. The implementation of QSSD on a hybrid quantum computer is elaborated on in Sec. III, using a gate-based approach, while in Sec. IV we discuss a pulse-based approach. Simulation results for three different time series, among which the first observation of a gravitational wave, are presented in Sec. V. We conclude with a summary and discussion about the viability of QSSD in Sec. VI.
II. SINGULAR SPECTRUM DECOMPOSITION
The SSD algorithm is based on the singular spectrum analysis (SSA) method, whose basic ideas are presented in Appendix A.1,16 SSA is designed to explore the presence of specific patterns within a time series, while SSD is designed to provide a decomposition into its constituent components. Let $\mathbf{x} = (x_1, \ldots, x_N)$ be an N-dimensional string of numbers, sampled at a uniform sampling frequency $F_S$ from some time series. We are interested in extracting (physically) meaningful components from the signal. By embedding the signal in a matrix, correlation functions between pairs of sample points can be extracted through a singular value decomposition of said matrix. Several types of components can be found, such as trends, oscillations, and noise. In SSD, their identification is completely automated by focusing on the extraction of narrow frequency band components.
For iteration j = 1:
The power spectral density (PSD) of the signal is calculated, and the frequency $f_{\max}$ associated with the dominant peak in the PSD is estimated. If the ratio criterion $f_{\max}/F_S < \theta$ is met, a trend is said to characterize the spectral contents of the residual. Reference 1 sets $\theta = 10^{-3}$. In such a case, the embedding dimension is set to $M = N/3$, as described in Ref. 17. The first component is then extracted from Hankelization of the rank-1 matrix $\sigma_1 u_1 v_1^{T}$. Since most of the energy must be contained within the trend component, only one singular value is required for its reconstruction. If the ratio criterion is not met, the window length is defined by $M = 1.2\,F_S/f_{\max}$.1 A sketch of this decision logic is given below.
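As a minimal sketch of the first iteration's decision logic, assuming SciPy's Welch estimator stands in for the PSD estimate of Ref. 1 and using the threshold and window-length rule quoted above:

```python
import numpy as np
from scipy.signal import welch

def first_iteration_embedding(x, fs, theta=1e-3):
    """Decision logic of the first SSD iteration (sketch).

    theta is the trend threshold quoted above; the 1.2 safety factor
    on Fs/f_max follows the window-length rule for the non-trend case.
    Returns the embedding dimension M and a trend flag.
    """
    f, psd = welch(x, fs=fs, nperseg=min(len(x), 1024))
    f_max = f[np.argmax(psd)]              # dominant PSD peak
    if f_max / fs < theta:                 # ratio criterion: trend
        return len(x) // 3, True           # M = N/3
    return int(np.ceil(1.2 * fs / f_max)), False
```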
For iterations $j > 1$:
III. HYBRID GATE-BASED SINGULAR VALUE DECOMPOSITION
A. SVD quantum algorithm
B. Trajectory matrix implementation
Again, it is to be understood that every index is taken mod N. While not obeying the optimal SSD dimensions, this definition makes the best use of the space allowed by the dimensionality restrictions. Additionally, it reduces the number of unitaries $U_i$ required for the LCU in Eq. (15), compared to embedding the trajectory matrix in a large null matrix.
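A short sketch of this wrap-around construction, assuming the element definition $X_{ij} = x_{(i+j) \bmod N}$ implied above; the row and column counts are placeholders for the qubit-imposed dimensions of Eq. (15):

```python
import numpy as np

def wraparound_trajectory_matrix(x, n_rows, n_cols):
    """Trajectory matrix with every index taken mod N (sketch).

    Rows wrap around the end of the series, so the whole
    n_rows x n_cols array is populated, instead of padding a
    Hankel block with zeros.
    """
    n = len(x)
    idx = (np.arange(n_rows)[:, None] + np.arange(n_cols)[None, :]) % n
    return np.asarray(x)[idx]
```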
C. Randomized QSSD
We can establish a natural qubit cutoff number that defines a low-dimensional space onto which our trajectory matrix is projected. Because we focus on narrow frequency bands in the PSD, we expect that only a few singular values are needed to reconstruct a signal. Every dominant frequency component yields two singular values, and simulations have shown that a faithful reconstruction of the corresponding signal requires at most a number of singular values close to $2^4 = 16$. Therefore, we establish a dimensional cutoff of 16, with associated qubit cutoff q = 4. Henceforth, we shall denote the reduced trajectory matrix as X.
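The projection can be sketched with the standard Gaussian range finder of Ref. 15; the oversampling parameter and seed below are illustrative choices, not values from this work.

```python
import numpy as np

def randomized_svd(X, rank=16, oversample=8, seed=0):
    """Randomized SVD (sketch, following Ref. 15).

    Projects X onto the low-dimensional subspace spanned by X @ G for
    a Gaussian test matrix G, so the quantum routine only ever sees a
    rank-(2^q) problem, here with q = 4.
    """
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((X.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(X @ G)            # orthonormal range sketch
    B = Q.T @ X                           # small (rank+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```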
D. The parameterizable quantum circuits
E. The loss function
F. Pseudo-unitary Hadamard tests
G. Analytical gradient optimizer
H. Orthonormal matrix reconstruction
IV. PULSE-BASED IMPLEMENTATION FOR QSSD
In addition to the gate paradigm, variational quantum optimal control (VQOC)12 offers an alternative approach to searching through a Hilbert space (see Ref. 22 for a review of QOC in general). Rather than optimizing parameterized gates, we directly optimize the physical controls through which quantum operations are implemented. It is believed that for highly entangled systems, VQOC is able to outperform VQE in terms of accuracy when resources are comparable. This is a result of the higher expressibility of optimal control, which can be proven by showing that a set of randomly initialized pulses produces unitary operations that resemble the Haar distribution more closely than a set of randomly initialized gates in the hardware efficient ansatz does.23
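The essential difference to the gate-based ansatz can be sketched as follows, assuming piecewise-constant pulses and placeholder drift and control Hamiltonians; the variational parameters are now the pulse amplitudes themselves rather than gate angles.

```python
import numpy as np
from scipy.linalg import expm

def pulse_unitary(h_drift, h_controls, pulses, dt):
    """Piecewise-constant propagator U(T) of a pulse-based ansatz (sketch).

    pulses[k, j] is the amplitude of control Hamiltonian j during time
    slice k of width dt; optimizing these amplitudes plays the role of
    optimizing gate parameters in VQE.
    """
    u = np.eye(h_drift.shape[0], dtype=complex)
    for amps in pulses:                    # time-ordered product
        h = h_drift + sum(a * hc for a, hc in zip(amps, h_controls))
        u = expm(-1j * h * dt) @ u
    return u
```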
V. ANALYSES AND RESULTS
To benchmark the performance of QSSD, several simulations have been run, using a classical emulator of a noise-free quantum circuit. For efficiency purposes, we assume full access to all relevant matrix elements, without resorting to sampling amplitudes through measurements. We compare gate-based and pulse-based results to the retrieved classical SSD components for a variety of applications, though we stress that, because of the inherent inefficiency of the algorithm, we have not attempted a fair comparison between the two implementations. Drawing a fair contrast would require running both implementations on equivalent evolution circuits, where the gate-based and pulse-based circuits have equivalent resources. This amounts to a similar circuit evolution time T, equivalent controls (such as Bloch sphere rotational control only), and a number of quantum evaluations $QE_{\mathrm{VQOC}}$ that scales linearly in the quantum resources relative to the number of quantum evaluations $QE_{\mathrm{VQE}}$.
In this work, we adopted the hardware efficient ansatz (HEA) for the gate-based circuits, with depth D = 10 and iteration number it = 1000, and we assumed full control for the pulse-based circuits, in natural units, with a total evolution time T = 10, a discretized time step, and iteration number it = 150. We aim to show that, with general resources, one can approximate the function space of the SVD reasonably well on a quantum computer, without employing equivalent evolution circuits. Note that the VQOC parameters are not fine-tuned for any implementation in particular. For a more realistic implementation on a neutral atom quantum computer, the parameters should lie within experimental bounds, and the control space should be adjusted accordingly.
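For reference, a self-contained numpy sketch of an HEA unitary; the layer layout assumed here (RY and RZ rotations on each qubit followed by a linear CNOT ladder) is one common variant, since the exact gate ordering is not spelled out in this section.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def cnot(n, c, t):
    """CNOT with control c and target t on n qubits."""
    p0 = np.diag([1, 0]).astype(complex)
    p1 = np.diag([0, 1]).astype(complex)
    x = np.array([[0, 1], [1, 0]], dtype=complex)
    a = [p0 if q == c else I2 for q in range(n)]
    b = [p1 if q == c else (x if q == t else I2) for q in range(n)]
    return reduce(np.kron, a) + reduce(np.kron, b)

def hea_unitary(theta, n, depth):
    """HEA: per layer, RY and RZ on every qubit, then a CNOT ladder.
    theta has shape (depth, n, 2)."""
    u = np.eye(2 ** n, dtype=complex)
    for d in range(depth):
        layer = reduce(np.kron, [rz(theta[d, q, 1]) @ ry(theta[d, q, 0])
                                 for q in range(n)])
        u = layer @ u
        for q in range(n - 1):
            u = cnot(n, q, q + 1) @ u
    return u

# Example: D = 10 layers on 4 qubits, matching the q = 4 cutoff.
theta = np.random.uniform(0, 2 * np.pi, (10, 4, 2))
u = hea_unitary(theta, n=4, depth=10)
```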
A. Non-stationary signal
B. Local field potential signal
Next, we applied SSD to simulated local field potentials (LFPs), which represent the relatively slowly varying temporal components of the neural signal, picked up within a few hundred microns of a recording electrode. Brain waves can be separated into characteristic frequency bands called alpha [8–12 Hz], beta [13–30 Hz], gamma [30–90 Hz], and theta [4–7 Hz] bands, corresponding to the frequency range in which their dominant frequency lies.1 We simulated these LFPs by superimposing four components, one in each of the aforementioned bands.24 Because the alpha and theta components have overlapping frequency contents, we grouped them together, as SSD cannot distinguish them. In Fig. 4, we show that SSD is capable of disentangling the superimposed signal into its characteristic components within reasonable accuracy bounds. The corresponding errors are graphed in Fig. 6 for all SSD variations.
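A toy version of such a superposition can be generated in a few lines; the component frequencies, phases, and noise level below are illustrative and do not reproduce the simulation of Ref. 24.

```python
import numpy as np

fs = 1000                                   # 1 kHz sampling rate
t = np.arange(0, 2.0, 1 / fs)               # 2 s of data
rng = np.random.default_rng(0)

# One narrow-band component per band; center frequencies are illustrative.
bands = {"theta": 6.0, "alpha": 10.0, "beta": 21.0, "gamma": 55.0}
components = {name: np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
              for name, f0 in bands.items()}
lfp = sum(components.values()) + 0.1 * rng.standard_normal(t.size)
```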
C. Gravitational wave event GW150914
Black holes and gravitational waves are characteristic predictions of general relativity, and are therefore favorable probes of the theory. The waveforms of such gravitational waves are constrained by the underlying mechanisms, such as their generation and propagation. The analysis of such waves is crucial to the understanding of relativistic gravitational physics and may provide useful insights into black hole physics, the structure of neutron stars, supernovae, and the cosmic gravitational wave background. Gravitational wave analysis will likely become classically intractable in the age of Big Data and third-generation detectors such as the Einstein Telescope,25 and can greatly benefit from quantum data analysis algorithms. We applied SSD to gravitational wave analysis since it serves as an interesting addition to the classical gravitational wave data analysis pipeline. Not only is it capable of providing cleaner waveforms, it is also able to filter out glitches, lowering the risk of a spurious signal-to-noise ratio (SNR) threshold exceedance.
To showcase the applicability of SSD, we applied it to the first measured gravitational wave event GW150914, originating from a binary black hole merger consisting of a pair of black holes with inferred source masses of approximately $36\,M_\odot$ and $29\,M_\odot$.26,27 We have taken the raw data as measured by the LIGO Hanford detector around the merger time, after which we applied noise whitening, band-passing, and notching to obtain an approximate gravitational waveform.28 Through SSD, we hope to separate the relevant information about the binary merger from any other unwanted component or noise, thus obtaining a subset of SSD components that provides a cleaner waveform. The results of SSD filtering are presented in Fig. 5, as well as the respective time–frequency spectrograms. Since we are analyzing a real signal rather than a simulated one, we cannot fairly compare the quality of the components to any original benchmark, because we do not know a priori what the true gravitational wave signal should look like, given the right physical parameters. For this reason, we can only provide a figure of merit of the QSSD components with respect to the classical SSD components. The errors are given in Fig. 6, where the comparison with classical SSD is missing for GW150914 for the reason given above.
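A sketch of this conditioning chain, assuming the gwpy client and parameter choices along the lines of the public GWOSC tutorial (Ref. 28); the window lengths, band edges, and notch frequencies below are illustrative.

```python
from gwpy.timeseries import TimeSeries

# GW150914 merger GPS time; 32 s of LIGO Hanford strain around it.
GPS = 1126259462.4
strain = TimeSeries.fetch_open_data("H1", GPS - 16, GPS + 16)

# Whiten, band-pass to the sensitive band, and notch mains harmonics.
white = strain.whiten(4, 2)
bp = white.bandpass(30, 350)
for line in (60, 120, 180):        # US mains power harmonics (Hz)
    bp = bp.notch(line)
conditioned = bp.crop(GPS - 0.2, GPS + 0.1)
```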
VI. CONCLUSION AND DISCUSSION
In this work, we have established a unified framework that combines the promising method of singular spectrum decomposition for classical time series analysis with the variational quantum singular value decomposition algorithm into quantum singular spectrum decomposition (QSSD). In the NISQ era, variational algorithms are favored over other types, because their quantum constituent is more robust against errors. The singular value decomposition subroutine is designed to be performed on quantum hardware, while the retrieval of the SSD components is done classically. Using analytical gradient optimizers, we circumvent finite difference errors, and we guarantee optimization at every iteration. SSD enhances protection against errors accumulating in the quantum algorithm. QSSD could find interesting applications in near-term gravitational wave analysis because of its variational formulation. Here, we demonstrated the algorithm's ability to provide a cleaner waveform for matched filtering.
In contrast to the classical algorithm, the quantum formulation does not have efficient access to all matrix elements of U and V. Unpacking the dense trajectory matrix into its unitary basis elements and reading out all the columns of U and V are two processes that require a number of quantum circuit runs that grows exponentially in the number of qubits. An efficient quantum algorithm would rely on sampling a few parameters that are readily measured on a quantum device. To mitigate the effects of the exponential scaling laws, we have adopted randomized SVD. While this renders QSSD more economical, the same trick can be applied to the classical algorithm, so it provides no inherent quantum speedup. An efficient rendition of SSD would circumvent measuring $U$ and $V$ in full, and would instead use a different pipeline to read out the SSD components. For further research, we pose the question whether this is possible within this variational framework.
The quality of SSD is related to the quality of the singular vectors, not the singular values, as dictated by Eq. (8), since a rescaling is performed after every iteration to ensure optimal energy extraction. It is, therefore, crucial for the convergence of the quantum routine to select a circuit ansatz that can approximate the solution space well. For the gate-based approach, we have adopted the hardware efficient ansatz, and for the pulse-based approach, we have assumed full control. Quantum computing has the advantage of computing within an exponentially large Hilbert space, but the variational approach of NISQ-type algorithms gives rise to weak convergence in the absence of an effective model that leads to the required target states. Although the HEA is often invoked as a standard method for quantum state preparation, its capabilities remain poorly understood.29 Since time series data analysis lacks an effective design or model, we pose the question whether an effective model can be established for specific time series applications.
In conclusion, it remains an open problem whether a true quantum speedup can be achieved through the method of QSSD. The necessity to use dense classical matrices in a quantum algorithm does not allow for polynomial scaling laws in the number of qubits, and requires measuring exponentially many quantum amplitudes. Other methods have been proposed that claim a quantum speedup for singular value related algorithms.13,14 The application of either of these algorithms would not circumvent the core issue of having to sample exponentially many quantum amplitudes, since they provide singular vectors as quantum states that must be read out. In general, this issue affects all quantum algorithms that require the readout of a dense matrix.
At the same time, as mentioned above, several aspects of the theory can be improved upon, and these improvements are not necessarily restricted to QSSD applications. For quantum state preparation purposes, it is interesting to investigate whether one could apply quantum natural gradient (QNG) theory to the optimizer routine to improve convergence,30 or quantify the geometry of the search space to find better initial states.29 Overall, the results from this preliminary attempt to translate SSD to a quantum computing framework suggest that this is likely achievable with proper improvements to the theory and methods employed. In turn, this may pave the way to a true quantum speedup of QSSD over classical SSD. We leave the adaptation of all these ideas to future studies of QSSD and alternative approaches to time series analysis on a quantum computer.
ACKNOWLEDGEMENTS
The authors thank Robert de Keijzer, Madhav Mohan, and Menica Dibenedetto for discussions. This research is financially supported by the Dutch Ministry of Economic Affairs and Climate Policy (EZK), as part of the Quantum Delta NL programme, and by the Netherlands Organisation for Scientific Research (NWO) under Grant No. 680.92.18.05.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Jasper Postema: Formal analysis (equal); Methodology (lead); Software (equal); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). Pietro Bonizzi: Formal analysis (equal); Software (equal); Supervision (equal); Writing – review & editing (equal). Gideon Koekoek: Supervision (equal); Writing – review & editing (equal). Ronald Leonard Westra: Supervision (equal); Writing – review & editing (equal). Servaas Kokkelmans: Funding acquisition (lead); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (equal).
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon request. The GW150914 gravitational wave event data that support the findings of this study are openly available in GWOSC at https://www.gw-openscience.org/events/GW150914/, Ref. 27. Simulations were performed using QuTiP.31
APPENDIX A: PRELIMINARIES—SINGULAR SPECTRUM ANALYSIS
Suppose a time series is sampled N times at a steady rate $F_S$ to obtain the string $\mathbf{x} = (x_1, \ldots, x_N)$. Then, the SSA algorithm works as follows:
- The time series is turned into an $M \times K$ Hankel matrix, where $M$ is the embedding dimension and $K = N - M + 1$. This gives the trajectory matrix

$$
\mathbf{X} = \begin{pmatrix}
x_1 & x_2 & x_3 & \cdots & x_K \\
x_2 & x_3 & x_4 & \cdots & x_{K+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_M & x_{M+1} & x_{M+2} & \cdots & x_N
\end{pmatrix}.
$$
Such a matrix encapsulates correlations between the data points. By tuning the embedding dimension appropriately, SSA can faithfully extract sub-components. If M is too low, not enough correlation can be captured by the trajectory matrix. If M is too large, however, ghost oscillations called spurious components will be captured.
- An SVD is performed on the trajectory matrix, yielding $\mathbf{X} = U \Sigma V^{T}$, with all relevant notation defined in Eq. (11). One can further decompose a rectangular matrix $\mathbf{X}$ of rank $n$ into a sum of rank-1 matrices according to

$$
\mathbf{X} = \sum_{i=1}^{n} \sigma_i u_i v_i^{T}.
$$
The set $(\sigma_i, u_i, v_i)$ is referred to as the $i$th eigentriple for fixed $i$.
- The index set $\{1, \ldots, n\}$ is partitioned into $d$ disjoint subsets $I_1, \ldots, I_d$. Each subset $I_i$ contains a set of principal components that capture a specific component in a time series (like trend, oscillations, noise, etc.). The original trajectory matrix is then decomposed into

$$
\mathbf{X} = \mathbf{X}_{I_1} + \mathbf{X}_{I_2} + \cdots + \mathbf{X}_{I_d}, \qquad \mathbf{X}_{I_k} = \sum_{i \in I_k} \sigma_i u_i v_i^{T},
$$

where each matrix on the right-hand side represents a specific component series.
- Sub-components are reconstructed by Hankelization of each $\mathbf{X}_{I_k}$, where the matrix components along each cross-diagonal are averaged. Reverse engineering those Hankel matrices gives the embedded sub-signals (a compact sketch of the full pipeline is given below).
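The steps above condense into a short numpy sketch; the grouping of eigentriples into the sets $I_k$ is left to the caller, and the dimensions follow the $M \times K$ convention of the first step.

```python
import numpy as np

def ssa_components(x, m, groups):
    """Basic SSA (sketch): embed, SVD, group, and diagonally average.

    groups is a list of index sets I_k over the kept ranks.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = n - m + 1
    X = np.column_stack([x[j:j + m] for j in range(k)])  # M x K Hankel
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    series = []
    for idx in groups:
        Xk = sum(s[i] * np.outer(u[:, i], vt[i]) for i in idx)
        # Hankelization: average each cross-diagonal of X_{I_k}.
        comp = np.array([np.mean(Xk[::-1].diagonal(d - m + 1))
                         for d in range(n)])
        series.append(comp)
    return series

# Example: split a two-tone signal into one component per tone.
t = np.arange(500) / 100.0
sig = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
c_low, c_high = ssa_components(sig, m=60, groups=[range(2), range(2, 4)])
```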