Hybrid Quantum Singular Spectrum Decomposition for Time Series Analysis

Classical data analysis requires computational efforts that become intractable in the age of Big Data. An essential task in time series analysis is the extraction of physically meaningful information from a noisy time series. One algorithm devised for this very purpose is singular spectrum decomposition (SSD), an adaptive method that allows for the extraction of narrow-banded components from non-stationary and non-linear time series. The main computational bottleneck of this algorithm is the singular value decomposition (SVD). Quantum computing could facilitate a speedup in this domain through superior scaling laws. We propose quantum SSD by assigning the SVD subroutine to a quantum computer. The viability and performance of this hybrid algorithm on a near-term hybrid quantum computer are investigated. In this work we show that by employing randomised SVD, we can impose a qubit limit on one of the circuits to improve scalability. Using this, we efficiently perform quantum SSD on simulations of local field potentials recorded in brain tissue, as well as GW150914, the first detected gravitational wave event.


I. INTRODUCTION
A recently proposed approach to classical time series analysis is singular spectrum decomposition (SSD) [1]. SSD aims to decompose non-linear and non-stationary time series into physically meaningful components. In stark contrast to a Fourier decomposition, which employs a basis set of harmonic functions, SSD prepares an adaptive basis set of functions whose elements depend uniquely on the time series.
Applications include, but are not limited to, sleep apnea detection from an unprocessed ECG [2] and the processing and analysis of brain waves [3,4]. In this work we also cover the application to gravitational wave analysis. The latter is interesting because real-time data analysis poses a challenge, and will be vital for the next generations of gravitational wave detection technology. The main bottleneck for scaling this algorithm to larger time series is the singular value decomposition (SVD) performed in each subroutine. For general m × n matrices, the classical SVD algorithm has a computational complexity of O(mn min(m, n)) [5].
There exist clues that quantum computing may provide a computational speedup for the computationally taxing SVD subroutine. Currently, quantum computing is in the Noisy Intermediate-Scale Quantum (NISQ) era [6]. While some forms of error mitigation have been proposed, noise still limits the number of qubits and the fidelity of gate operations, so that current devices fall short of full-fledged quantum computers running quantum error correction codes [7]. Despite these shortcomings, hybrid quantum computers running variational quantum algorithms provide a useful platform for classically taxing calculations, one major example being the variational quantum eigensolver (VQE) for quantum chemistry [8][9][10]. Hybrid algorithms complement the strengths of both processors: classically efficient tasks are performed on a classical computer, while unitary calculations such as quantum state preparation are handed to a quantum computer. Because of the iterative nature of variational algorithms, noise can be mitigated at every step, increasing the robustness of the algorithm.
The SVD subroutine can be performed on a quantum computer through the variational quantum singular value decomposition (VQSVD) algorithm [11]. Besides its application to SSD, SVD forms a compelling application of hybrid quantum computing in general because of its versatile utility. In a similar fashion to VQE, one can train two parameterisable quantum circuits (PQCs) to learn the left and right singular vectors of a matrix, and sample the singular values through Hadamard tests. In this work we explore both a gate-based paradigm [9], where the parameterisation is facilitated through rotational quantum gates, and a pulse-based paradigm [12], where we optimise laser pulse profiles on a neutral atom quantum computing platform. The latter is derived from quantum optimal control (QOC) theory, and is thought to be more NISQ-efficient.
Our aim is to convert SSD into a variational hybrid quantum-classical algorithm called quantum singular spectrum decomposition (QSSD). Here, we pose the question whether quantum computing can provide an inherent computational speedup for SVD by surpassing the classical complexity. We employ randomised SVD to improve the scalability of the algorithm [13]. The classical theory behind SSD is introduced in Sec. II. The implementation of QSSD on a hybrid quantum computer is elaborated on in Sec. III, using a gate-based approach, while in Sec. IV we discuss a pulse-based approach. Simulation results for three different time series, among which the first observation of a gravitational wave, are presented in Sec. V. We conclude with a summary and a discussion of the viability of QSSD in Sec. VI.

II. SINGULAR SPECTRUM DECOMPOSITION
The SSD algorithm is based on the singular spectrum analysis (SSA) method, whose basic ideas are presented in Appendix A [1,14]. SSA is designed to explore the presence of specific patterns within a time series, while SSD is designed to provide a decomposition into its constituent components. Let {x(1), x(2), . . . , x(N)} be an N-dimensional string of numbers, sampled at a uniform sampling frequency F_S from some time series. We are interested in extracting (physically) meaningful components from the signal. By embedding the signal in a matrix, correlation functions between pairs of sample points can be extracted through a singular value decomposition of said matrix. Several types of components can be found, among which trends, oscillations and noise. In SSD, this identification is completely automated by focusing on the extraction of narrow frequency band components.
The time series is first embedded in a trajectory matrix; in contrast to SSA, the number of columns is extended to N and the time series is wrapped around, yielding

X = \begin{pmatrix} x(1) & x(2) & \cdots & x(N) \\ x(2) & x(3) & \cdots & x(1) \\ \vdots & \vdots & \ddots & \vdots \\ x(M) & x(M+1) & \cdots & x(M-1) \end{pmatrix}.
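To make the wrap-around embedding concrete, here is a minimal NumPy sketch; the helper name `trajectory_matrix` is ours, not from the paper:

```python
import numpy as np

def trajectory_matrix(x, M):
    """Wrap-around (periodic) trajectory matrix used by SSD.

    Unlike SSA, the number of columns equals the full signal length N,
    and indices past the end wrap around modulo N.
    """
    x = np.asarray(x, dtype=float)
    # Row i holds the signal cyclically shifted by i samples.
    return np.array([np.roll(x, -i) for i in range(M)])

x = np.arange(5.0)           # toy signal [0, 1, 2, 3, 4]
X = trajectory_matrix(x, 3)  # 3 x 5 wrapped embedding
```

Each row is a cyclic shift of the previous one, so the matrix has the Hankel-like structure exploited later in the reconstruction step.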
It is to be understood that every subscript is taken mod N. The singular value decomposition (SVD) of this matrix yields

X = \sum_{i=1}^{r} \sigma_i u_i v_i^{T},

where σ_i are the singular values, and u_i and v_i the left and right singular vectors. As explained above, SSD aims to find physically meaningful components g^{(j)}(n) through a data-driven, iterative approach: starting from the original signal x(n), a component is extracted at each iteration j and subtracted from the residual. For iteration j = 1: the power spectral density (PSD) of the signal is calculated, and the frequency f_max associated with the dominant peak in the PSD is estimated. If the ratio criterion f_max/F_S < δ is met, a trend is said to characterise the spectral contents of the residual; Ref. [1] sets δ = 10^{-3}. In such a case, the embedding dimension is set to M = N/3, as described in Ref. [15]. The first component g^{(1)}(n) is then extracted from Hankelisation of the matrix X_1 = σ_1 u_1 v_1^T. Since most of the energy must be contained within the trend component, only one singular value is required for its reconstruction. If the ratio criterion is not met, the window length is defined by M = 1.2 F_S/f_max [1].
For all iterations j ≥ 2, the embedding dimension is always set equal to M = 1.2 F_S/f_max. To extract meaningful components, only principal components with a dominant frequency in the band [f_max − δf, f_max + δf] are selected. To estimate the value of the peak width δf, the Levenberg-Marquardt (LM) algorithm is used. First, the PSD of the residual is approximated with a sum of three Gaussian functions,

\gamma(f; A, \sigma, \mu) = \sum_{i=1}^{3} A_i \exp\!\left(-\frac{(f-\mu_i)^2}{2\sigma_i^2}\right),

where A is the vector of amplitudes, σ is the vector of Gaussian peak widths, and µ_i are the peak locations, fixed by the dominant peak frequency f_max and the frequency f_2 at which the second most dominant peak is situated. Starting from a suitable initial state vector, the algorithm eventually converges towards optimal values A*, σ*. The peak width is then given by δf = 2.5 σ_1.
Finally, the component g^{(j)}(n) is rescaled by a factor a, so that \tilde{g}^{(j)}(n) = a g^{(j)}(n). This rescaling ensures that the variance of the residual is minimised; solving the associated minimisation yields

a = \frac{\langle g^{(j)}, v^{(j-1)} \rangle}{\langle g^{(j)}, g^{(j)} \rangle}

[1], where v^{(j−1)} is the residual of the previous iteration. The SSD iteration stops when the energy of the remainder has been reduced below a predefined threshold (default 1%) of the original signal energy, ensuring proper convergence of SSD. The reconstructed signal then reads

x(n) = \sum_{j=1}^{J} \tilde{g}^{(j)}(n) + v^{(J+1)}(n).
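The iteration rules above (trend test on the first iteration, window length M = 1.2 F_S/f_max otherwise) can be sketched as follows; `embedding_dimension` is an illustrative helper of ours, and a bare periodogram stands in for the PSD estimate:

```python
import numpy as np

def embedding_dimension(residual, Fs, delta=1e-3, j=1):
    """Window-length rules from the text: on the first iteration, a
    dominant peak with f_max/Fs < delta flags a trend and M = N/3;
    otherwise (and for all j >= 2) M = 1.2 * Fs / f_max."""
    N = len(residual)
    psd = np.abs(np.fft.rfft(residual)) ** 2
    freqs = np.fft.rfftfreq(N, d=1.0 / Fs)
    f_max = freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
    if j == 1 and f_max / Fs < delta:
        return N // 3, f_max                # trend: one singular value
    return int(round(1.2 * Fs / f_max)), f_max

Fs = 100.0
t = np.arange(0.0, 1.0, 1.0 / Fs)
M, f_max = embedding_dimension(np.sin(2 * np.pi * 5.0 * t), Fs)
```

For a pure 5 Hz tone sampled at 100 Hz, the dominant peak sits at 5 Hz and the window length becomes M = 1.2 · 100/5 = 24.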

III. HYBRID GATE-BASED SINGULAR VALUE DECOMPOSITION
A. SVD quantum algorithm

The singular value decomposition (SVD) of a general m × n matrix X ∈ C^{m×n} is given by

X = U \Sigma V^{\dagger}, (11)

where U ∈ C^{m×m} and V ∈ C^{n×n} are unitary square matrices, and Σ = diag(σ_1, σ_2, · · · , σ_r) ∈ R^{m×n} is a diagonal matrix with all r = rank(X) singular values ordered from highest to lowest on the diagonal. Through a suitable loss function, Wang et al. [11] have shown that this decomposition can be implemented on a quantum computer by training two quantum circuits to learn the unitary matrices U(α) and V(β), parameterised by two sets of trainable parameters α and β. For real matrices, U and V are orthogonal, which can be efficiently prepared on a quantum computer since SO(N) ⊂ SU(N). We focus on this particular case since data matrices are real. Through inversion of Eq. (11), one finds that the singular values can be extracted from

\sigma_i = \langle u_i | X | v_i \rangle, (12)

where ⟨u_i| is the Hermitian conjugate of the i-th column of U, and |v_i⟩ is the i-th column of V. The singular values and their associated left and right orthonormal singular vectors allow for a reconstruction of the original matrix X through

X = \sum_{i=1}^{r} \sigma_i |u_i\rangle\langle v_i|. (13)

It is desirable to keep only the first T eigentriples such that they approximate the matrix X to precision ε. Suppose that X^{(T)} = \sum_{i=1}^{T} \sigma_i |u_i\rangle\langle v_i|; then we find through the Cauchy-Schwarz inequality that the Frobenius norm of the difference is bounded by the sum of squared neglected singular values according to

\| X - X^{(T)} \|_F^2 \leq \sum_{i=T+1}^{r} \sigma_i^2. (14)

More particularly, the Eckart-Young theorem states that this approximation minimises this error norm, and is therefore the optimal rank-T matrix approximation [16].
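The truncation property can be checked numerically; this snippet assumes nothing beyond standard NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))

# Full SVD: X = U @ diag(s) @ Vt, singular values sorted descending.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the first T eigentriples. By the Eckart-Young theorem this
# is the best rank-T approximation; its squared Frobenius error equals
# the sum of the squared neglected singular values.
T = 3
X_T = U[:, :T] @ np.diag(s[:T]) @ Vt[:T, :]
err2 = np.linalg.norm(X - X_T, 'fro') ** 2
```

For the Frobenius norm the bound is in fact saturated, which is why keeping the dominant eigentriples is optimal.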

B. Trajectory matrix implementation
In order to prepare inner products of the form (12) on a quantum computer, we must perform a unitary decomposition of the trajectory matrix, coined a linear combination of unitaries (LCU):

X = \sum_{i=1}^{K} x_i e_i, (15)

for K ∈ N unitaries e_i. Because of the non-square nature of X, we delay the discussion of the implementation of this decomposition to Sec. III F. This set of unitaries is closely related to Pauli operator strings of the form σ_{i_1} ⊗ σ_{i_2} ⊗ · · · ⊗ σ_{i_q}, but different choices are possible. Throughout this paper we adopt the notation

i_j \in \{0, 1, 2, 3\} = \{0, x, y, z\}. (16)

Since m-qubit states are elements of a 2^m-dimensional Hilbert space H_m = span{|0⟩, |1⟩}^{⊗m} ≅ C^{2^m}, the dimensions of the trajectory matrix must be enforced to be equal to a power of 2. The optimal dimensions of the trajectory matrix X, as dictated by SSD, do not necessarily satisfy these constraints, so we slightly alter the trajectory matrix dimensions for a VQSVD implementation. First, the optimal embedding dimension is rounded to the nearest power of two. Then, the number of columns is increased to the next power of two through periodic continuation, where the signal is artificially extended through a wrap-around. The qubit numbers of the U(α) and V(β) circuits are then given by q_m and q_n = ⌈log_2(N)⌉ respectively, and the trajectory matrix takes the form

X = \begin{pmatrix} x(1) & x(2) & \cdots & x(2^{q_n}) \\ x(2) & x(3) & \cdots & x(2^{q_n}+1) \\ \vdots & \vdots & \ddots & \vdots \\ x(2^{q_m}) & x(2^{q_m}+1) & \cdots & x(2^{q_m}+2^{q_n}-1) \end{pmatrix}.

Again, it is to be understood that every index is mod N. While not obeying optimal SSD dimensions, this definition makes the most use of the space constrained by the dimensionality restrictions. Additionally, this definition reduces the number of unitaries required for the LCU in Eq. (15), compared to embedding the trajectory matrix in a large null matrix.
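A sketch of the dimension-padding step (the helper name is ours; M is rounded to the nearest power of two, N is padded up by periodic continuation):

```python
import numpy as np

def padded_trajectory_matrix(x, M):
    """Wrapped trajectory matrix with dimensions forced to powers of two:
    the embedding dimension M is rounded to the nearest power of two, and
    the columns are extended to the next power of two above N by periodic
    continuation (all indices taken mod N)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    q_m = int(round(np.log2(M)))      # qubits of the U(alpha) register
    q_n = int(np.ceil(np.log2(N)))    # qubits of the V(beta) register
    rows, cols = 2 ** q_m, 2 ** q_n
    return np.array([[x[(i + j) % N] for j in range(cols)]
                     for i in range(rows)])

X = padded_trajectory_matrix(np.arange(6.0), M=3)   # becomes 4 x 8
```

A length-6 signal with M = 3 is lifted to a 4 × 8 matrix, i.e. q_m = 2 and q_n = 3 qubits for the two circuits.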

C. Randomised QSSD
To obtain a more economic SVD routine, we approximate the SVD of the m × n trajectory matrix X in a low-rank fashion, such that we retain only the first k singular values:

X \approx U_k \Sigma_k V_k^{\dagger},

where Σ_k is the singular value matrix with σ_p = 0 for all p > k. The computational complexity becomes O(mnk), which is a lower complexity than the conventional O(mn min(m, n)) for k ≪ n. By performing this low-rank approximation, we hope to improve the scaling laws of the QSSD algorithm and mitigate the two major problems that persist within the theory laid out in Ref. [11]: the input problem and the output problem. The former refers to the fact that the decomposition of a matrix in a Pauli basis requires a basis set that grows exponentially in the number of qubits. The latter refers to the read-out of the quantum circuits to obtain the matrices U and V, which is also an operation whose computation time is exponential in the number of qubits.
First, we aim to find an optimal basis Q with as few columns as possible, such that X ≈ QQ^†X [17]. This optimal basis can be approximated through a randomised matrix decomposition. Multiplication of the trajectory matrix by a random matrix Ω = (ω_1, . . . , ω_k), whose columns ω_i are drawn from the standard normal distribution N(µ = 0, σ = 1), yields Y = XΩ. This allows for a random sampling of range(X). Performing a QR decomposition on Y, and trimming the Q-matrix to k columns, gives the optimal basis Q. This matrix encompasses a basis transformation that lifts X to a lower-dimensional space where the SVD is approximated up to the k-th value. Constructing this transformation gives B = Q^†X, whose SVD yields the approximate singular values and right singular vectors, while the left singular vectors are recovered by lifting back with Q. We can establish a natural qubit cut-off number that defines a low-dimensional space onto which our trajectory matrix is projected. Because we focus on narrow frequency bands in the PSD, we can predict that we do not need many singular values to reconstruct a signal. Every dominant frequency component yields 2 singular values, and simulations have shown that to obtain a faithful reconstruction of the corresponding signal, we need at most ∼10-12 singular values. Therefore, we establish a dimensional cut-off k ≤ k_Λ = 16 with associated qubit cut-off q = 4. Henceforth, we shall denote the reduced trajectory matrix as X.
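The randomised projection can be sketched in a few lines of NumPy (a Halko-style sketch; `randomized_svd` and its oversampling parameter are our illustrative choices):

```python
import numpy as np

def randomized_svd(X, k, oversample=4, seed=0):
    """Randomised SVD keeping the first k singular values.
    Cost O(m n k) rather than O(m n min(m, n)); with the cut-off
    k <= k_Lambda = 16 the lifted problem fits in q = 4 qubits."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = X @ Omega                                     # samples range(X)
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis
    Q = Q[:, :k]                                      # trim to k columns
    B = Q.T @ X                                       # k x n lifted matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, s, Vt                              # lift U back to m x k

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 3)) @ rng.standard_normal((3, 64))  # rank 3
U, s, Vt = randomized_svd(A, k=3)
```

For a matrix of exact rank 3, the rank-3 randomised decomposition reproduces it to machine precision, illustrating why a small k suffices for narrow-band components.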

D. The parameterisable quantum circuits
We prepare two quantum circuits that simultaneously learn to approximate the matrices of left (U(α)) and right (V(β)) singular vectors of the randomised trajectory matrix X. Let N and U(θ) represent the qubit number and the unitary circuit representation of one circuit, agnostic to which one we refer to. For both circuits, we initialise the qubits in the vacuum state |0⟩^{⊗N} and employ the hardware efficient ansatz

U(\theta) = \prod_{d=1}^{D} \left[ U_{\text{ent}} \cdot \bigotimes_{i=1}^{N} R_Y(\theta_{d,i}) \right],

where D represents the depth of the circuit, U_ent ∈ SU(2^N) is a general entanglement operator over at most N qubits, and R_Y(θ) = exp(−iθσ_y/2) is a rotational operator generated by the σ_y Pauli operator. We opt for this ansatz rather than a set of Euler rotations around the Bloch sphere, since this approach generates real amplitudes, which are sufficient for the construction of orthonormal matrices, while simultaneously requiring a lower-dimensional parameter space. We choose a CNOT chain as our entanglement operation:

U_{\text{ent}} = \prod_{i=1}^{N-1} \text{CNOT}_{i, i+1}.
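A dense-matrix sketch of this ansatz (helper names and big-endian qubit ordering are our assumptions) shows that the generated unitaries are indeed real orthogonal:

```python
import numpy as np

def ry(theta):
    """Real-amplitude rotation generated by the Pauli-Y operator."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def cnot_chain(n):
    """Entangling layer: a chain of CNOTs on neighbouring qubits."""
    dim = 2 ** n
    U = np.eye(dim)
    for ctrl in range(n - 1):
        C = np.eye(dim)
        for b in range(dim):
            if (b >> (n - 1 - ctrl)) & 1:            # control qubit set
                flipped = b ^ (1 << (n - 2 - ctrl))  # flip the target
                C[b, b], C[b, flipped] = 0.0, 1.0
        U = C @ U
    return U

def hea(thetas, n, depth):
    """Hardware efficient ansatz: depth layers of single-qubit R_Y
    rotations followed by a CNOT chain; thetas has shape (depth, n)."""
    U = np.eye(2 ** n)
    for d in range(depth):
        layer = np.array([[1.0]])
        for q in range(n):
            layer = np.kron(layer, ry(thetas[d, q]))
        U = cnot_chain(n) @ layer @ U
    return U
```

Because R_Y and CNOT are real matrices, every generated circuit lies in SO(2^N), exactly the property needed for real orthonormal singular-vector matrices.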

E. The loss function
Knowing the columns of U and V, the maximum singular value is retrieved from

\sigma_1 = \max_{|u\rangle \in S_m,\ |v\rangle \in S_n} \text{Re}\,\langle u | X | v \rangle,

where S_i, i ∈ {m, n}, is the set of normalised vectors of length m and n respectively. Other singular values are retrieved in a similar fashion, where the vectors |u⟩ and |v⟩ are restricted to be orthogonal to the previous ones. This yields

\sigma_k = \max \left\{ \text{Re}\,\langle u | X | v \rangle : |u\rangle \perp u_1, \ldots, u_{k-1},\ |v\rangle \perp v_1, \ldots, v_{k-1} \right\}.

Through the Ky Fan theorem [18], we arrive at a suitable loss function for VQSVD, given by a linear combination of the measured singular values:

L(\alpha, \beta) = \sum_{j=1}^{T} w_j\, \text{Re}\,\langle \psi_j | U^{\dagger}(\alpha)\, X\, V(\beta) | \psi_j \rangle, (29)

where {|ψ_j⟩}_{j=1}^{T} are orthonormal vectors that select the right columns from U and V, respectively. Here, Wang et al.'s linear parameterisation w_j = T + 1 − j is adopted [11]. Naturally, optimisation aims to find optimal α*, β* that maximise L. The terms Re⟨ψ_j|U^†XV|ψ_j⟩ can be measured on a quantum computer through a Hadamard test, if the matrix X is decomposed into a basis of unitary operators {e_i}_i such that X = Σ_i x_i e_i.
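With exact singular vectors, the loss reduces to the weighted sum of singular values, which the following sketch verifies (`vqsvd_loss` is our illustrative name; on hardware the inner products would come from Hadamard tests rather than matrix products):

```python
import numpy as np

def vqsvd_loss(U, V, X, T):
    """Ky Fan-type VQSVD loss: weighted sum of measured singular values
    Re<u_j|X|v_j> with linear weights w_j = T + 1 - j (1-indexed).
    Returned negated so that a minimiser maximises the weighted sum."""
    total = 0.0
    for j in range(T):                      # j = 0 corresponds to w = T
        w = T - j
        total += w * np.real(U[:, j].conj() @ X @ V[:, j])
    return -total

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
Us, s, Vt = np.linalg.svd(X)
val = vqsvd_loss(Us, Vt.T, X, 2)   # exact singular vectors plugged in
```

At the optimum the loss equals −(T σ_1 + (T−1) σ_2 + · · · + σ_T), which the Ky Fan theorem guarantees is the global extremum over orthonormal column sets.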

F. Pseudo-unitary Hadamard Tests
Quantum computing usually deals with square matrices, since these are the natural dimensions of operators in a Hilbert space. Decomposition of the trajectory matrix, however, requires a decomposition in a non-square basis. We can construct one if we relax the condition of unitarity to the one-sided condition

e\, e^{\dagger} = I_{2^{q_m}},

so that we find a non-square pseudo-unitary basis decomposition. Because we adopted new optimal dimensions for the trajectory matrix to conform to quantum computing, we can construct a pseudo-Pauli basis

e_i = ( O, \ldots, O, P_i, O, \ldots, O ),

where P_i runs over all possible Pauli operators of string length q_m, placed in one of the 2^{q_n − q_m} blocks, and O is the 2^{q_m} × 2^{q_m} null matrix. It is readily shown that every e_i is pseudo-unitary and that together they form a complete basis for any 2^{q_m} × 2^{q_n} trajectory matrix, so that we can decompose X according to

X = \sum_i x_i e_i,

where the expansion coefficients read x_i = Tr(e_i^† X)/2^{q_m}. Implementation of these basis elements in a Hadamard test still requires square matrices. We employ pseudo-unitary continuation to implement these elements by extending them to square matrices that have a simple tensor decomposition in a square Pauli basis, a protocol which is provided in Appendix B. Because the pseudo-Hadamard tests employ matrices U and V of different size, operating on different numbers of qubits, we must alter the controlled circuit subroutine of the Hadamard test as well. This is displayed in Fig. 2: U must be extended to I_{2^{q_n − q_m}} ⊗ U, because it only acts on the last q_m qubits.

G. Analytical Gradient Optimiser
For the classical optimisation routine, we employ an optimiser based on analytical gradients with adaptive Armijo conditions. Such optimisers circumvent finite difference inaccuracies by directly sampling the gradient of the loss function at a point in parameter space through a parameter shift [19]. The gradient ∇L is given entry-wise by

(\nabla L)_{\alpha_\mu} = \tfrac{1}{2}\left[ L(\alpha + \tfrac{\pi}{2} e_\mu, \beta) - L(\alpha - \tfrac{\pi}{2} e_\mu, \beta) \right], \qquad (\nabla L)_{\beta_\nu} = \tfrac{1}{2}\left[ L(\alpha, \beta + \tfrac{\pi}{2} e_\nu) - L(\alpha, \beta - \tfrac{\pi}{2} e_\nu) \right],

where e_µ, e_ν are unit vectors along the α_µ/β_ν directions in parameter space. A brief proof is provided in Appendix C. At every iteration k, we update the variational parameters along the gradient. The step size η ∈ (0, 1) is determined at every iteration and adapts to the loss function evaluation as well as past iterations. Generally, the parameter update reads

\theta_{k+1} = \theta_k - \eta\, \nabla L(\theta_k).

At every iteration, we check the first Wolfe (sufficient decrease) condition,

L(\theta_k - \eta \nabla L) \leq L(\theta_k) - \mu\, \eta\, \| \nabla L \|^2,

for some control parameter µ ∈ (0, 1). If the condition is satisfied, we keep the step size equal to η; otherwise the step size is halved and the condition is checked again. Additionally, we introduce a tracker τ that can rescale the step size depending on past steps. Initially set to 0, we update the tracker through τ → τ + 1 if the Wolfe condition is met; otherwise, we set τ → τ − 1.
If the tracker hits a threshold ±τ_max, it is reset to 0, and the step size is doubled or halved, respectively. Throughout our work, we adopt an initial step size of η_0 = 10^{−6}, a control parameter µ = 10^{−4} and a threshold of τ_max = 3.
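A classical sketch of this step-size rule (the function name and loop structure are our reading of the text, not the authors' code):

```python
import numpy as np

def armijo_descent(loss, grad, theta0, eta0=1e-6, mu=1e-4, tau_max=3, iters=200):
    """Gradient descent with the adaptive Armijo rule described above:
    halve eta until the first Wolfe (sufficient decrease) condition holds,
    and keep a tracker tau that doubles or halves eta once |tau| = tau_max."""
    theta = np.asarray(theta0, dtype=float)
    eta, tau = eta0, 0
    for _ in range(iters):
        g = grad(theta)
        # Was the Wolfe condition met at the current step size?
        ok = loss(theta - eta * g) <= loss(theta) - mu * eta * (g @ g)
        while loss(theta - eta * g) > loss(theta) - mu * eta * (g @ g):
            eta /= 2  # shrink until sufficient decrease holds
        theta = theta - eta * g
        tau = tau + 1 if ok else tau - 1
        if abs(tau) >= tau_max:
            eta = 2 * eta if tau > 0 else eta / 2
            tau = 0
    return theta

theta = armijo_descent(lambda x: x @ x, lambda x: 2 * x, [1.0, -2.0])
```

On a simple quadratic loss the tracker repeatedly doubles the initially tiny step size, so convergence does not stall at η_0 = 10⁻⁶.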

H. Orthonormal Matrix Reconstruction
In order to reconstruct Hankel matrices from Σ_i σ_i |u_i⟩⟨v_i|, we must perform qubit state tomography on both circuits. Each column of the circuit matrix representation can be read out by sending the orthonormal set {|ψ_i⟩}_i through the circuits and sampling all Pauli moments. Such an approach is known to require a number of samplings exponential in the number of qubits. Moreover, matrix elements of the form (12) are not protected by global phase invariance: whereas the expectation value ⟨ψ|H|ψ⟩ of a Hermitian operator H is invariant under |ψ⟩ → e^{iϕ}|ψ⟩ for arbitrary phases ϕ, the bilinear forms ⟨u_i|X|v_i⟩ are not.
The columns of U and V allow relative phase differences of kπ for k ∈ Z, so that VQSVD finds local maxima where the singular values are measured as σ̃_i = ±σ_i. If the i-th singular value is measured to be negative, its sign needs to be flipped, and the i-th column of U gains an additional minus sign as well.
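The sign-fixing step is a one-liner in classical post-processing (the helper name is illustrative):

```python
import numpy as np

def fix_signs(U, sigmas):
    """Flip negative measured singular values together with the matching
    column of U, removing the residual k*pi relative phase freedom."""
    sigmas = np.asarray(sigmas, dtype=float).copy()
    U = np.asarray(U, dtype=float).copy()
    neg = sigmas < 0
    sigmas[neg] *= -1.0
    U[:, neg] *= -1.0
    return U, sigmas

U2, s2 = fix_signs(np.eye(2), [-3.0, 2.0])
```

Flipping both the value and the column leaves the product σ_i |u_i⟩⟨v_i| unchanged, so the reconstructed matrix is unaffected.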

IV. PULSE-BASED IMPLEMENTATION FOR QSSD
Besides the gate paradigm, variational quantum optimal control (VQOC) [12] offers an alternative approach to searching through a Hilbert space H (see Ref. [20] for a review of QOC in general). Rather than optimising parameterised gates, we directly optimise the physical controls through which quantum operations are implemented. It is believed that for highly entangled systems, VQOC is able to outperform VQE in terms of accuracy when resources are comparable. This is a result of the higher expressibility of optimal control, which one can demonstrate by showing that a set of randomly initialised pulses produces unitary operations that resemble the Haar distribution more closely than a set of randomly initialised gates in the hardware efficient ansatz [21].
In this paper, we consider application to a neutral atom quantum computer realised on a lattice of trapped Rydberg atoms, whose electronic state manifold offers an embedding of the qubit states |0⟩ and |1⟩. Consider two linear arrays of respectively q_m and q_n atoms, treated as two-level systems in which the |0⟩-state is mapped to a hyperfine electronic ground state and the |1⟩-state is mapped to a Rydberg state. Qubit states are addressed by monochromatic laser pulses, which we aim to optimise in order to approximate the target unitaries through propagators U(t) ∈ L(H_{q_m}) and V(t) ∈ L(H_{q_n}). For a total evolution time T and t ∈ (0, T), the circuit unitaries are subject to the propagator Schrödinger equations (ℏ = 1)

i\, \partial_t U(t) = H_U(t)\, U(t), (42)
i\, \partial_t V(t) = H_V(t)\, V(t), (43)

where H_i, i ∈ {U, V}, are the total Hamiltonians that drive the evolution of the qubit states. On a neutral atom quantum computing platform, the native qubit Hamiltonian reads

H = H_d + H_c[Z(t)],

where H_d is the drift Hamiltonian, which represents 'always-on' interactions in the system, and H_c[Z(t)] is the control Hamiltonian, parameterised by the controls Z(t) we impose on our system. For a linear array of N qubits and L ∈ N controls, the drift is the van der Waals interaction

H_d = \sum_{j=1}^{N-1} \frac{C_6}{R^6}\, n_j\, n_{j+1}, \qquad n_j = |1\rangle\langle 1|_j,

and the control Hamiltonian reads

H_c[Z(t)] = \sum_{j=1}^{N} \sum_{l=1}^{L} z_{j,l}(t)\, Q_{j,l}.

Here, C_6 characterises the Rydberg-Rydberg interaction strength, R is the interatomic distance between neighbouring qubits, and the Q_l are control operators associated with our control parameters. In this work, we control the Rabi frequency and laser phase, |Ω(t)|e^{iϕ(t)}, and the detuning ∆(t) of the |1⟩-state, so that the controls are given by

z_j(t) = \left( \Omega_j(t),\, \Omega_j^*(t),\, \Delta_j(t) \right), \qquad Q_j = \left( \tfrac{1}{2}|1\rangle\langle 0|_j,\, \tfrac{1}{2}|0\rangle\langle 1|_j,\, -n_j \right).

These controls parameterise both circuits by modelling pulses as a set of T/δt ∈ N time-discretised, piecewise-constant control pulses of duration δt; the physical implementation results from smoothing out the equidistant pulses at a minor loss of fidelity. We aim to optimise a new loss function, obtained by adjusting (29) to

L_\lambda = L + \lambda \int_0^T \left( \| Z_U(t) \|^2 + \| Z_V(t) \|^2 \right) dt,

which penalises strong controls for λ > 0.
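A minimal sketch of piecewise-constant pulse propagation under such a Hamiltonian (global controls, two qubits, and natural units C_6 = R = 1 are simplifying assumptions of ours):

```python
import numpy as np

def expmi(H, dt):
    """exp(-i H dt) for a Hermitian H via eigendecomposition (hbar = 1)."""
    lam, W = np.linalg.eigh(H)
    return (W * np.exp(-1j * lam * dt)) @ W.conj().T

def rydberg_propagator(omega, phi, delta, dt=0.1, C6=1.0, R=1.0, n=2):
    """Piecewise-constant propagation for a linear array of n Rydberg
    qubits: drift = nearest-neighbour van der Waals term C6/R^6 n_j n_{j+1},
    controls = (global, for brevity) Rabi amplitude, laser phase, detuning."""
    num = np.diag([0.0, 1.0])                 # |1><1| projector
    sp = np.array([[0, 0], [1, 0]], complex)  # |1><0|
    def op(single, site):
        out = np.array([[1.0 + 0j]])
        for k in range(n):
            out = np.kron(out, single if k == site else np.eye(2))
        return out
    Hd = sum(C6 / R**6 * op(num, j) @ op(num, j + 1) for j in range(n - 1))
    U = np.eye(2 ** n, dtype=complex)
    for om, ph, de in zip(omega, phi, delta):
        Hc = sum(0.5 * om * (np.exp(1j * ph) * op(sp, j)
                             + np.exp(-1j * ph) * op(sp, j).conj().T)
                 - de * op(num, j) for j in range(n))
        U = expmi(Hd + Hc, dt) @ U            # one piecewise-constant slice
    return U

U = rydberg_propagator(omega=[1.0] * 10, phi=[0.0] * 10, delta=[0.5] * 10)
```

Each list entry is one δt-slice of the controls; VQOC would adjust these entries via the gradient updates discussed below, while the product of slice propagators always remains unitary.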
We enforce the dynamical constraints by adding time-dependent Lagrange multiplier terms to L_QOC:

L_{\text{QOC}} = L_\lambda + \langle \eta_U,\, i\,\partial_t U - H_U U \rangle_{\Omega_{q_m}} + \langle \eta_V,\, i\,\partial_t V - H_V V \rangle_{\Omega_{q_n}},

where the multipliers η_U, η_V are called adjoints, the inner product is the L² inner product

\langle A, B \rangle_{\Omega_N} = \int_0^T \text{Re}\, \text{Tr}\!\left[ A^{\dagger}(t) B(t) \right] dt,

and Ω_N = L²([0, T], H_N). To ensure a realistic implementation, we restrict our controls to the space Z_ad of admissible pulses (52). In Appendix D, we show that it is possible to optimise this Lagrangian under these constraints by sampling the necessary quantities on a quantum computer, where we have omitted the proof of existence and uniqueness of the solutions. Our update scheme at iteration k becomes

Z_U^{k+1} = Z_U^k - \eta\, \nabla_{Z_U} L_{\text{QOC}} (53)

for the U-controls, and similarly for the V-controls. For optimisation, we adopt the same Armijo optimiser protocol, checking the first Wolfe condition at every step. In Appendix D we also explicitly solve for the adjoints. In conclusion, the VQOC paradigm requires solving the propagator Schrödinger equations (42), (43), the adjoint equations (D14) and (D15), and the analytical gradient (53).

V. ANALYSES AND RESULTS
To benchmark the performance of QSSD, several simulations have been run using a classical emulator of a noise-free quantum circuit. For efficiency purposes we assume full access to all relevant matrix elements, without resorting to sampling amplitudes through measurements. We compare gate-based and pulse-based results to the retrieved classical SSD components for a variety of applications, though we stress that, because of the inherent inefficiency of the algorithm, we have not attempted a fair comparison between the two implementations. To draw a fair contrast, both implementations would need to be run on equivalent evolution circuits with equivalent resources: a similar circuit evolution time T, equivalent controls (such as Bloch sphere rotational control only), and a number of quantum evaluations #QE_VQOC that scales linearly in the quantum resources compared to #QE_VQE.
In this work, we adopted the hardware efficient ansatz for the gate-based circuits, with depth D = 10 and 1000 iterations, and we assumed full control for the pulse-based circuits, in natural units where ℏ = 1 and C_6 = 1, for a total evolution time T = 10, a step size discretisation T/δt = 100 and 150 iterations. We aim to show that with general resources, one can approximate the function space of the SVD reasonably well on a quantum computer, without employing equivalent evolution circuits. Note that the VQOC parameters are not fine-tuned for any implementation in particular. For a more realistic implementation on a neutral atom quantum computer, the parameters should lie within experimental bounds, and the space Z_ad should be adjusted accordingly.
We first apply QSSD to a simple toy example to showcase that properties of the SSD algorithm, such as pinpointing non-stationarity of a signal, carry over to the quantum algorithm. Then we apply QSSD to simulated cortical local field potentials (LFPs), electric potentials originating from the extracellular space in brain tissue, as well as GW150914 data, resulting from the first observation of a gravitational wave event. As a figure of merit we introduce the error measure

\epsilon = \frac{1}{|\Lambda_\omega|} \int_{\Lambda_\omega} \left| p(\omega) - p_{\text{SSD}}(\omega) \right| d\omega,

where p(ω) is the PSD of the simulated individual band components, and p_SSD(ω) is the PSD of the SSD-recovered component. Λ_ω is the relevant frequency band of the PSD over which we integrate, and |Λ_ω| is its measure. This error focuses more closely on the frequency content of the retrieved components than an l²-norm would, and is not cumulative over time due to small displacements.
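One plausible discretisation of this figure of merit (the exact integrand is not reproduced in this excerpt, so the absolute-difference form and a uniform frequency grid are assumptions):

```python
import numpy as np

def psd_band_error(p_true, p_ssd, freqs, band):
    """Average absolute PSD mismatch over the band Lambda_omega,
    normalised by the band's measure (uniform frequency grid assumed)."""
    lo, hi = band
    df = freqs[1] - freqs[0]
    mask = (freqs >= lo) & (freqs <= hi)
    measure = mask.sum() * df               # |Lambda_omega|
    return np.sum(np.abs(p_true[mask] - p_ssd[mask])) * df / measure

freqs = np.linspace(0.0, 10.0, 101)
p = np.exp(-(freqs - 3.0) ** 2)             # a mock narrow-band PSD
```

Identical PSDs score zero, and a constant offset of 1 in the band scores exactly 1, so the measure is insensitive to content outside Λ_ω by construction.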

A. Non-stationary signal
As a first proof of principle, we applied SSD to a non-stationary toy signal composed of two harmonic functions,

x(t) = \sin(2\pi f_1 t) + \theta(t - t_0)\, \sin(2\pi f_2 t),

where θ(·) is the Heaviside step function and t_0 marks the onset of the second component, for t ∈ [0, 1]. The classical SSD and QSSD results are presented and compared in Fig. 3. The figure shows a clear separation of the harmonic functions, while also pinpointing the moment when the non-stationary harmonic component begins. In Fig. 6, the error is graphed for both components, for all SSD variations.
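A stand-in for the toy signal can be generated as follows; the frequencies `f1`, `f2`, the onset `t0` and the sampling rate are hypothetical values, as the paper's specific choices are not given in this excerpt:

```python
import numpy as np

# Hypothetical parameter choices, for illustration only.
Fs, f1, f2, t0 = 1000.0, 10.0, 30.0, 0.5
t = np.arange(0.0, 1.0, 1.0 / Fs)
step = (t >= t0).astype(float)                     # Heaviside theta(t - t0)
x = np.sin(2 * np.pi * f1 * t) + step * np.sin(2 * np.pi * f2 * t)
```

Before t_0 the signal is a pure f_1 tone; afterwards the f_2 component switches on, which is exactly the non-stationarity SSD should localise.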

C. Gravitational wave event GW150914
Black holes and gravitational waves are characteristic predictions of general relativity, and are therefore favourable probes of the theory. The waveforms of gravitational waves are constrained by the underlying mechanisms of their generation and propagation. The analysis of such waves is crucial to the understanding of relativistic gravitational physics, and may provide useful insights into black hole physics, the structure of neutron stars, supernovae and the cosmic gravitational wave background. Gravitational wave analysis will likely become classically intractable in the age of Big Data and third-generation detectors such as the Einstein Telescope [23], and can greatly benefit from quantum data analysis algorithms. We applied SSD to gravitational wave analysis since it serves as an interesting addition to the classical gravitational wave data analysis pipeline. Not only is it capable of providing waveforms of a crisper quality, it is also able to filter out glitches, lowering the risk of a wrong signal-to-noise ratio (SNR) threshold exceedance.
To showcase the applicability of SSD, we applied it to the first measured gravitational wave event GW150914, originating from a binary black hole merger consisting of a pair of black holes with inferred source masses M_1 = 29.1^{+3.7}_{−4.4} M_⊙ and M_2 = 36.2^{+5.2}_{−3.8} M_⊙ [24,25]. We have taken the raw data as measured by the LIGO Hanford detector around the merger time, after which we applied noise whitening, band-passing and notching to obtain an approximate gravitational waveform [26]. Through SSD we hope to separate the relevant information about the binary merger from any other unwanted component or noise, thus obtaining a subset of SSD components that provides a crisper waveform. The results of SSD filtering are presented in Fig. 5, as well as the respective time-frequency spectrograms. Since we are analysing a real signal rather than a simulated one, we cannot fairly compare the quality of the components to any original benchmark, because we do not know a priori what the true gravitational wave signal should look like, given the right physical parameters. For this reason, we can only provide a figure of merit of the QSSD components with respect to the classical SSD components. The errors are given in Fig. 6, where the comparison with classical SSD is missing for GW150914 because of the aforementioned argument.

VI. CONCLUSION AND DISCUSSION
In this work we have established a unified framework that combines the promising method of singular spectrum decomposition for classical time series analysis with the variational quantum singular value decomposition algorithm into quantum singular spectrum decomposition (QSSD). The singular value decomposition subroutine is designed to be performed on quantum hardware, while the retrieval of the SSD components is done classically. Using analytical gradient optimisers, we circumvent finite difference errors and guarantee a controlled update at every iteration. QSSD could find interesting applications in near-term gravitational wave analysis because of its variational formulation. Here, we demonstrated the algorithm's ability to provide a crisper waveform for matched filtering.
QSSD translates a classical algorithm into a hybrid quantum-classical framework. In contrast to the classical algorithm, the quantum formulation does not have efficient access to all matrix elements of U and V. Unpacking the dense trajectory matrix into its unitary basis elements and reading out all columns of U and V are two processes that require a number of quantum circuit runs exponential in the number of qubits q_m. An efficient quantum algorithm would rely on knowing only a few parameters that are readily measured on a quantum device, while employing few resources. To mitigate the effects of the exponential scaling laws, we have adopted randomised SVD. While rendering QSSD more economic, the same trick can be applied to the classical algorithm, so it provides no inherent quantum speedup. An efficient rendition of SSD would circumvent measuring |u_i⟩ and |v_i⟩, and instead use a different pipeline to read out SSD components. For further research, we pose the question whether this is possible within this variational framework.
The quality of SSD is related to the quality of the singular vectors rather than the singular values, as dictated by Eq. (8), since a rescaling is performed after every iteration to ensure optimal energy extraction. It is therefore crucial for the convergence of the quantum routine to select a circuit ansatz that can approximate the solution space well. For the gate-based approach we have adopted the hardware efficient ansatz (HEA), and for the pulse-based approach we have assumed full control. Despite the HEA being often invoked as a standard method for quantum state preparation, its capabilities are not well understood. Since data analysis lacks an effective design or model, we pose the question whether an effective model can be established for specific time series applications, and whether certain ansätze, either gate-based or pulse-based, can improve convergence in finding the orthonormal matrices.
In conclusion, it remains an open problem whether a true quantum speedup can be achieved through QSSD. The variational formulation of this quantum algorithm does not allow for polynomial scaling laws in the number of qubits, and requires measuring exponentially many quantum amplitudes. At the same time, as mentioned above, several aspects of the theory can be improved upon, and these improvements are not necessarily restricted to QSSD applications. For quantum state preparation purposes, it is interesting to investigate whether quantum natural gradient (QNG) theory could be applied to the optimiser routine to improve convergence [27], or whether the geometry of the search space can be quantified to find better initial states [28]. Overall, the results of this preliminary attempt to translate SSD to a quantum computing framework suggest that this is likely achievable with proper improvements in the theory and methods employed. In turn, this may pave the way to a true quantum speedup of QSSD over classical SSD. We leave the adaptation of these ideas to future studies of QSSD and alternative approaches to time series analysis on a quantum computer.

. (C3)
Because the entanglement operations are unparameterised, we can relate this derivative to a shift in the α_µ parameter. In a similar spirit, we find the analogous identity. This completes the proof.
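The derivative-to-shift relation invoked above is the standard parameter-shift rule for rotation gates. The following sketch verifies it numerically for the simplest case, a single Ry rotation measured in the Z basis, where the expectation is f(θ) = cos θ and the shifted-circuit combination reproduces its exact derivative; the single-qubit setup is illustrative, not the paper's circuit.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def f(theta):
    """Expectation <0| Ry(θ)† Z Ry(θ) |0> = cos θ."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

theta = 0.7
# Parameter-shift rule: df/dθ = [f(θ + π/2) - f(θ - π/2)] / 2,
# i.e. the gradient is obtained from two shifted circuit evaluations.
shift = (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2
assert np.isclose(shift, -np.sin(theta))  # exact derivative of cos θ
```

Because the shifted circuits are the same ansatz evaluated at displaced parameters, no additional gates are required to estimate the gradient on hardware.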
Appendix D: Quantum Optimal Control for SVD

In this Appendix, we show that QOC finds a suitable application in performing SVD on a quantum computer, by demonstrating that the quantities we must measure for the optimisation can be prepared on a quantum computer. Additionally, we formally lay out some of the QOC theory of Ref. [12] and adapt it to describe an SVD application. The Lagrangian L_QOC for SVD applications is built from three terms, where U and V are the unitary representations of the pulses on circuits U and V respectively, Z_U and Z_V represent the controls over the circuits, and η_U and η_V are the adjoints. The first term is the loss function we want to minimise regardless of optimal control, regularised by a parameter λ, which we fix at λ = 10⁻⁴. The two remaining terms are Lagrange multipliers that constrain the unitaries generated by the pulses to satisfy the Schrödinger equation. Setting the variations of L_QOC to zero yields the stationarity conditions, and the derivatives with respect to the pulse controls yield the gradient expressions (D16) used to update the pulses.
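The two ingredients of this construction can be sketched numerically: a piecewise-constant pulse generates a unitary by solving the Schrödinger equation step by step, and the pulse energy enters the loss through the λ-weighted penalty with λ = 10⁻⁴ as fixed in the text. The drift/control Hamiltonians and the single-qubit setting here are illustrative assumptions, not the paper's model.

```python
import numpy as np

LAMBDA = 1e-4  # regularisation strength fixed in the text

X_PAULI = np.array([[0, 1], [1, 0]], dtype=complex)
Z_PAULI = np.diag([1.0, -1.0]).astype(complex)

def expm_herm(H):
    """exp(-i H) for a Hermitian H via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def pulse_unitary(z, dt, drift=Z_PAULI, control=X_PAULI):
    """Piecewise-constant pulse z(t): U = Π_k exp(-i (H_drift + z_k H_ctrl) dt),
    i.e. the Schrödinger-equation constraint integrated step by step."""
    U = np.eye(2, dtype=complex)
    for zk in z:
        U = expm_herm((drift + zk * control) * dt) @ U
    return U

def regularised_loss(base_loss, z, dt):
    """Add the λ ∫ |Z(t)|² dt pulse-energy penalty (Riemann-sum discretisation)."""
    return base_loss + LAMBDA * dt * np.sum(np.abs(z) ** 2)

z = np.linspace(0.0, 1.0, 50)   # toy control amplitudes
U = pulse_unitary(z, dt=0.02)
assert np.allclose(U.conj().T @ U, np.eye(2))  # pulses generate a unitary
```

The penalty keeps the optimiser from driving the control amplitudes to unphysically large values while leaving the base SVD loss dominant.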

Figure 1: Flowchart representation of the hybrid VQSVD algorithm. As an input, we choose the trajectory matrix X, decomposed into a non-square basis provided in Sec. III F. Two parameterisable quantum circuits U(α) and V(β) are initiated, and subsequently trained to learn the sets of left and right singular vectors (blue block). To achieve this, a hybrid loop is initiated (orange block): on a quantum processing unit, Hadamard tests approximate the singular values, while a classical loss-function optimisation routine updates the variational parameters. Once the optimisation procedure has hit a threshold convergence condition, all relevant singular vectors are read out by sending computational basis states through U(α) and V(β) to extract all columns, and all estimated singular values σ_j are given by the final set of Hadamard tests. In Sec. IV, we employ a pulse-based approach to approximate the left and right singular matrices (blue block).
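The hybrid loop in the flowchart optimises a weighted overlap between the circuits and the trajectory matrix. A minimal classical sketch of that loss is given below: at the optimum, the columns of U(α) and V(β) align with the singular vectors, and the diagonal entries of U†XV are the singular values. The weight vector q_j and the 4×4 toy matrix are assumptions for illustration; the exact SVD stands in for trained circuits.

```python
import numpy as np

def vqsvd_loss(U, V, X, weights):
    """Weighted VQSVD-style loss: minimise -Σ_j q_j Re <e_j| U† X V |e_j>.
    With distinct positive weights, the optimum orders the singular values."""
    M = U.conj().T @ X @ V
    return -np.real(np.sum(weights * np.diag(M)))

# Toy check: plug in the exact SVD in place of trained circuits U(α), V(β).
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
U, s, Vh = np.linalg.svd(X)
w = np.arange(4, 0, -1)  # q_1 > q_2 > ... enforces a descending order
loss = vqsvd_loss(U, Vh.conj().T, X, w)
assert np.isclose(loss, -np.sum(w * s))  # diagonal of U† X V is the spectrum
```

On hardware, each diagonal entry of U†XV is estimated with a Hadamard test rather than computed by matrix multiplication.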

Figure 2: The circuit to perform a Hadamard test as given in Ref. [11] (a), versus the Hadamard test circuit we employed for non-square matrix decompositions (b). H represents the Hadamard gate, and the number of qubits in the bottom register equals q_n. e_j^cont is the pseudo-unitary continuation of the basis element e_j, as given in Appendix B.
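The Hadamard test in the figure estimates the real part of a matrix element from ancilla statistics: with the ancilla prepared in |+⟩ controlling U, a second Hadamard maps ⟨Z⟩ on the ancilla to Re⟨ψ|U|ψ⟩. The following statevector simulation verifies that identity; it models the generic square-matrix version (a), not the pseudo-unitary continuation of (b).

```python
import numpy as np

def hadamard_test_re(U, psi):
    """Statevector simulation of the Hadamard test:
    after H, controlled-U, H, the ancilla bias P(0) - P(1) = Re<ψ|U|ψ>."""
    top = psi / np.sqrt(2)          # ancilla |0> branch: ψ
    bot = (U @ psi) / np.sqrt(2)    # ancilla |1> branch: Uψ
    p0 = np.linalg.norm((top + bot) / np.sqrt(2)) ** 2
    p1 = np.linalg.norm((top - bot) / np.sqrt(2)) ** 2
    return p0 - p1

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)                      # a random unitary
psi = np.zeros(4, complex); psi[0] = 1.0    # a computational basis state
est = hadamard_test_re(Q, psi)
assert np.isclose(est, np.real(psi.conj() @ Q @ psi))
```

Sending in computational basis states, as in the QSSD readout step, makes the estimated quantity a single matrix element of U.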

Figure 3: An example of a non-stationary function s(t) comprised of two sinusoidal components, in arbitrary units (a.u.), for t ∈ [0, 1]. The two original components are shown in (a), and the SSD components found using classical SSD (b), gate-based QSSD (c) and pulse-based QSSD (d). A sampling frequency F_S = 256 Hz was chosen. The non-stationary character of the function is picked up by the algorithm in all variations of SSD.
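A test signal of this kind is easy to reproduce: sample two sinusoidal components on t ∈ [0, 1] at F_S = 256 Hz and sum them. The specific frequencies and amplitudes below are illustrative stand-ins (the caption does not specify them); a drifting-frequency tone makes the sum non-stationary.

```python
import numpy as np

FS = 256                        # sampling frequency from the caption
t = np.arange(0, 1, 1 / FS)     # t ∈ [0, 1), 256 samples

# Illustrative components, not the paper's exact ones:
c1 = np.sin(2 * np.pi * (5 + 10 * t) * t)   # chirp, frequency drifting upward
c2 = 0.5 * np.sin(2 * np.pi * 40 * t)       # stationary 40 Hz tone
s = c1 + c2                                  # signal handed to (Q)SSD
assert s.shape == (FS,)
```

The decomposition's task is then to recover c1 and c2 from s without being told either component.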

Figure 4: (a) The simulated LFP data, where the potential field strength (in arbitrary units) is plotted as a function of time for t ∈ [0, 1], at a sampling frequency of F_S = 256 Hz. The signal consists of 4 components, where the alpha and theta components are grouped together because their frequency ranges show strong overlap. The beta, gamma and alpha-theta components are shown in (b). Every column is colour coded to represent a different frequency component: beta (blue), gamma (red) and alpha-theta (green). The SSD-recovered components are shown for classical SSD (c), gate-based QSSD (d) and pulse-based QSSD (e). The field strength units are arbitrary in (b-e), and time runs from t = 0 to t = 1 s, omitted for clarity.