We present a data-driven and interpretable approach for reducing the dimensionality of chaotic systems using spectral submanifolds (SSMs). Emanating from fixed points or periodic orbits, these SSMs are low-dimensional inertial manifolds containing the chaotic attractor of the underlying high-dimensional system. The reduced dynamics on the SSMs turn out to predict chaotic dynamics accurately over a few Lyapunov times and also reproduce long-term statistical features, such as the largest Lyapunov exponents and probability distributions, of the chaotic attractor. We illustrate this methodology on numerical data sets including delay-embedded Lorenz and Rössler attractors, a nine-dimensional Lorenz model, a periodically forced Duffing oscillator chain, and the Kuramoto–Sivashinsky equation. We also demonstrate the predictive power of our approach by constructing an SSM-reduced model from unforced trajectories of a buckling beam and then predicting its periodically forced chaotic response without using data from the forced beam.

In high-dimensional systems, inertial manifolds containing the low-dimensional chaotic attractor play a significant role in understanding and predicting chaotic behavior. Such low-dimensional, attracting inertial manifolds contain the essential dynamics of the system and, therefore, offer a tool for model reduction. We construct such inertial manifolds as spectral submanifolds (SSMs) emanating from an unstable steady state and develop SSM-reduced models on them. SSM-reduced models of chaotic systems give accurate short-term predictions and simultaneously reproduce long-term statistical properties. We further demonstrate the predictive power of such SSM-reduced models on several numerical data sets, including the delay-embedded Lorenz and Rössler attractors, a forced Duffing oscillator chain, the Kuramoto–Sivashinsky equation, and a periodically forced buckling beam.

## I. INTRODUCTION

Chaos is ubiquitous in systems ranging from the double pendulum to turbulent flows.^{1–3} Most of these chaotic systems are high-dimensional, but their chaotic dynamics take place on low-dimensional attractors. The dynamics within these strange attractors provide an ideal reduced model for the full system, as long as appropriate localized coordinates can be constructed along the attractor. This typically requires the construction of a smooth, low-dimensional manifold into which the attractor is properly embedded.

Based on Whitney’s^{4} and Takens’s^{5} embedding theorems, several algorithms have been developed for attractor reconstruction and for learning appropriate coordinates near those attractors from delay-embedded data.^{6–8} In general, a higher embedding dimension preserves more information at the cost of increasing system dimensionality. To infer the optimal embedding dimension and delay embedding time lag, using average mutual information (AMI)^{9} and the false nearest neighbor (FNN) method^{10} has become a standard practice. For high-dimensional time series, the dimensionality of the measurements is typically higher than that of the intrinsic system dynamics, requiring dimensional reduction for system identification and modeling.
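As a minimal sketch of this standard construction (the helper `delay_embed` and its arguments are ours, not taken from any of the cited packages), a delay-embedding matrix can be assembled from a scalar time series as follows:

```python
import numpy as np

def delay_embed(signal, dim, lag):
    """Stack `dim` time-shifted copies of a scalar signal (lag in samples)
    into rows of points in the delay-embedded space R^dim."""
    n = len(signal) - (dim - 1) * lag
    return np.column_stack([signal[i * lag : i * lag + n] for i in range(dim)])

# Example: embed a sine wave in 3D; a pure harmonic fills out a
# two-dimensional ellipse inside the three-dimensional embedding space.
t = np.linspace(0, 20 * np.pi, 2000)
Y = delay_embed(np.sin(t), dim=3, lag=25)
print(Y.shape)  # (1950, 3)
```

The choice of `lag` here is arbitrary; in practice it would be selected by the AMI criterion mentioned above.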

For this purpose, it is customary to first perform model reduction and then fit a low-dimensional chaotic model in the reduced space. Frequently used linear model reduction techniques, such as principal component analysis (PCA)^{11} or dynamic mode decomposition (DMD) and its variants,^{12} cannot capture intrinsically non-linearizable phenomena such as chaos. To counter this limitation, several nonlinear model reduction techniques have been developed, including kernel PCA and local linear embedding.^{13,14} Other popular methods for data-driven model reduction include autoencoders, deep learning, and neural networks, which aim to extract hidden information with multiple levels of representation from nonlinear and complex dynamical systems. Such black-box models have been developed, compared, and applied in various fields, including fluid dynamics and neuroscience.^{15,16}

To improve the physical interpretability of models produced by these approaches, previous studies have utilized manifold learning^{17} or the concept of inertial manifolds.^{18} Inertial manifolds are smooth, finite-dimensional invariant manifolds that contain the global attractor and attract all solutions exponentially. In principle, therefore, inertial manifolds are the perfect tool for reducing complex systems. To utilize this tool, a recent method has been developed involving training neural networks to learn an atlas of low-dimensional inertial manifolds in high-dimensional spatiotemporal chaotic systems,^{19,20} which show significant promise.

Subsequent to model reduction, modeling chaos in the reduced coordinates is a further challenge. Such reduced models can be classified into local models and global models.^{21,22} Local models focus on capturing the dynamics within a small region of the state space for short-term prediction. An example of local chaos modeling is the $k$-th nearest neighbor (kNN) method,^{23} which only relies on neighboring trajectory information to make forecasts. Although this method is inherently discrete and contains a large number of parameters, it has been proven to be practically useful when a significant amount of data is available.^{21} In contrast, global models aim to capture the overall behavior across the whole phase space. Popular global modeling techniques include neural ODEs^{24} and recurrent neural networks, such as long short-term memory networks (LSTM).^{25,26} While these neural network models can make forecasts over short Lyapunov times and reproduce statistical properties of the system, they often require a large time domain for training data, extensive parameter tuning, and trial-and-error model selection.

A data-driven modeling alternative that offers simplicity and interpretability is the Sparse Identification of Nonlinear Dynamical Systems (SINDy) approach, which applies sparse regression to fit the right-hand side of a system of ODEs to the time-history of select observables that are assumed to form a closed, reduced dynamical system.^{27} While this methodology has been successful in several settings,^{28,29} its results are dependent on the choice of the selected function library, require an *a priori* knowledge of a reduced observable set, and suffer from sensitivity to noise. For increased interpretability, recent developments combine DMD or SINDy with physical principles.^{30,31}

In this work, we introduce a data-driven model reduction method for chaotic dynamics using spectral submanifolds. A spectral submanifold (SSM) is the smoothest nonlinear continuation of a non-resonant spectral subspace of the linear part of a system at a stationary state.^{32} This stationary state can be a fixed point, periodic orbit, or quasiperiodic orbit. SSM-based model reduction has so far been used in the data-driven modeling of non-linearizable and deterministic dynamics in mechanical systems.^{33} SSM-reduction has also been successfully combined with model-predictive control of soft robots^{34,35} and augmented with a multivariate delay-embedding technique.^{36} In recent work, the mathematical theory behind SSM-reduction has been extended to fractional and mixed-mode SSMs,^{37} with the latter type of SSM also containing unstable modes in addition to the slowest stable modes.

This paper aims to further extend data-driven SSM-reduction to chaotic systems. Given an observable space whose dimension is higher than the Lyapunov dimension of the strange attractor, we seek to reconstruct a mixed-mode SSM that contains the chaotic attractor. Such an SSM then constitutes an explicitly constructed, very low-dimensional inertial manifold that is anchored to a stationary state. We show through examples that mixed-mode, SSM-reduced models of chaotic systems give accurate short-term predictions while simultaneously reproducing long-term statistical properties, such as the Lyapunov exponent and the probability density distribution.

## II. CHAOTIC ATTRACTORS ON SPECTRAL SUBMANIFOLDS

### A. Spectral submanifold theory

SSMs were originally constructed for non-resonant spectral subspaces of stable fixed points.^{32,38} Recent work by Haller *et al.*^{37} extended this result to general hyperbolic fixed points with general (like-mode or mixed-mode) spectral subspaces. (Like-mode spectral subspaces contain only eigenspaces of the same stability type, in contrast to their mixed-mode counterparts.) In this paper, we focus only on the unique, $C^\infty$-smooth primary SSMs, $W(E)$, that are tangent to mixed-mode spectral subspaces $E$ of hyperbolic fixed points.

By the results of Ref. 37, for any positive integer $K \geq 2$, there exists a unique set of coefficients stored in a matrix $G$ such that, in the $(u,z,v,w)$ coordinates introduced above, the SSM, $W(E)$, tangent to a spectral subspace $E$ can locally be written near $x=0$ as a $C^\infty$ graph over the $(u,z)$ variables

The open source Matlab software SSMTool^{39} can compute a Taylor series approximation of the SSM, $W(E)$, up to any required order when system (1) is explicitly known. When system (1) is only known from data, the open source SSMLearn^{38} algorithm can approximate $W(E)$ purely from observed trajectories.

### B. SSM reconstruction via delay embedding

In practice, physical systems in which all state variables can be simultaneously monitored are rare. This necessitates the use of invariant manifold reconstruction methods relying only on a limited number of observables. Specifically, using the Takens delay-embedding theorem,^{5,40} it is possible to embed an invariant manifold of dimension $d$ in the space of $2d+1$ (or higher) time-shifted samples of a single, generic scalar observable. Recently, Ref. 36 discussed the use of delay embedding to recover specifically SSMs and their tangent spaces in such observable spaces.

The map $\Psi$ is thus a mapping from the phase space of system (1) to the delay-embedded space $\mathbb{R}^m$. Let $F_\Psi : \mathbb{R}^m \mapsto \mathbb{R}^m$ be the flow map induced by the flow of system (1) on this delay-embedded space. By construction, if the observation map $\zeta$ maps $0$ to $0$, the $x=0$ equilibrium point of system (1) is mapped into a fixed point $y=0$, i.e., $F_\Psi(0)=0$. According to the Takens embedding theorem,^{5,40} for a generic observable function $\zeta$ and under certain nondegeneracy conditions on the sampling time $T_s$, if $m > 2d$, then the delay coordinate map $\Psi$ restricted to $W$ provides an embedding of $W$ into $\mathbb{R}^m$. As a consequence, $\tilde{W} = \Psi(W)$ is diffeomorphic to the manifold $W$.

For dynamical systems with (generally non-smooth) global attractors, inertial manifolds have been defined as finite-dimensional, forward-invariant manifolds that are at least Lipschitz and contain the global attractor.^{18,41} Therefore, attracting SSMs with an attractor in their reduced dynamics function as inertial manifolds, at least locally near a stationary state. If the attractor within the SSM is globally unique, then an SSM is, in fact, an inertial manifold. Unlike general inertial manifolds, however, SSMs can be precisely located and computed up to arbitrary order of accuracy, which makes them the perfect tools for a mathematically precise reduction of high-dimensional systems to very low-dimensional models.

In order to contain the attractor, the dimension of the SSM must be larger than the Hausdorff dimension of the attractor.^{42} In a data-driven setting with no prior information on the attractor dimension, the necessary embedding dimension is often estimated via the false nearest neighbors (FNN) method.^{1,43} This method is based on the idea that if the embedding dimension is too low, points in the embedding space may appear closer than they actually are, leading to false neighbors. Specifically, the FNN algorithm examines data points in the delay-embedding space and checks whether they remain close to each other when a further embedding coordinate is added. The percentage of false nearest neighbors decreases as the embedding dimension is increased. When the percentage drops below a predefined value, the data disentangle in the embedding space, thus giving a correct approximation for the attractor dimension. The SSMs we construct will have a dimension $d$ that is at least as large as the estimate obtained from the FNN algorithm.
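The FNN test described above can be sketched as follows; this is a simplified, Kennel-style variant (the threshold `rtol` and the exact falseness criterion vary across implementations, so treat it as illustrative rather than the paper's exact algorithm):

```python
import numpy as np
from scipy.spatial import cKDTree

def fnn_percentage(signal, dim, lag, rtol=10.0):
    """False-nearest-neighbor test: a neighbor pair in the d-dimensional
    embedding is 'false' if adding the (d+1)-th delay coordinate stretches
    the pair by more than `rtol` times its d-dimensional distance."""
    n = len(signal) - dim * lag  # points that also fit the (d+1)-dim embedding
    Yd = np.column_stack([signal[i * lag : i * lag + n] for i in range(dim)])
    Yd1 = np.column_stack([signal[i * lag : i * lag + n] for i in range(dim + 1)])
    dist, idx = cKDTree(Yd).query(Yd, k=2)        # nearest neighbor, excluding self
    d_d = dist[:, 1]
    d_d1 = np.linalg.norm(Yd1 - Yd1[idx[:, 1]], axis=1)
    false = np.sqrt(np.maximum(d_d1**2 - d_d**2, 0.0)) > rtol * d_d
    return 100.0 * false.mean()
```

For a clean sine wave, for instance, the percentage is high for a one-dimensional embedding (opposite branches of the wave overlap) and drops to essentially zero in two dimensions.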

## III. LEARNING SPECTRAL SUBMANIFOLDS FROM DATA

According to the SSM theory discussed in Sec. II A, the primary SSM, $W(E)$, tangent to the spectral subspace $E$ at the origin, can locally be written as a polynomial expansion over $E$. The open source Matlab and Python implementations of SSMLearn^{44} compute the SSM parametrization and its reduced dynamics purely from observed data.^{33,38} In this section, we discuss the computation of the SSM-reduced dynamics and the prediction of the full system dynamics from such reduced models.

### A. Data-driven SSM-based dimensionality reduction

For clarity, in this section we only consider $q=0$, i.e., the chosen spectral subspace $E$ contains only $p$ real eigenvalues. Our procedure, however, extends directly to spectral subspaces with complex eigenvalues, such as those arising in Ref. 37. The applications to numerical data sets in Sec. IV include examples with general choices of spectral subspaces, i.e., with both $q=0$ and $q \neq 0$.

We quantify the manifold fitting accuracy via the *invariance error*, defined as follows. From a specific observation $y_i \neq 0$, we compute the corresponding reduced coordinate $\eta_i = V_1^T y_i$ as the projection of $y_i$ onto the tangent space $V_1$. By lifting $\eta_i$ to the full embedding space via the manifold parametrization (14) and comparing it with the original data point $y_i$, we obtain a measure of the manifold fitting accuracy for a given choice of the approximation order $K$. For a total of $P$ observations, we denote the lifted data points by $M(\eta_i) = M(V_1^T y_i)$ and define the normalized invariance error as
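This computation can be sketched in code as follows; the helper names are ours, and the RMS-type normalization below is our assumption standing in for the exact form of Eq. (16):

```python
import numpy as np

def invariance_error(Y, V1, lift):
    """Project each observation (row of Y, shape (P, m)) onto the tangent
    space spanned by the columns of V1 (shape (m, d)), lift it back through
    the manifold parametrization `lift`, and measure the normalized
    reconstruction mismatch (RMS mismatch over RMS data norm; the exact
    normalization in Eq. (16) may differ)."""
    Eta = Y @ V1                                 # reduced coordinates, shape (P, d)
    Y_lift = np.array([lift(eta) for eta in Eta])
    return np.linalg.norm(Y - Y_lift) / np.linalg.norm(Y)
```

For data lying exactly on a flat manifold spanned by `V1`, with the linear lift `eta -> V1 @ eta`, the error is zero by construction.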

### B. Prediction from SSM-reduced order models

We obtain a model from the reduced dynamics on the SSM by fitting the right-hand side of an ODE to the numerically determined, SSM-reduced vector field in the reduced coordinates. In this paper, we employ two alternative approximations for the reduced dynamics models: a polynomial fit and a nearest neighbor interpolation.

A global polynomial representation of the reduced dynamics may require very high-order polynomials, and yet fail to reproduce the chaotic dynamics.^{21} Also, the lack of training data outside of the attractor may lead to limited data structures in monomial spaces, which makes it difficult to fit a globally stable polynomial model to vector fields with chaotic trajectories. As an alternative, for complicated strange attractors, we turn to the simplest and earliest method of local forecasting:^{45,46} we fit maps locally to data points, and then use the resulting collection of local polynomials to form a global model.

The $k$-nearest neighbor (kNN) method^{23,47} identifies the $k$ nearest neighbors in the training data set and then, by using various interpolation or regression methods, makes local predictions based on those neighbors. This method is suitable for predicting highly nonlinear or even chaotic behavior.^{48,49} In this paper, for a prediction based at a point $\eta$, we first search for its $k$ nearest neighbors $\eta_j$, $j=1,\dots,k$, in the training data set, which minimize the Euclidean norm $\Vert \eta - \eta_j \Vert_2$. We then use a zeroth-order fit to give the prediction $\eta^*$ at the next time step, i.e., we define the weight $w_j$ as

Although the kNN method, like other local chaos prediction methods, suffers from discontinuities, a large number of adjustable parameters, and the inability to predict outside the range of the training data, it still has good predictive power as long as the training data cover the attractor sufficiently densely.
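A minimal sketch of such a zeroth-order kNN predictor follows; we use uniform weights over the $k$ successors, whereas the specific weights $w_j$ defined in the text may be distance-based:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_forecast(train, eta0, k=4, steps=1000):
    """Iterate a zeroth-order kNN map: the next state is the (uniformly
    weighted) average of the successors of the k nearest training points."""
    tree = cKDTree(train[:-1])   # successors exist for all but the last point
    eta = np.array(eta0, float)
    path = []
    for _ in range(steps):
        _, idx = tree.query(eta, k=k)
        eta = train[np.atleast_1d(idx) + 1].mean(axis=0)  # average of successors
        path.append(eta)
    return np.array(path)
```

As a sanity check, training on a densely sampled rotation of the unit circle keeps the forecast on the circle, since each prediction averages successors of nearby circle points.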

In this paper, we only use polynomial regression and the kNN method to fit the reduced dynamics. After obtaining the data sets in reduced coordinates, however, it is also possible to combine SSM-based model reduction with other forecasting techniques. We discuss this further in the Conclusions.

### C. Reconstruction criteria

For non-chaotic dynamics, the prediction errors from SSM-reduced models are quantified by the normalized mean-trajectory-error (NMTE),^{38} which is the normalized difference between the test trajectory and the predicted trajectory. Chaotic dynamics, however, have sensitive dependence on initial conditions, which implies that the predictions for individual trajectories will invariably fail for longer times. Indeed, even a well-constructed reduced model can only be expected to capture statistical measures of the chaotic dynamics in long-term predictions. Accordingly, in this paper, we will evaluate the ability of SSM-reduced models to reproduce both the Lyapunov exponent and the probability density distribution of the attractors.

Lyapunov exponents are the most often used indicators of chaotic flows.^{50,51} Let $x(t; x_0)$ denote a trajectory of system (1) starting from the initial condition $x_0$ at $t=0$. An initial perturbation $\epsilon(0) = \epsilon_0$ to $x_0$ will evolve along $x(t; x_0)$ as $\epsilon_t = \Phi(t; x_0)\,\epsilon_0$, where $\Phi(t; x_0)$ is the fundamental matrix solution of system (1) linearized along the trajectory $x(t; x_0)$. The maximal exponential rate of expansion along $x(t; x_0)$ is then the maximal Lyapunov exponent (MLE)

As the maximum rate of separation of nearby trajectories, $\lambda_{\max}(x_0)$ defines a local measure of predictability for a dynamical system. The Lyapunov exponents are independent of the initial condition $x_0$ and depend only on the trajectory $x(t; x_0)$. If $x(t; x_0)$ is contained in a chaotic (or strange) attractor, then $\lambda_{\max}(x_0)$ is constant for almost all trajectories on the attractor.^{52}

When a good linear fit to $\lambda(t; x_0, \epsilon_0)$ is not possible, the system is generally not chaotic.^{53}

While $\lambda(t; x_0, \epsilon_0)/t$ depends on the initial condition $x_0$ and the perturbation $\epsilon_0$, this ratio converges to $\lambda_{\max}(x_0)$ for generic choices of $x_0$ and $\epsilon_0$.^{54}

Based on this observation,^{52} we randomly pick two nearby initial conditions on the chaotic attractor and use a linear fit to their temporal separation rate to estimate the Lyapunov exponent $\lambda_{\max}$ of the attractor.
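This separation-rate fit can be sketched as follows (hypothetical helper; the fit window must lie within the initial exponential-growth regime, before the separation saturates at the attractor diameter):

```python
import numpy as np

def mle_from_separation(traj_a, traj_b, dt, fit_window):
    """Estimate the maximal Lyapunov exponent as the slope of a linear fit
    to log ||x_a(t) - x_b(t)|| over the first `fit_window` samples."""
    sep = np.linalg.norm(traj_a - traj_b, axis=1)
    t = dt * np.arange(fit_window)
    slope, _ = np.polyfit(t, np.log(sep[:fit_window]), 1)
    return slope
```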


The probability density distribution^{56,57} is also an invariant property of attractors and hence is often used for comparisons between the original and the modeled chaotic attractors.

If a chaotic attractor contains the orbit $x(t)$, its probability measure $\sigma$ is defined as the asymptotic average of Dirac deltas along $x(t)$,^{58,59}


On a chaotic attractor, the invariant measure $\sigma$ is independent of the initial conditions. To approximate the probability density of the chaotic attractors, we take a sampling time that is long enough for $\sigma$ to converge. Along each axis of the system, we can either count the number of data points falling into bins and plot the histogram or estimate the probability distribution $\sigma$ with a normal kernel function.^{22} In our examples, we will compare the probability density of the original data points with that of the predictions made by the SSM-reduced model using the same kernel function. The closeness of the two plots indicates how statistically accurate our long-term SSM-based predictions are.
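The kernel density comparison can be sketched with SciPy's Gaussian kernel estimator standing in for the normal kernel function of the text (the L1 mismatch used below is our choice of summary statistic):

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_mismatch(x_true, x_model, grid):
    """Kernel density estimates of a scalar observable for the original and
    the model-predicted data, compared via their L1 distance on `grid`."""
    p_true = gaussian_kde(x_true)(grid)
    p_model = gaussian_kde(x_model)(grid)
    return np.sum(np.abs(p_true - p_model)) * (grid[1] - grid[0])
```

Two long samples drawn from the same invariant measure yield a small mismatch, while a model that fails to reproduce the attractor statistics yields a mismatch of order one.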

In summary, we will test SSM-based prediction quality for short times via trajectory error calculations and for long times via Lyapunov exponent and probability density distribution calculations.

## IV. APPLICATIONS TO CHAOTIC SYSTEMS

In this section, we perform SSM-based model reduction on several numerical data sets containing chaotic attractors.

In our first two examples, we observe only a scalar time series and delay-embed an SSM containing the attractor. In our subsequent examples, we assume that all phase space variables can be fully observed. For the first delay-embedded Lorenz attractor, we use a global polynomial representation for the reduced dynamics. For the other examples, we adopt the kNN approach.

For autonomous ODE examples, the SSM-reduction outlined in Sec. II A will be directly applicable. We also explore one non-autonomous (time-periodic) ODE example in Sec. IV C, where we will carry out the discrete version of the same procedure for the Poincaré map of the system. Specifically, we perform model reduction on the Poincaré map, identifying an SSM emanating from its unstable fixed point.

We further include the example of the delay-embedded Rössler attractor in Sec. IV D, in which the SSM is attached to the unstable subspace corresponding to a complex conjugate pair of eigenvalues. The applications of SSM-based model reduction range from classic chaotic attractors to low-dimensional attractors in high-dimensional systems, such as the Kuramoto–Sivashinsky equation in Sec. IV E and the finite element beam model in Sec. IV F.

### A. Delay-embedded Lorenz attractor

Our first example is the Lorenz system^{61}

with the classic parameter values $\sigma =10$, $\rho =28$, and $\beta =8/3$. This system has an unstable fixed point at $(x,y,z)=0$ and hence falls in the setting of this paper. We will only use a single scalar signal in our analysis and hence our methodology is fully unaware of the dimension and form of the ODE (26). As an SSM containing a chaotic attractor must be at least three-dimensional, Takens’s theorem guarantees success (with probability one) in embedding the SSM in an at least seven-dimensional space. We then seek to reconstruct the Lorenz attractor within the 3D SSM.

To compare the SSM-reduced model results with the broadly used SINDy algorithm mentioned in the Introduction, we will adopt the same setting as in the Matlab code given by Brunton *et al.*^{27} Accordingly, we will observe only the $x$ coordinate of system (26) to generate a scalar observable time series. We generate two trajectories (one for training and the other for testing) over the interval $(0,100)$ with a time step of $0.001$. The test data start with the classic initial condition $(-8, 8, 27)^T$, while the initial condition of the training data is slightly perturbed to $(-8, 8, 27+\epsilon)^T$, where $\epsilon = 10^{-9}$. After truncating the data over the initial time interval $(0,1)$, each of these two trajectories lies on the low-dimensional attractor of system (26).
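This data-generation step can be reproduced as follows (solver tolerances are our choice; due to chaotic sensitivity, individual trajectories decorrelate from any reference solution regardless of solver settings, which does not affect the statistical analysis):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz system with the standard parameter values."""
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.arange(0.0, 100.0, 0.001)
# Test trajectory from the classic initial condition; training trajectory
# from a 1e-9 perturbation of it.  Only x is kept as the scalar observable.
test = solve_ivp(lorenz, (0.0, 100.0), [-8.0, 8.0, 27.0],
                 t_eval=t_eval, rtol=1e-6, atol=1e-9)
train = solve_ivp(lorenz, (0.0, 100.0), [-8.0, 8.0, 27.0 + 1e-9],
                  t_eval=t_eval, rtol=1e-6, atol=1e-9)
x_train = train.y[0, t_eval > 1.0]  # truncate the transient over (0, 1)
```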

We use FNN to estimate the dimension $d$ of the SSM that prevails in the data. The different choices of model reduction parameters and the corresponding manifold fitting results are shown in Table I. The high FNN percentage, $31.4\%$, in the first row of Table I indicates that the SSM containing the strange attractor has a dimension higher than two. This prompts us to select the SSM dimension $d=3$. Based on the Takens embedding theorem, a 3D SSM can generically be delay-embedded in a $\rho = 2d+1 = 7$-dimensional space. Table I shows the accuracy of the SSM reconstruction in this setting under different polynomial approximation orders for the 3D SSM.

| SSM dimension ($d$) | 2 | 3 | 3 | 3 |
|---|---|---|---|---|
| FNN percentage | 31.4% | 0.0% | 0.0% | 0.0% |
| Delay embedding dimension ($\rho$) | 5 | 7 | 7 | 7 |
| Manifold expansion order ($K$) | … | 2 | 3 | 4 |
| Invariance error | … | 3.85% | 1.25% | 1.45% |


In delayed coordinates, the origin is a fixed point corresponding to the origin of system (26). We therefore seek a 3D SSM attached to the origin in the seven-dimensional embedding space and determine the reduced dynamics on that SSM. By computing the normalized invariance error defined in Eq. (16) and identifying its minimum, we choose the order $K=3$ for the polynomial expansion of the SSM. We find that the 3D reduced dynamics on the SSM does admit a chaotic attractor, as shown in Fig. 1.

One might wonder what we have achieved here, given that system (27) is substantially more complicated than the original system (26) and has the same dimension as that system. One should not forget, however, that the input to our procedure was a single scalar observable and we used no information about the dimension or the form of the Lorenz system (26). The procedure we follow here is the same for any system that is only known from a scalar observable. This procedure will generally yield a smooth manifold, the SSM, that contains the attractor of the system.

With the SSM-reduced model, we can now make predictions for test trajectories not used in constructing this model, and compare them with predictions obtained from the SINDy algorithm. From the same training data, SINDy^{27} constructs a Hankel matrix by stacking 10 delayed time series as rows, and uses singular value decomposition (SVD) to select the first three dominant modes in these time-delay coordinates. Each column of the data is then trained with a different sparsification parameter ($\lambda_{\mathrm{SINDy}} = 0.01, 0.2, 2$) to fit a third-order polynomial right-hand side of an ODE to the data.
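The Hankel/SVD step of this SINDy comparison can be sketched as below, mirroring the setting above (10 delayed rows, first three SVD modes); the helper name is ours:

```python
import numpy as np

def hankel_modes(x, rows=10, r=3):
    """Stack `rows` delayed copies of a scalar signal into a Hankel matrix
    and keep the leading r SVD modes as (eigen-)time-delay coordinates."""
    n = len(x) - rows + 1
    H = np.column_stack([x[i : i + n] for i in range(rows)]).T  # rows x n
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[:r].T * S[:r]  # reduced time-delay coordinates, shape (n, r)
```

For a single harmonic signal, the Hankel matrix has numerical rank two, so the third mode carries essentially no energy; for chaotic Lorenz data, three modes retain the dominant structure.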

Figure 2 compares short- and long-term predictions from SINDy and from SSM reduction. Both methods are able to forecast for short times, but our approach is more accurate [Figs. 2(a) and 2(b)]. We then perform a Lyapunov analysis by plotting the separation of two trajectories against time on a log scale in Fig. 2(c). The predictions given by SSMLearn admit a clear linear fit, which indicates that our model is chaotic, while the trajectories reconstructed by SINDy are not chaotic. Indeed, as noted in Sec. 4.5 of the Appendix of Ref. 27, although the skeleton of the Lorenz attractor is captured by SINDy, the attractor is reconstructed as a non-chaotic, quasi-periodic orbit. In contrast, the Lyapunov exponent from the SSM-based model (27) is $0.89$, which is very close to the value of $0.90$ computed directly from the Lorenz system (26).

### B. A nine-dimensional Lorenz model

The origin is a fixed point of system (29) with the eigenvalues listed in Table II. As in our previous example, no information about these eigenvalues will be used in our data-driven SSM reduction; we list them only for reference. Note that the 1:1 resonances between $\lambda_2$ and $\lambda_3$ and between $\lambda_8$ and $\lambda_9$ do not violate the non-resonance condition (3). There is also a fifth-order resonance between $\lambda_4$, $\lambda_5$, and $\lambda_6$, which violates Eq. (3). However, as earlier work on SSMs shows,^{32,38} a polynomial expansion for an SSM, $W(E)$, can still be computed as long as the linearized spectrum within $E$ does not contain $\lambda_4$, $\lambda_5$, and $\lambda_6$. We will choose $E$ in this fashion.

| $\lambda_1$ | $\lambda_2$ | $\lambda_3$ | $\lambda_4$ | $\lambda_5$ | $\lambda_6$ | $\lambda_7$ | $\lambda_8$ | $\lambda_9$ |
|---|---|---|---|---|---|---|---|---|
| 1.9263 | −0.2741 | −0.2741 | −0.5000 | −0.6667 | −2.6667 | −3.4263 | −4.7259 | −4.7259 |


As in the 3D Lorenz example, we observe only the $C_1$ coordinate of system (29) as a scalar time series and delay-embed the signal. We generate two random initial conditions (one for training and the other for testing) within the unit sphere around the origin in the 9D phase space and observe the first coordinate $C_1$. We then select and truncate the trajectories so that they lie close to one of the four symmetric strange attractors. Assuming no knowledge of the original system or the dimension of the chaotic attractor, we again use FNN to estimate the dimension $d$ of the SSM and pick $d=3$ according to the false-neighbor percentages in Table III. By the Takens embedding theorem, we delay-embed this 3D SSM into a $\rho = 2d+1 = 7$-dimensional space. The training and testing trajectories in this seven-dimensional delay-embedded space are shown in Fig. 3.

| SSM dimension ($d$) | 2 | 3 | 3 | 3 | 3 | 3 | 3 |
|---|---|---|---|---|---|---|---|
| FNN percentage | 41.6% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| Delay embedding dimension ($\rho$) | 5 | 7 | 7 | 7 | 7 | 7 | 7 |
| Manifold expansion order ($K$) | … | 2 | 3 | 4 | 5 | 6 | 7 |
| Invariance error | … | 2.24% | 0.59% | 0.22% | 0.09% | 0.05% | 0.04% |


Using SSMLearn, we approximate a 3D SSM attached to the origin of the 7D delay-embedded space. We minimize the invariance error over different choices of the SSM dimension and polynomial approximation order. As shown in Table III, these considerations lead us to construct a 3D SSM via a sixth-order polynomial expansion. This construction produces an invariance error of order $0.05\%$, low enough to guarantee high accuracy for our SSM-reduced model.

In the reduced coordinates, we find that a polynomial representation of the SSM-reduced dynamics fails to capture the complex dynamics with sufficient accuracy. Instead, we turn to the $k$-th nearest neighbor method discussed in Sec. III B. The number of neighbors in the kNN algorithm is often chosen to be larger than the data dimension.^{23} Here, we select $k=4$ and perform the kNN method on a training set containing $1.5 \times 10^6$ data points. Note that the training data for SSM fitting ($10^5$ points) is generally much smaller than the amount required for fitting the reduced dynamics. This is because the kNN prediction algorithm requires data that fully cover the attractor in its phase space. Figure 4(a) shows the prediction results from this nearest neighbor method. Since chaotic dynamics depend sensitively on initial conditions, the forecast is only accurate for about 5.94 Lyapunov times.

We attempted to apply the SINDy algorithm to the same training data to identify a seven-dimensional SINDy model and compare its predictions to those of the SSM-reduced model. We tested several combinations of model orders (from two to six) and sparsity parameters ($\lambda_{\mathrm{SINDy}} = 0.0001$ to $0.1$). We also followed the SINDy tutorial^{63} to tune the hyperparameter $\lambda_{\mathrm{SINDy}}$ by minimizing the RMSE of predicted trajectories over different choices of $\lambda_{\mathrm{SINDy}}$. However, the resulting SINDy models generally do not contain a global attractor, and their solutions tend to blow up after some integration time. As an example, Fig. 4 shows results from a third-order SINDy model with $\lambda_{\mathrm{SINDy}} = 0.001$. One could also set different sparsity parameters for each coordinate, as for the delay-embedded Lorenz example in Ref. 27, but we did not pursue that exploration here.

To test the statistical properties of the SSM-reduced model, we further compute the NMTE and the Lyapunov exponent for 200 segments of the test trajectory whose initial conditions are not contained in the training data. After adding a small perturbation, we plot the deviation between the original and the predicted model trajectory against time on a log scale. For 200 trajectories of the original system (29), this analysis yields a Lyapunov exponent of 0.032 as the mean of the observed separation exponents. Remarkably, this exponent is closely approximated by the value 0.033 obtained from our SSM-reduced model in Fig. 5(b).

### C. Duffing oscillator chain

We next consider a periodically forced chain of Duffing oscillators,^{64} given by

When written in first-order form, these equations define an eight-dimensional, non-autonomous dynamical system. For the autonomous part of the dynamics, the origin is an unstable fixed point. This unstable equilibrium has eight eigenvalues, as listed in Table IV. Again, these are listed only for reference and will not be used in our SSM construction.

| λ_{1} | λ_{2} | λ_{3,4} | λ_{5,6} | λ_{7,8} |
|---|---|---|---|---|
| 1.2204 | −5.8047 | −1.5713 ± 0.6269i | −3.6865 ± 4.0004i | −3.7500 ± 3.9922i |


When we add the periodic forcing term to the autonomous part of the equation, the unstable fixed point at the origin bifurcates into an unstable periodic orbit. As long as the amplitude of the time-dependent term in system (32) is moderate, the corresponding $T=\pi $-periodic Poincaré map will also have an unstable fixed point. This hyperbolic fixed point is the anchor point for our SSM computation. Our strategy will be to locate an SSM for this map that contains an observed chaotic attractor of the system. As in our previous examples, this SSM will act as an inertial manifold whose restricted dynamics provide a reduced model for the dynamics on and near the attractor. The corresponding geometry is illustrated in Fig. 7.

For the eight-dimensional chaotic system (32), we consider all eight degrees of freedom as observables. We select seven initial conditions (six for training and one for testing) randomly within the unit sphere around the origin and generate the corresponding trajectories over $10^5$ forcing periods. We track the evolution of the origin under increasing forcing amplitude into a hyperbolic periodic orbit using the numerical continuation package COCO.^{65} In Fig. 8(a), we plot the coordinates $x_3$ and $\dot{x}_3$ of the original continuous dynamics in blue, and the unstable periodic orbit computed by COCO in red. When we pass to the Poincaré map, the continuous orbits become discrete dots in the Poincaré section, as in Fig. 8(b). Based on the theory of mixed-mode SSMs,^{37} SSMLearn computes a polynomial expansion for an SSM attached to the hyperbolic fixed point [red dot in Fig. 8(b)]. Note that our construction relies on the existence of the hyperbolic fixed point, which we compute from system (32) using COCO. However, even when the exact location of this anchor point is unknown, SSMLearn can still approximate the SSM automatically by regressing a manifold (with constant terms added to its parameterization) to the available data. The location of the fixed point then follows as a prediction from the SSM-reduced dynamics.
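The stroboscopic sampling that turns the continuous forced system into a Poincaré map can be sketched as below: the flow is sampled once per forcing period $T$. The single forced Duffing oscillator and its parameters here are illustrative placeholders, not system (32).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative single forced Duffing oscillator; NOT system (32) of the text.
def forced_duffing(t, z, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.4, omega=2.0):
    x, v = z
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

def poincare_map(z0, n_periods, omega=2.0):
    """Sample the flow once per forcing period T = 2*pi/omega, i.e.,
    iterate the stroboscopic Poincare map of the non-autonomous system."""
    T = 2.0 * np.pi / omega
    t_eval = np.arange(n_periods + 1) * T
    sol = solve_ivp(forced_duffing, (0.0, n_periods * T), z0,
                    t_eval=t_eval, rtol=1e-9, atol=1e-12)
    return sol.y.T          # rows: iterates of the Poincare map
```

Each row of the returned array is one dot in a Poincaré section like Fig. 8(b).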

As in the previous examples, we use the invariance error as a criterion for determining the correct SSM dimension. Table V shows different invariance errors obtained for different choices of SSM dimension and expansion order. Based on these results, we choose a 3D SSM construction via a seventh-order polynomial expansion.

| SSM dimension ($d$) | 2D SSM | | | 3D SSM | | |
|---|---|---|---|---|---|---|
| SSM expansion order ($K$) | 3 | 5 | 7 | 3 | 5 | 7 |
| Invariance error | 8.26% | 4.24% | 1.89% | 0.061% | 0.005% | 0.0001% |


After obtaining reduced coordinates on the 3D SSM, we again use the kNN algorithm discussed in Sec. III B to reconstruct the chaotic attractor of the system on this SSM. We set $k=6$ and use a training set with a total of $6\times 10^5$ points. The prediction is generated by the 3D reduced model, then lifted to the eight-dimensional state space via the SSM parametrization. In Fig. 9, we visualize the test data (red) and its prediction (blue) in various combinations of three of the eight coordinates. From Fig. 9, we see that the red and blue dots coincide in the 8D state space, and the shape of the chaotic attractor is accurately reconstructed. To verify that the SSM-reduced model reproduces long-term statistical features, we further plot the probability distribution of the original and the reconstructed dynamics in each coordinate separately. Figure 10 shows that the statistical properties of the reduced-order model are very close to those of the original attractor.

We also attempted to apply the map version of SINDy^{66} to the same eight-dimensional training data (six trajectories). However, after testing combinations of various polynomial orders (from three to six) and sparsity parameters ($\lambda_{\mathrm{SINDy}}=0.001$ to $0.05$), we failed to identify a feasible model: the predictions invariably blew up after some time. This is likely due to the high complexity of the Poincaré map dynamics.

### D. Delay-embedded Rössler attractor

The Rössler system^{67} is given by

As in the delay-embedded Lorenz attractor example in Sec. IV A, we observe only the $x$ coordinate of system (33) to generate a scalar observable time series. Then, using the known location of the unstable fixed point $x_0$, we shift the origin of the system to this point. The training and test data are generated from the initial conditions $(1,1,1)$ and $(1,1,2)$, respectively, over the time interval $(0,7000)$ with a time step of $0.01$. After the initial $(0,20)$ time interval, both the training and test trajectories lie very close to the Rössler attractor. Following the same procedure as in Sec. IV A, we use FNN to estimate the SSM dimension $d$ and pick the manifold expansion order according to the invariance error. We select the SSM attached to the unstable fixed point to be $d=3$ dimensional, the delay-embedding dimension to be $\rho=7$, and the polynomial expansion order to be $K=4$. That is, we delay-embed the scalar observable in 7D and use SSMLearn to perform model reduction onto a 3D reduced space. The shape of the chaotic attractor in three-dimensional reduced coordinates is shown in Fig. 11(a).
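Forming the delay-embedded observable is straightforward: each scalar sample is paired with its time-shifted copies to produce a $\rho$-dimensional point. A minimal sketch with our own naming (the delay `tau` is in sampling steps):

```python
import numpy as np

def delay_embed(x, rho, tau=1):
    """Map a scalar time series x(t_i) to rho-dimensional delay coordinates
    [x(t_i), x(t_i + tau), ..., x(t_i + (rho-1)*tau)], one row per time index."""
    x = np.asarray(x)
    n = len(x) - (rho - 1) * tau          # number of complete delay vectors
    return np.column_stack([x[j * tau : j * tau + n] for j in range(rho)])
```

For the Rössler example above, `delay_embed(x_obs, rho=7)` produces the 7D embedding on which SSMLearn then identifies the 3D SSM.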

In the reduced coordinates, we use the kNN method to make forecasts for the test trajectory. We select $k=4$ and apply the kNN prediction method to a training set containing $7\times 10^5$ data points. From Fig. 11, the prediction stays accurate for around 3.3 Lyapunov times of system (33), and the reconstructed maximum Lyapunov exponent, 0.0946, is close to the actual MLE of the system, 0.0938.

As in the other examples, we also tried to apply SINDy to the same seven-dimensional training data. We tested different combinations of SINDy parameters (polynomial order ranging from three to five, and the sparsity parameter $\lambda_{\mathrm{SINDy}}$ ranging from $0.001$ to $0.1$) but failed to identify a feasible model. The prediction results generated by these SINDy models generally blew up, i.e., the models do not contain a stable attractor. This may be because the dynamics in these delay-embedded coordinates do not have a sparse polynomial form in general.

### E. Kuramoto–Sivashinsky equation

The Kuramoto–Sivashinsky equation^{68,69} is often used as a simplified model to study spatiotemporal chaos and pattern formation in fluid dynamics, particularly in the context of understanding turbulence. Its dynamics in one spatial dimension is given by the PDE

To generate a data set for the discovery of SSMs, we adapt the Matlab code from Ref. 68, which uses a spectral solver for numerical solutions of the Kuramoto–Sivashinsky equation. We consider the system size $L=22$, for which a truncation to $16\leq N\leq 128$ Fourier modes yields accurate solutions;^{69} we therefore pick $N=64$. We discretize $L$ into 1024 segments, generate trajectories with the time step $0.25$, and discard the first 100 time units to obtain trajectories close to the attractor. The training set, generated over a time length of $1.25\times 10^5$, contains approximately $5\times 10^5$ data points. In this example, we only test the short-term predictability of the SSM-reduced model and therefore generate a short test set over a time interval of length $100$.
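For reference, the pseudo-spectral evaluation underlying such solvers can be sketched as follows. On a periodic domain, the equation takes the Fourier-space form $\hat u_t = (k^2 - k^4)\,\hat u - \tfrac{ik}{2}\,\widehat{u^2}$; the sketch below evaluates this right-hand side and is a simplified illustration, not the ETDRK4 code of Ref. 68.

```python
import numpy as np

def ks_rhs(v, L=22.0):
    """Fourier-space right-hand side of the Kuramoto-Sivashinsky equation
    u_t = -u*u_x - u_xx - u_xxxx on a periodic domain of length L.
    v holds the Fourier coefficients of u; the nonlinear term u*u_x is
    evaluated pseudo-spectrally as (1/2) d/dx (u^2)."""
    N = len(v)
    k = 2.0 * np.pi / L * np.fft.fftfreq(N, d=1.0 / N)  # physical wavenumbers
    u = np.real(np.fft.ifft(v))                          # back to physical space
    nonlinear = -0.5j * k * np.fft.fft(u * u)            # -(i k / 2) FFT(u^2)
    linear = (k**2 - k**4) * v                           # -u_xx - u_xxxx
    return linear + nonlinear
```

For $L=22$, only the lowest wavenumbers satisfy $k^2 - k^4 > 0$ and are linearly unstable, which is what sustains the chaotic dynamics; in practice the stiff linear part is treated with an exponential integrator rather than an explicit step.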

For SSMLearn, computing the parametrization of an SSM requires solving the implicit minimization in Eq. (15), which can be computationally demanding for high-dimensional systems. Therefore, we turn to the fastSSM algorithm,^{70} which significantly reduces the computational cost when the observable space dimension is high. The fastSSM algorithm calculates the tangent space of the SSM by singular value decomposition and the manifold parametrization via explicit polynomial regression. As a result, the computational cost scales linearly with the input data dimension. We refer to Ref. 70 for a detailed discussion.
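A schematic version of the two fastSSM steps (tangent space from an SVD, parametrization from explicit polynomial regression) might look as follows. This is our own simplified sketch, assuming the data are already shifted so that the anchor point sits at the origin; it omits the refinements of Ref. 70.

```python
import numpy as np
from itertools import combinations_with_replacement

def _monomials(Y, order):
    """Monomial features of the reduced coordinates, degrees 1..order."""
    return np.column_stack([np.prod(Y[:, m], axis=1)
                            for deg in range(1, order + 1)
                            for m in combinations_with_replacement(
                                range(Y.shape[1]), deg)])

def fit_ssm_svd(X, d, order):
    """Sketch of an SVD-based SSM fit.
    X : (n, N) snapshot matrix, n points in N-dim. observable space.
    Returns (V, C): V spans the d-dim. tangent space; C parametrizes the
    manifold as a polynomial map from reduced to full coordinates."""
    # 1. Tangent space from the d leading right singular vectors.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:d].T                          # (N, d), orthonormal columns
    Y = X @ V                             # reduced coordinates, (n, d)
    # 2. Explicit polynomial regression for the parametrization.
    C = np.linalg.lstsq(_monomials(Y, order), X, rcond=None)[0]
    return V, C

def lift(y, C, order):
    """Map reduced coordinates back to the full observable space."""
    return _monomials(np.atleast_2d(y), order) @ C
```

Because both steps are a single SVD and a single linear regression, the cost scales linearly with the observable dimension, which is the point of fastSSM for data such as the 1024-dimensional KS snapshots.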

To estimate the SSM dimension from data, we plot the manifold fitting error as a function of the SSM dimension and SSM polynomial expansion order, as shown in Fig. 12. When we increase the SSM dimension to eight, there is a significant drop in the manifold fitting error. This indicates that the chaotic attractor of the system can be captured by an inertial manifold of dimension at least eight. Therefore, we fit an eight-dimensional SSM with a third-order polynomial expansion to the 1024-dimensional trajectory data. The invariance error of the manifold fitting is $0.35\%$ for the test data. The estimated dimension $d=8$ agrees with the estimates in Refs. 69 and 71 and with other model reduction results on inertial manifolds.^{24,72}

After obtaining the data in 8D reduced coordinates, we apply the kNN algorithm with $k=9$ to make forecasts for the test trajectory. The forecasting quality and the validity range of each prediction depend on the amount of training data used and on the individual initial condition of the test trajectory. We illustrate one of the SSM prediction results in Fig. 13. In general, the normalized trajectory errors reach 0.5 between 10 and 60 time units. A detailed analysis of Lyapunov exponents and average predictability for this infinite-dimensional example is beyond the scope of the present paper.

### F. Periodically forced finite-element beam

In our final and most complex example, we use data-driven model reduction to a mixed-mode SSM to identify a low-order model for the chaotic attractor of a periodically forced buckling beam. The nonlinear finite-element model we use for a von Kármán beam is a second-order approximation of the kinematics of a beam, taking into account the deformation of the cross-section for both bending and stretching. We specifically adopt here the finite-element Matlab code developed in Ref. 73 and the same buckling beam setting as in Ref. 37. The boundary condition of the beam is set as pinned–pinned, as shown in Fig. 14, and an axial compressive force is applied to the right node.

The finite-element model of the von Kármán beam consists of 13 nodes, each with three degrees of freedom (DOFs): axial direction, transverse direction, and rotation. Under the above boundary conditions, the beam model has 12 elements and 36 degrees of freedom. With the displacement and velocity of each DOF counted, the full model is a 72-dimensional dynamical system in its phase space. We adopt the physical parameters listed in Table VI from Ref. 73.

| Symbol | Meaning | Value | Unit |
|---|---|---|---|
| t | Time | … | s |
| L | Length of the beam | 2 | m |
| h | Height of the beam | 1 × 10^{−2} | m |
| b | Width of the beam | 5 × 10^{−2} | m |
| A | Cross-sectional area | 5 × 10^{−4} | m^{2} |
| E | Young’s modulus | 190 × 10^{9} | Pa |
| κ | Viscous damping rate of material | 7 × 10^{6} | Pa⋅s |
| ρ | Density | 7850 | kg/m^{3} |


For the buckling force value $F=1.1\,F_{\mathrm{buckling}}$, the system has three fixed points: one unstable fixed point corresponding to purely axial displacement and two stable fixed points corresponding to the upper and lower buckled states. The first 10 eigenvalues of the unstable fixed point are listed in Table VII. The unstable steady state thus has one pair of real eigenvalues, $11.06$ and $-11.10$; the remaining pairs correspond to stable spirals. The positive real eigenvalue $11.06$ describes the beam leaving the unstable fixed point and approaching the upper or lower buckled state, whereas the negative real eigenvalue $-11.10$ corresponds to a trajectory settling down at the unstable state. The remaining complex eigenvalue pairs represent the dynamics of the higher modes. According to mixed-mode SSM theory,^{37} there exists a two-dimensional SSM tangent to the spectral subspace corresponding to the two real eigenvalues. As in our earlier examples, these eigenvalues will not be used in our SSM construction.

| λ_{1} | λ_{2} | λ_{3,4} | λ_{5,6} | λ_{7,8} | λ_{9,10} |
|---|---|---|---|---|---|
| 11.06 | −11.10 | −0.36 ± 119.36i | −1.83 ± 295.56i | −5.80 ± 541.50i | −14.19 ± 858.19i |


We generate four training trajectories from the autonomous beam model (without the vertical forcing shown in Fig. 14) over the non-dimensionalized time interval $(0,7.89)$, which covers $150$ oscillation periods of the slowest mode. Each trajectory contains $1.5\times 10^4$ points. The initial conditions are generated either by adding a small perturbation to the unstable stationary state, or by applying a large transverse force ($10^4$ N) at the middle node. After this force is released, the beam starts deflected below the lower buckled state, first oscillates around the two stable fixed points with a large amplitude, and then converges to one of them. The higher modes corresponding to the complex eigenvalue pairs die out, and the trajectories converge to a 2D mixed-mode SSM, which we locate from the training trajectories using SSMLearn. Converged portions of two training trajectories and the SSM are shown in Fig. 15. Based on the invariance error and the NMTE for non-chaotic test data (one trajectory) in Table VIII, we construct a seventh-order SSM expansion and a ninth-order polynomial representation of the reduced dynamics in SSMLearn.
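A polynomial representation of the reduced dynamics can be obtained by regressing finite-difference time derivatives on a monomial library of the reduced coordinates. The sketch below illustrates this with a small ridge penalty for numerical stability; it is our own simplified version, not the SSMLearn implementation.

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_poly_dynamics(Y, dt, order, ridge=1e-8):
    """Least-squares fit of a polynomial vector field ydot = phi(y) @ W
    to a reduced-coordinate trajectory Y (n, d) sampled at step dt.
    Time derivatives are approximated by central differences; a small
    ridge penalty stabilizes the regression."""
    dY = (Y[2:] - Y[:-2]) / (2.0 * dt)       # central-difference derivatives
    Ymid = Y[1:-1]
    d = Ymid.shape[1]
    feats = [np.prod(Ymid[:, m], axis=1)
             for deg in range(1, order + 1)
             for m in combinations_with_replacement(range(d), deg)]
    Phi = np.column_stack(feats)             # monomial library, degrees 1..order
    # ridge-regularized normal equations
    W = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]),
                        Phi.T @ dY)
    return W
```

Unlike the kNN models used for the higher-dimensional attractors above, this yields an explicit ODE that can be integrated from arbitrary initial conditions on the SSM.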

| Manifold expansion order ($K$) | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|
| Invariance error | 0.10% | 0.10% | 0.14% | 0.14% | 0.097% |

| Reduced model order (SSM expansion order $K=7$) | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|
| NMTE for non-chaotic test data | 0.24% | 0.24% | 0.12% | 0.12% | 0.16% |


Next, we apply transverse periodic forcing at the middle node of the beam, as shown in Fig. 14, which makes the beam oscillate chaotically. For moderate forcing amplitudes, the autonomous SSM, $W(E)$, is known to persist in the form of a nearby, time-periodic SSM, $W(E,t)$.^{37} The leading-order forced dynamics on $W(E,t)$ can be obtained by simply adding the projection of the time-periodic forcing onto $E$ to the reduced dynamics of $W(E)$.^{37} This fact enables one to make predictions for the forced response based solely on a model trained on unforced data.
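In code, this leading-order correction amounts to adding a calibrated harmonic term to the autonomous reduced vector field. The sketch below uses a hypothetical 2D double-well reduced model as a stand-in for the beam's SSM-reduced dynamics; the direction `e`, amplitude `A`, frequency `Omega`, and phase `phi` are the calibration quantities discussed in the text, with illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def forced_reduced_rhs(t, y, f_auto, e, A, Omega, phi):
    """Leading-order forced reduced dynamics: the autonomous SSM-reduced
    vector field plus the projection of the periodic forcing onto the
    spectral subspace E (direction e, amplitude A, phase phi)."""
    return f_auto(y) + A * np.cos(Omega * t + phi) * e

# Hypothetical 2D autonomous reduced model (illustrative, not the beam's):
def f_auto(y):
    # damped double-well normal form: y1' = y2, y2' = y1 - y1^3 - c*y2
    return np.array([y[1], y[0] - y[0]**3 - 0.1 * y[1]])

e = np.array([0.0, 1.0])     # forcing enters through the velocity coordinate
sol = solve_ivp(forced_reduced_rhs, (0.0, 50.0), [0.1, 0.0],
                args=(f_auto, e, 0.3, 1.2, 0.0), max_step=0.05)
```

For suitable amplitudes, such a forced double-well model oscillates irregularly between the two wells, mimicking the chaotic snap-through response of the buckled beam.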

This procedure is implemented in SSMLearn.^{38,44} The periodic forcing in Fig. 14 has the amplitude $F_0=3.4$ N and the frequency $\Omega=15.4$ rad/s. We generate the chaotic response of the buckling beam to this periodic force and use this single chaotic trajectory as the test data for our reduced-order modeling. Figure 16 shows that this test trajectory lies entirely on the 2D SSM and can be projected onto the two-dimensional reduced coordinates.

^{37}

After calibrating the forcing amplitude $A$ and forcing phase $\varphi$ to a single forced trajectory (see Ref. 38 for details), we find that the SSM-reduced model computed from the autonomous system is indeed able to forecast the chaotic forced response, as shown in Fig. 17. The Lyapunov exponent of the reconstructed system, $1.732$, is very close to its counterpart, $1.742$, calculated for the full, 72-dimensional system. Thus, the statistical properties of the chaotic attractor are successfully predicted by an SSM-reduced model in this example, too.

Even though the short-term prediction shown in Fig. 17(a) quickly loses accuracy, one should not forget that this prediction by the 2D SSM-reduced model is made entirely from unforced, non-chaotic trajectories, and yet it approximates both the geometry and the long-term statistics (as described by the maximal Lyapunov exponent) of the chaotic attractor of the full, 72-dimensional system with notable accuracy. This shows the high physical interpretability of SSM-reduced models, which other data-driven model reduction techniques generally do not provide. For instance, the SINDy algorithm used in some of our previous examples was not realistic to run on this high-dimensional example, and the algorithm would not be equipped to make forced predictions from unforced training data.

## V. CONCLUSIONS

Using the recent theory of mixed-mode spectral submanifolds,^{37} we have developed a methodology to derive SSM-reduced models for dynamical systems with chaotic attractors. Specifically, we combined the SSMLearn algorithm with the method of nearest neighbors to obtain an accurate enough representation of the SSM-reduced dynamics on large enough domains that capture the attractors of the system. In order to contain the chaotic attractor and the dominant behavior of the underlying system, the SSM must have a dimension higher than the attractor dimension. In a strictly data-driven setting, the SSM dimension can be approximated using the false nearest neighbor method, then further determined by computing the invariance error.
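A minimal version of the distance-ratio test at the heart of the FNN dimension estimate reads as follows; it keeps only the Kennel-type ratio criterion with our own parameter names, omitting the refinements of Ref. 74.

```python
import numpy as np

def fnn_fraction(x, d, tau=1, r_tol=10.0):
    """Fraction of false nearest neighbors when going from embedding
    dimension d to d+1 (distance-ratio criterion only)."""
    def embed(m):
        n = len(x) - m * tau            # leave room for the (d+1)-dim. embedding
        return np.column_stack([x[j * tau : j * tau + n] for j in range(m)])
    Yd, Yd1 = embed(d), embed(d + 1)
    n = len(Yd1)
    false = 0
    for i in range(n):
        dist = np.linalg.norm(Yd[:n] - Yd[i], axis=1)
        dist[i] = np.inf                # exclude the point itself
        j = np.argmin(dist)             # nearest neighbor in dimension d
        extra = abs(Yd1[i, -1] - Yd1[j, -1])
        if extra / dist[j] > r_tol:     # separates strongly in dimension d+1
            false += 1
    return false / n
```

The embedding dimension is taken as the smallest $d$ for which this fraction drops to (near) zero, after which the invariance error selects the SSM dimension and expansion order.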

This methodology is generally applicable to systems in which an inertial manifold containing the chaotic attractor is, in fact, an SSM emanating from an unstable steady state. In the examples treated here, that steady state was either a fixed point or a periodic orbit. Building the inertial manifold specifically as a spectral submanifold enables us to construct mathematically justified, low-dimensional models for the dynamics on and near the attractors. The models are either explicit polynomial ODEs or ODEs with right-hand sides interpolated from nearest neighbor points on the SSM. The core of our approach is the open-source SSMLearn algorithm,^{44} a general Matlab (and Python) package for SSM-based model reduction. We additionally deploy the false nearest neighbor (FNN) algorithm^{74} to identify the optimal SSM dimension from data. SSM-based dimension reduction requires no time-derivative data, and the amount of data needed is relatively low.

We have applied two methods to model the reduced dynamics: a global polynomial fit and the kNN method. Both methods have limitations. The global polynomial fit requires computing the time derivatives of the trajectories and can become unstable for complex systems. The kNN method only uses local information to make predictions and therefore requires a large amount of data that densely covers the whole attractor. To counter these challenges, it is possible to combine SSM-model reduction with other prediction methods, provide the weighting functions,^{75,76} use other function bases such as radial basis functions,^{77} or employ machine learning and deep learning methods^{78,79} such as neural ODEs.^{24}

Our SSM-based attractor models have yielded accurate local predictions over a few Lyapunov times and have also closely reproduced the statistical and ergodic properties of chaotic attractors over longer times. Examples supporting these conclusions included the classic Lorenz and Rössler systems, the extended 9D Lorenz model, a forced oscillator chain, the Kuramoto–Sivashinsky equation, and a periodically forced buckling beam.

We have also compared our approach with the data-driven nonlinear modeling method SINDy.^{27} While the advantages of SINDy include simplicity, ease of implementation, and versatility, the method scales unfavorably with the input data dimension and depends on the choice of the function library and the coordinates, as discussed in Sec. IV D. In efforts to address these challenges, SINDy has been combined with various coordinate-identifying methods and model reduction techniques, such as learning appropriate observables in an assumed latent space. For SSM-based model reduction, the existence of such low-dimensional invariant manifolds is theoretically justified in the presence of a hyperbolic steady state. The requirement of such a steady state is a limitation of our method, but in our view, it is necessary for the localization of invariant manifolds from data to be on firm theoretical ground.

Importantly, since SSM-reduced models are fully dynamics-based, they can predict forced chaotic responses based solely on the knowledge of unforced (non-chaotic) trajectory data as demonstrated by our forced buckling beam example. We are unaware of any other data-driven model reduction approach that has this capacity.

Future applications of our methodology will include other PDE models, such as the 2D Kolmogorov flow,^{80,81} which also has finite-dimensional attractors and coexisting unstable steady states. Further extensions of mixed-mode SSM theory to more general anchoring steady states (such as invariant tori and other bounded invariant sets) are needed for the present methodology to become applicable to inertial manifolds that emanate from invariant sets other than fixed points or periodic orbits.

## ACKNOWLEDGMENTS

We acknowledge helpful discussions with Mattia Cenedese, Mingwu Li, and Shobhit Jain. A.L. was supported by the Swiss National Foundation (SNF) Grant No. 200021_214908.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**Aihui Liu:** Formal analysis (lead); Software (lead); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). **Joar Axås:** Conceptualization (equal); Formal analysis (equal); Supervision (equal); Validation (equal); Writing – review & editing (equal). **George Haller:** Conceptualization (lead); Funding acquisition (lead); Methodology (equal); Project administration (lead); Resources (lead); Supervision (lead); Writing – review & editing (lead).

## DATA AVAILABILITY

The code used to generate the numerical results included in this paper is available as part of the open-source Matlab package SSMLearn at https://github.com/haller-group/SSMLearn, Ref. 44.

## REFERENCES

*Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering*

*Chaotic Dynamics: An Introduction*

*An Introduction to Chaotic Dynamical Systems*

*Lecture Notes in Mathematics*(Springer, Berlin, 1981), pp. 366–381.

*Deep Learning in Multi-step Prediction of Chaotic Dynamics*

*Analysis of Observed Chaotic Data*

*2023 IEEE Int. Conf. Robot. Autom.*(IEEE, 2022), pp. 2627–2633.

*2022 IEEE 61st Conference on Decision and Control (CDC)*(IEEE, 2022), pp. 6548–6555.

*Cambridge Tracts in Mathematics*(Cambridge University Press, 2010).

*Nonlinear Time Series Analysis*

*Statistics for the Environment*

*Ensembles of Nearest Neighbor Forecasts*

*Nonlinear Dynamics in Economics: A Theoretical and Statistical Approach to Agricultural Markets*(Springer, Berlin, 1995), pp. 122–147.

*Chaotic Dynamics: An Introduction Based on Classical Mechanics*

*Wiley Encyclopedia of Biomedical Engineering*(John Wiley & Sons, Ltd, 2006).

*Cambridge Nonlinear Science Series*(Cambridge University Press, 1998).

*Lezioni Lincee*(Cambridge University Press, 1989).

*The Duffing Equation*(John Wiley & Sons, Ltd, 2011), Chap. 1, pp. 1–23.

*Chaos: Classical and Quantum*(Niels Bohr Inst., Copenhagen, 2016), Chap. 30.

*Special Topics in Information Technology*, edited by L. Piroddi (Springer International Publishing, Cham, 2022), pp. 3–14.