In the present effort, a data-driven modeling approach is undertaken to forecast aperiodic responses of non-autonomous systems. As a representative non-autonomous system, a harmonically forced Duffing oscillator is considered. Along with it, an experimental prototype of a Duffing oscillator is studied. Data corresponding to chaotic motions are obtained through simulations of forced oscillators with hardening and softening characteristics and experiments with a bistable oscillator. Portions of these datasets are used to train a neural machine and make response predictions and forecasts for motions on the corresponding attractors. The neural machine is constructed by using a deep recurrent neural network architecture. The studies conducted with the different numerical and experimental chaotic time-series data confirm the effectiveness of the constructed neural network for forecasting non-autonomous system responses.

By using a deep recurrent neural network, which consists of Long Short Term Memory (LSTM) cells, a unique inhibitor mechanism, an encoder, a decoder, and hidden layers, a neural machine is constructed to forecast the temporal evolution of a non-autonomous system, here, a harmonically forced Duffing oscillator. Simulation data and experimental data are used for the training of the neural machine, and the trained network is used for response forecasting on chaotic attractors. Given the fundamental nature of this effort, it is believed that this work will serve as an important basis for the forecasting of responses of non-autonomous systems, which are relevant for many industrial applications.

Over the course of the last 50-plus years, chaotic behavior has fascinated and stirred up the genuine interest of researchers across the globe. This aperiodic behavior can be beneficial, as observed in the context of several systems, be it mechanical systems (e.g., thermal pulse combustors1,2) or non-mechanical systems (e.g., living organisms3 and biological disorders4). There continues to be an enormous interest in predicting the behavior of complex, chaotic dynamical systems, wherein an initial misjudgment or error in the state of the system can grow exponentially in time.5–7 Even the smallest disturbance to a chaotic system can initiate a concatenation of events that results in a dramatically divergent future system state. Given that an initial condition may not always be accurately known and system models are not perfect, response forecasting for a chaotic system remains a challenge.

In recent work of the authors’ group, a neural network has been used to describe the evolutions of three different chaotic systems, two of which are governed by ordinary differential equations (ODEs) and the third by a partial differential equation (PDE).8 The group constructed a neural machine by using a deep recurrent neural network, consisting of an encoder, a decoder, hidden layers, and an inhibitor mechanism, the last of which was used to help prolong the forecast horizon. The authors have illustrated the long-term prediction capability of this machine for several autonomous chaotic systems, including the Lorenz’63 system, the Lorenz’96 system,9 and the Kuramoto–Sivashinsky system.10 Similarly, recurrent neural networks have been applied to high-dimensional chaotic systems for forecasting of the corresponding system behaviors.11 These data-driven methods are found to outperform other traditional methods (e.g., Gaussian process based methods and their variants) in terms of accuracy and adaptability. Reservoir computing is another approach that can be used to predict the responses of spatiotemporally chaotic systems purely on the basis of past observations of the system states.12 However, these approaches have not been applied for the prediction and forecasting of non-autonomous system responses. This is explored in the current work.

There are an unfathomable number of systems in nature that depend explicitly on time. Since the discovery of the expanding cosmos13 and the Big Bang,14 it has been known that the universe itself is a time-dependent system. Probably the most notable time-dependent systems are living creatures. The time effects can be observed through, for example, circadian rhythms, where fluctuations of physical processes are synchronized with the diurnal cycle.15 On a larger time scale, every living creature undergoes a time-dependent process called aging. Apart from living systems, the time dependence of dynamics is observed ubiquitously in nature. Highly transient events, such as rogue waves,6 stock market crashes, tornadoes, and so on, often occur and cause significant impacts. In network theory for complex systems, one considers each individual element as being time dependent to study how local interactions can lead to large-scale synchronizations. The aforementioned systems are governed by differential equation models in which there are explicit time-dependent terms. Despite the prevalence of time-dependent dynamics in nature, there has been only limited research on the prediction and analysis of temporal responses of non-autonomous systems. To address this, here, a neural machine is constructed for forecasting the temporal evolution of the chaotic response of a forced Duffing oscillator, which has a linear stiffness term and a nonlinear (cubic) stiffness term. This oscillator, which has been the subject of experimental and numerical investigations over a wide range of system parameter values, is known to have different response characteristics, depending on the nature of the physical system being studied.16–22 In the present efforts, different numerical and experimental chaotic time-series data are considered to illustrate the effectiveness of the constructed neural network.

The rest of this work is organized as follows. In Sec. II, the experimental arrangement is described. In Sec. III, the modeling efforts undertaken are presented. Results obtained toward forecasting of chaotic responses are detailed in Sec. IV. Finally, concluding remarks are drawn together and presented in Sec. V.

As shown in Fig. 1, the experimental Duffing oscillator prototype consists of a cantilever steel structure with an attached tip mass magnet at its free end.23,24 The tip mass magnet is located in the magnetic field of another magnet that is fixed in a position close to it. The inter-magnet separation is varied and the magnet orientations are reversed to realize a representation of a nonlinear Duffing oscillator with either a hardening or a softening (monostable or bistable) nonlinearity. The other end of this system is excited through an electromagnetic shaker that is used to provide a harmonic excitation. The excitation provided by the shaker is along a direction normal to the longitudinal axis of the cantilever beam oscillator allowing for excitation of bending motions of the structure. Given the cantilever structure’s orientation, for the purpose of the current experimental arrangement, the influence of gravity is neglected in modeling the structural dynamics. The effects of the magnets are captured through system identification, as in prior work.23,24

FIG. 1.

(a) Experimental arrangement.24 (b) Realization of the forced Duffing oscillator.24 The separation between the magnets and their relative orientations are varied. An electromagnetic shaker is used to provide a harmonic excitation.


Several means were used to gather experimental data. The free-end displacement of the structure is measured by using a strain gauge, which is secured close to the base of the cantilever structure, and an NI cDAQ-9178 with an NI 9235 module. The harmonic excitation amplitude is measured by using a 3-axis accelerometer (SparkFun ADXL337) that is attached to the shaker head at the base of the cantilever structure. A LabVIEW® program was developed to provide the deterministic harmonic excitation input to the Brüel & Kjær electromagnetic shaker through NI modules. The same LabVIEW® program is also used for real-time data acquisition of the strain gauge and accelerometer signals. The natural frequencies of the system depend on the inter-magnet separation and the relative orientations of the magnets.

As mentioned earlier, both hardening and softening characteristics of the nonlinear system were studied in the experiments. The experimental arrangement is noted to be quite sensitive to the relative spacing of the magnets. When the two magnets repel each other, the system behaves as a bistable, softening nonlinear oscillator with two stable potential wells, since the zero tip deflection position is unstable. On the other hand, when the magnets attract each other, the system behaves as a monostable nonlinear oscillator with a hardening or softening characteristic, and the zero tip deflection position is stable. For the purpose of this article, the authors focused on a bistable Duffing oscillator system with softening characteristics. For all of the experimental studies, attention was on chaotic responses and associated data.

In this section, the authors follow their earlier work8 to briefly introduce the data-driven methodology pursued. Here, the focus is on utilizing response data to predict the future behavior of the considered system, in contrast to physics-based modeling, wherein one solves the problem from first principles. With a data-driven approach, one can relax the need to have or develop different models for different physical systems and can work with a surrogate model to examine the underlying physics. There are many types of data-driven approaches, which are based on neural networks, support vector machines, decision trees, genetic algorithms, and so on. Here, the authors build a neural machine as the surrogate model to predict and forecast the chaotic dynamics of non-autonomous systems.

In general, a data-driven approach to forecasting the responses of dynamical systems can be divided into two stages, namely, the training stage and the inference stage. For forecasting-type problems, recurrent neural networks are commonly used, given their capacity to deal with temporal correlations. In the training stage, optimization algorithms, such as stochastic gradient descent, are applied to adjust the surrogate model parameters in order to fit the training dataset. In the inference stage, the trained surrogate model is used with a new dataset that the model has not been exposed to previously, in order to test this model’s inference capacity.

A typical deep neural network architecture can be assembled by stacking three components: an encoder, a decoder, and hidden layers in between them, as shown in Fig. 2. In the group’s prior work, an additional component called the inhibitor was also introduced, as shown in this figure. The encoder is used to take the past observations of the system as an input and generate the context vector e. The decoder is used to decode the highly conceptualized e and unroll the information on the system response across future time horizons. The unique inhibitor component is used to inhibit the unbounded error growth that may arise due to a minuscule misrepresentation in the initial condition at time step n. In computational experiments conducted with the Lorenz and other chaotic systems, the group has used this inhibitor mechanism to realize superior forecasting capabilities.8 
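To make the arrangement of Fig. 2 concrete, a minimal PyTorch sketch of an LSTM-based encoder–decoder forecaster is given below. This is not the authors’ implementation: the layer sizes are placeholders, the decoder is rolled out autoregressively, and the inhibitor is represented only schematically by a mild damping of the decoder hidden state, since the full inhibitor mechanism is described in Ref. 8.

```python
# Illustrative sketch (not the authors' code) of an encoder-decoder recurrent
# network built from LSTM cells, with a schematic placeholder for the inhibitor.
import torch
import torch.nn as nn

class EncoderDecoderForecaster(nn.Module):
    def __init__(self, m, hidden=128, layers=2, inhibit_scale=0.95):
        super().__init__()
        self.encoder = nn.LSTM(m, hidden, num_layers=layers, batch_first=True)
        self.decoder = nn.LSTM(m, hidden, num_layers=layers, batch_first=True)
        self.readout = nn.Linear(hidden, m)
        # Assumed stand-in for the inhibitor: mild damping of the decoder state,
        # intended to limit error growth during the rollout (see Ref. 8).
        self.inhibit_scale = inhibit_scale

    def forward(self, x_past, q):
        # x_past: (batch, p, m) past observations; q: number of future steps.
        _, (h, c) = self.encoder(x_past)       # context vector e, held in (h, c)
        y_prev = x_past[:, -1:, :]             # seed the decoder with the last input
        outputs = []
        for _ in range(q):                     # autoregressive decoder rollout
            out, (h, c) = self.decoder(y_prev, (h, c))
            h = self.inhibit_scale * h         # schematic inhibitor action
            y_prev = self.readout(out)         # next predicted state vector
            outputs.append(y_prev)
        return torch.cat(outputs, dim=1)       # forecast of shape (batch, q, m)
```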

FIG. 2.

Encoder–decoder–hidden space architecture with an inhibitor.8 Long short term memory (LSTM) cells are used in the encoder and decoder.


It is helpful to understand the forecasting problem from a probabilistic perspective. Consider the probability for the occurrence of $Y$ after $X$, that is, $P(Y|X)$, where $X=\{x_1,\dots,x_p\}$, $x_i\in\mathbb{R}^m$, precedes $Y=\{y_1,y_2,\dots,y_q\}$, $y_i\in\mathbb{R}^m$. If both $X$ and $Y$ are generated from a dynamical system with certain initial conditions, then the probability $P(Y|X)\approx 1$.

As mentioned earlier, the authors use a deep recurrent neural network, parametrized by $\theta$, as the surrogate model to determine the conditional probability $P_m(Y|X;\theta)$, which is an approximation to the true but unknown distribution $P_d(Y|X)$. It is noted that $\theta$ consists of the neural weight matrices and bias vectors. The estimation of $\theta$ can be accomplished through the following optimization problem:8 

\hat{\theta} = \arg\min_{\theta} \, \mathbb{E}_{Y|X \sim \tilde{P}_d}\left[ -\log P_m(Y|X;\theta) \right].
(1)

Similar to the group’s earlier work,8 the authors use an encoder–decoder–hidden layers–inhibitor type of multi-layer stacked neural network to forecast the dynamics of a forced Duffing oscillator. Given a non-autonomous system’s explicit dependence on time, the authors increase the input signal dimension $m$ and reduce $p$ and $q$, the lengths of $X$ and $Y$, respectively. To further illustrate this point, suppose that before the modification, the signal is of dimension $m_1$ and the lengths of $X$ and $Y$ are, respectively, $p_1$ and $q_1$. After accounting for the explicit effect of time on the dynamics of the non-autonomous system, the new dimension $m_2$ and lengths $p_2$ and $q_2$ satisfy the following:

\begin{cases} m_1 < m_2, \quad p_1 > p_2, \quad q_1 > q_2, \\ m_1 p_1 = m_2 p_2, \quad m_1 q_1 = m_2 q_2. \end{cases}
(2)
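The reshaping in Eq. (2) amounts to folding consecutive samples of the measured signal into higher-dimensional input vectors, so that the total number of values in X (and in Y) is preserved. A small NumPy sketch with illustrative numbers (m1 = 1, p1 = 200, m2 = 8) is given below; these values are assumptions for illustration only.

```python
# Sketch of the reshaping in Eq. (2): consecutive scalar samples are folded
# into higher-dimensional input vectors, so that m increases while p decreases
# and the product m*p is preserved. The numbers below are illustrative.
import numpy as np

m1, p1 = 1, 200                  # original: scalar signal with 200 past samples
m2 = 8                           # enlarged input dimension (assumed)
p2 = (m1 * p1) // m2             # shortened sequence length, here 25

x = np.random.randn(p1, m1)      # stand-in for a measured displacement series
X = x.reshape(p2, m2)            # valid because m1*p1 == m2*p2, as in Eq. (2)
assert m1 * p1 == m2 * p2
print(X.shape)                   # (25, 8)
```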

According to Takens’ embedding theorem, the reconstructed dynamical system with an enlarged dimension $m$ has the same dynamical characteristics as the original system, provided that certain conditions are met. For the forced Duffing oscillator considered here, the authors do observe a performance boost of the neural machine with the inhibitor, in terms of forecasting the future behavior. In the implementation, the authors choose a Laplace-type loss function, as specified in Ref. 8. This is given by

L(x, y, \theta) = \sum_{k=1}^{r} \left| \hat{y}_k - y_k \right|,
(3)

resulting in the estimator for θ as

\hat{\theta} = \arg\max_{\theta} \sum_{k=1}^{r} \left( -\frac{\left| G(x_k;\theta) - y_k \right|}{\beta} \right),
(4)
= \arg\min_{\theta} \sum_{k=1}^{r} \left| \hat{y}_k - y_k \right|.
(5)

The Laplace loss function is used for the training of the neural network and the goal is to drive the error to 0. The different neural network parameters used for the different response forecasting studies are provided in Sec. IV along with the respective results. As presented in Sec. IV, the dimensions of the neural networks used with the experimental data and simulation data are different. In each case, the authors have carried out extensive hyperparameter tuning, including the dimension of the associated neural network, to push the neural machine’s performance and realize a long forecasting horizon in terms of the Lyapunov time.
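For illustration, a sketch of one training step with the Laplace-type loss of Eq. (3) is given below. It reuses the illustrative EncoderDecoderForecaster class sketched earlier; the data tensors are placeholders, and the choice of the Adam optimizer is an assumption, while the batch size and learning rate follow the values quoted with the figure captions in Sec. IV.

```python
# Sketch of one training step with the Laplace-type loss of Eq. (3), i.e., a
# sum of absolute errors between the predicted and true future windows. The
# model is the illustrative EncoderDecoderForecaster sketched above; the data
# tensors are placeholders, and the Adam optimizer is an assumed choice.
import torch

model = EncoderDecoderForecaster(m=8, hidden=128, layers=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x_past = torch.randn(128, 25, 8)    # batch of past windows, shape (batch, p, m)
y_true = torch.randn(128, 25, 8)    # corresponding future windows, shape (batch, q, m)

y_hat = model(x_past, q=y_true.shape[1])
loss = torch.sum(torch.abs(y_hat - y_true))   # Laplace (L1) loss of Eq. (3)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```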

As mentioned earlier, numerical and experimental datasets from the forced Duffing oscillator system are used to demonstrate the forecasting ability of the proposed neural machine for non-autonomous systems. It is noted that for the parameter values used in the experiments and simulations, all of the chosen initial conditions led to a chaotic attractor for which relevant data were collected. The results and discussions are presented next.

The equation of motion of a forced Duffing oscillator with a mass $m$, a viscous damping $c$, a linear stiffness $k_1$, a nonlinear stiffness $k_3$, a forcing amplitude $F$, and a forcing frequency $\omega$ can be nondimensionalized as described in the authors’ prior work.23–25 The nondimensional form of the equation of motion may be written as

\ddot{x} + \delta\dot{x} + \beta x + \alpha x^3 = \gamma \cos(\omega t),
(6)

where the different nondimensional parameters are as follows: δ is the viscous damping, β is the linear stiffness parameter, α is the scaled nonlinear (cubic) stiffness, γ is the scaled forcing amplitude, ω is the forcing frequency, and t represents the nondimensional time. The harmonic excitation is given by γcos(ωt). Different signs of the stiffness parameters β and α correspond to different Duffing oscillator characteristics, as presented in Table I.

TABLE I.

Stiffness parameters and different Duffing oscillator realizations.

Linear stiffness    Cubic stiffness    Oscillator characteristic
β > 0               α > 0              Monostable, hardening
β > 0               α < 0              Monostable, softening
β < 0               α > 0              Bistable, softening

In the current study, the authors have considered both a monostable Duffing oscillator with hardening characteristics and a bistable Duffing oscillator with softening characteristics. Equation (6) can be rewritten in state-space form as

\dot{x}_1 = x_2, \qquad \dot{x}_2 = -\delta x_2 - \beta x_1 - \alpha x_1^3 + \gamma \cos(\omega t),
(7)

where $x_1 = x$ and $x_2 = \dot{x}$.

The numerical time series data are obtained by solving the state-space form given by Eq. (7) and considering different initial conditions. The obtained time series response is used as input for the neural machine, and the output is the response prediction for future times. For the numerical simulations, the parameters α, β, δ, γ, and ω have been chosen such that the system exhibits a chaotic response.
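A sketch of how such time series can be generated is given below, using SciPy's solve_ivp on the state-space form of Eq. (7). The parameter values correspond to the bistable case of Eq. (8) with γ = 0.5; the initial condition and the sampling rate of 50 samples per forcing period are illustrative assumptions.

```python
# Sketch of how chaotic training time series can be generated from the
# state-space form in Eq. (7). The parameter values correspond to the bistable
# case of Eq. (8) with gamma = 0.5; the initial condition and the sampling
# rate (50 samples per forcing period) are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def duffing_rhs(t, x, delta, beta, alpha, gamma, omega):
    """Right-hand side of Eq. (7) for the forced Duffing oscillator."""
    x1, x2 = x
    return [x2, -delta * x2 - beta * x1 - alpha * x1**3 + gamma * np.cos(omega * t)]

params = (0.085, -0.5, 0.2, 0.5, 0.42)        # (delta, beta, alpha, gamma, omega)
T = 2.0 * np.pi / params[-1]                  # forcing period
t_eval = np.arange(0.0, 2000.0 * T, T / 50.0) # 50 samples per forcing period

sol = solve_ivp(duffing_rhs, (t_eval[0], t_eval[-1]), [0.1, 0.0],
                t_eval=t_eval, args=params, rtol=1e-9, atol=1e-12)
x1_series = sol.y[0]                          # displacement series used as training data
```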

For parameter values α>0 and β<0, as mentioned in Table I, the system behaves as a bistable Duffing oscillator with softening characteristics. For the first numerical experiment, the parameter values α = 0.2, β = −0.5, δ = 0.085, and ω = 0.42 are fixed in the numerical studies. After substituting these parameter values into Eq. (6), the resulting nondimensional form of the Duffing equation reads as follows:

\ddot{x} + 0.085\dot{x} - 0.5x + 0.2x^3 = \gamma \cos(0.42 t).
(8)

Here, the parameter γ is chosen as a control parameter and varied over a sufficient range to observe the chaotic response. The numerically obtained bifurcation diagram is shown in Fig. 3(a). The presence of a positive maximal Lyapunov exponent value λM is used to confirm that the system response is chaotic. In the current study, the authors have studied two cases with different γ values; the corresponding results are shown in Figs. 3(b) and 3(c), which were generated by using the forcing frequency as the strobe frequency.
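The stroboscopic maps of Figs. 3(b) and 3(c) can be produced by recording the simulated state once per forcing period after the transient has decayed. A sketch of this sampling is given below; it reuses the duffing_rhs function from the simulation sketch above, and the initial condition and the number of discarded transient periods are assumptions.

```python
# Sketch of the stroboscopic (Poincare) sampling behind Figs. 3(b) and 3(c):
# the state is recorded once per forcing period after the transient decays.
# duffing_rhs is the right-hand side defined in the simulation sketch above.
import numpy as np
from scipy.integrate import solve_ivp

def stroboscopic_map(gamma, delta=0.085, beta=-0.5, alpha=0.2, omega=0.42,
                     x0=(0.1, 0.0), n_transient=500, n_points=2000):
    T = 2.0 * np.pi / omega
    t_strobe = np.arange(0.0, (n_transient + n_points) * T, T)
    sol = solve_ivp(duffing_rhs, (0.0, t_strobe[-1]), x0, t_eval=t_strobe,
                    args=(delta, beta, alpha, gamma, omega), rtol=1e-9, atol=1e-12)
    # Discard the transient and keep one (x1, x2) point per forcing period.
    return sol.y[0, n_transient:], sol.y[1, n_transient:]

x1_map, x2_map = stroboscopic_map(gamma=0.5)   # strobe points as in Fig. 3(b)
```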

FIG. 3.

Plots of numerically obtained results for a softening Duffing oscillator given by Eq. (8). In plot (a), the authors show a bifurcation diagram with respect to the forcing amplitude γ. Plots (b) and (c) have the corresponding stroboscopic maps obtained for γ=0.5 and γ=1.7, respectively. (a) Bifurcation diagram on a Poincaré section constructed using the forcing frequency. (b) Stroboscopic map (γ=0.5). (c) Stroboscopic map (γ=1.7).


1. Response forecasting with numerical data: Forcing amplitude γ = 0.5

For γ=0.5, the maximal Lyapunov exponent value is found to be λM=0.0479>0, which confirms a chaotic response. The associated stroboscopic map obtained by using the forcing frequency is shown in Fig. 3(b).
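A common way to estimate the maximal Lyapunov exponent for such a forced oscillator is the two-trajectory (Benettin-type) renormalization procedure sketched below; this is offered only as an illustration and is not necessarily the method used by the authors. It reuses the duffing_rhs function sketched earlier, and the perturbation size and the numbers of transient and averaging periods are assumptions.

```python
# Illustrative two-trajectory (Benettin-type) estimate of the maximal Lyapunov
# exponent; not necessarily the procedure used by the authors. A reference
# trajectory and a slightly perturbed copy are advanced one forcing period at
# a time, and the separation is renormalized after each period.
import numpy as np
from scipy.integrate import solve_ivp

def max_lyapunov(gamma, delta=0.085, beta=-0.5, alpha=0.2, omega=0.42,
                 d0=1e-8, n_transient=200, n_steps=4000):
    T = 2.0 * np.pi / omega
    p = (delta, beta, alpha, gamma, omega)
    # Let the reference trajectory settle onto the attractor first.
    x = solve_ivp(duffing_rhs, (0.0, n_transient * T), [0.1, 0.0],
                  args=p, rtol=1e-9, atol=1e-12).y[:, -1]
    x_pert = x + np.array([d0, 0.0])
    log_growth = 0.0
    for k in range(n_transient, n_transient + n_steps):
        span = (k * T, (k + 1) * T)
        x = solve_ivp(duffing_rhs, span, x, args=p, rtol=1e-9, atol=1e-12).y[:, -1]
        x_pert = solve_ivp(duffing_rhs, span, x_pert, args=p,
                           rtol=1e-9, atol=1e-12).y[:, -1]
        d = np.linalg.norm(x_pert - x)
        log_growth += np.log(d / d0)
        x_pert = x + (d0 / d) * (x_pert - x)   # renormalize the separation
    return log_growth / (n_steps * T)          # estimate of lambda_M

# e.g., max_lyapunov(gamma=0.5) should be positive for the chaotic case above
```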

A non-dimensional Lyapunov time λMt is used to study the prediction horizon of the neural machine. In the present study, three different initial conditions are chosen, and each initial condition results in a different time response. The results are shown in Figs. 4(a)–4(c). The input time series fed into the neural machine is shown in black, and the ground truth data are shown in red. The blue dots are the predictions generated by the neural machine. The neural machine is used to predict the future time response regardless of the initial condition on the chaotic attractor. Even with a short time-history window as input, the constructed neural machine can be used to predict/forecast responses over long time windows.
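A sketch of the inference stage for one such case is given below: a short input window is folded into m-dimensional vectors, fed to the (trained) network, and the resulting forecast horizon is expressed in Lyapunov times. The model and the x1_series array are the ones from the earlier sketches (in practice, the model would be trained first), the window lengths are assumptions, and the sample spacing matches the T/50 sampling used in the simulation sketch.

```python
# Sketch of the inference stage for one test case: a short input window is fed
# to the trained network, and the forecast horizon is expressed in Lyapunov
# times lambda_M * t. The model and x1_series come from the earlier sketches;
# window lengths are assumed.
import numpy as np
import torch

lam_M = 0.0479                     # maximal Lyapunov exponent reported for gamma = 0.5
dt = (2.0 * np.pi / 0.42) / 50.0   # sample spacing used in the simulation sketch
p, q, m = 25, 100, 8               # input length, forecast length, folded dimension

with torch.no_grad():
    window = torch.tensor(x1_series[:p * m], dtype=torch.float32).reshape(1, p, m)
    y_hat = model(window, q=q).numpy().reshape(-1)   # forecast, unfolded to a scalar series

t_forecast = dt * np.arange(y_hat.size)
print(f"forecast horizon = {lam_M * t_forecast[-1]:.1f} Lyapunov times")
```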

FIG. 4.

Bistable, softening forced Duffing oscillator with γ=0.5 and γ=1.7. For each γ value, three different response data are presented. In plots (a)–(c), the authors show the prediction results from the neural machine for γ=0.5. Plots (d)–(f) have the prediction results from the neural machine for γ=1.7. The black curves represent the input data to the neural machine, and the blue dots are the prediction results generated from the neural machine. The red curves represent the ground truth data obtained by numerically integrating the governing system. The ground truth data are plotted with the predicted/forecasted results in blue for the sake of comparison. Neural network parameters: batch size: 128, learning rate: 0.001, LSTM dimension: 128, number of stacked layers for both the encoder and decoder: 2, keep rate: 0.95, number of inhibitors: 2. (a) Prediction case 1 (γ=0.5). (b) Prediction case 2 (γ=0.5). (c) Prediction case 3 (γ=0.5). (d) Prediction case 1 (γ=1.7). (e) Prediction case 2 (γ=1.7). (f) Prediction case 3 (γ=1.7).


2. Response forecasting with numerical data: Forcing amplitude γ = 1.7

In this case, the maximal Lyapunov exponent is λM=0.076>0. The corresponding stroboscopic map is illustrated in Fig. 3(c). Here, similar to the previous case, three different initial conditions are chosen, and the corresponding prediction results are shown in Figs. 4(d)–4(f). Again, as observed in Figs. 4(a)–4(c), regardless of the initial condition on the chaotic attractor, the neural machine can be used to predict the response over a longer time horizon than the initial input time window.

As noted in Table I, for α>0 and β>0, the system behaves as a monostable Duffing oscillator with hardening characteristics. Here, the parameter values α = 5, β = 1, δ = 0.02, and ω = 0.5 are fixed for the numerical studies. After substituting these parameter values into Eq. (6), the resulting nondimensional form of the Duffing equation reads as follows:

\ddot{x} + 0.02\dot{x} + x + 5x^3 = \gamma \cos(0.5 t).
(9)

Similar to the previous case of the bistable Duffing oscillator, the parameter γ is chosen as a control parameter and varied over a sufficient range to observe the chaotic response.19 The numerically obtained bifurcation diagram is shown in Fig. 5(a). Again, a positive maximal Lyapunov exponent value λM confirms the chaotic nature of the response.
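A brute-force construction of such a bifurcation diagram is sketched below: for each forcing amplitude, the post-transient stroboscopic samples are plotted against γ. It reuses the stroboscopic_map function sketched earlier with the parameter values of Eq. (9); the γ sweep range and resolution are assumptions.

```python
# Sketch of a brute-force bifurcation diagram like those in Figs. 3(a) and
# 5(a): for each forcing amplitude, the post-transient stroboscopic samples
# are plotted against gamma. stroboscopic_map is the sketch given earlier;
# the parameter values match Eq. (9), and the gamma range is an assumption.
import numpy as np
import matplotlib.pyplot as plt

gammas = np.linspace(2.0, 10.0, 200)        # assumed sweep range for the hardening case
for g in gammas:
    x1_map, _ = stroboscopic_map(g, delta=0.02, beta=1.0, alpha=5.0, omega=0.5,
                                 n_transient=300, n_points=200)
    plt.plot(np.full_like(x1_map, g), x1_map, ',k')   # one column of strobe points per gamma
plt.xlabel('forcing amplitude gamma')
plt.ylabel('x1 sampled once per forcing period')
plt.show()
```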

FIG. 5.

Plots of numerically obtained results for a hardening Duffing oscillator given by Eq. (9). In plot (a), the authors show a bifurcation diagram on a Poincaré section with respect to the forcing amplitude γ. Plot (b) has the corresponding stroboscopic map at γ=7. In plot (c), the prediction result for one particular case is presented. The black curves correspond to neural machine input, and the blue dots are the prediction results generated from the neural machine. The red curves are the ground truth data which are plotted along with the forecasted results for the sake of comparison. Neural network parameters: batch size: 64, learning rate: 0.001, LSTM dimension: 128, number of stacked layers for both the encoder and decoder: 2, keep rate: 0.95, number of inhibitors: 2. (a) Bifurcation diagram on a Poincaré section constructed using the forcing frequency. (b) Stroboscopic map. (c) Prediction case.


Here, the authors have considered a chaotic response observed for the forcing amplitude γ=7. The corresponding stroboscopic map of the response is shown in Fig. 5(b). Similar to the numerical studies with the bistable Duffing oscillator, different initial conditions are chosen on the chaotic attractor. The prediction results obtained for one initial condition are shown in Fig. 5(c) as a representative result. Again, with a short window of input, one is able to make predictions over windows longer than the initial input window. However, the neural machine parameters are different in this case.

Experimental studies were conducted with a forced, bistable Duffing oscillator prototype with softening characteristics. Similar to the numerical studies, the forcing amplitude γ and forcing frequency ω were chosen so that the response of the system is on a chaotic attractor. After parametric identification through curve fitting of the experimentally obtained frequency response curve to the analytically obtained response curve, the numerically obtained bifurcation diagram for the experimental system is used to identify the forcing amplitude γ and forcing frequency ω values that can be used to generate chaotic responses.19 The experimental studies for these parameter values confirm the presence of chaotic dynamics. The experimental time series data are obtained by repeating the experiments for different values of the forcing amplitude γ in the chaotic region. Data are collected after the motions settle down on a chaotic attractor.
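A sketch of such a parametric identification step is given below. The authors fit against an analytically obtained response curve;19 in this sketch, for simplicity, the model amplitude is instead obtained by direct simulation of Eq. (6) at each excitation frequency, and the measured frequency-response data are placeholders. The duffing_rhs function is the one sketched earlier.

```python
# Sketch of the parametric identification step: the nondimensional model
# parameters are adjusted so that the simulated steady-state response
# amplitude matches the measured frequency-response curve. The measured data
# below are placeholders, and direct simulation replaces the analytical
# response curve used in Ref. 19.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def steady_state_amplitude(params, omega, gamma, n_transient=200, n_measure=50):
    delta, beta, alpha = params
    T = 2.0 * np.pi / omega
    sol = solve_ivp(duffing_rhs, (0.0, (n_transient + n_measure) * T), [0.1, 0.0],
                    args=(delta, beta, alpha, gamma, omega),
                    t_eval=np.linspace(n_transient * T, (n_transient + n_measure) * T, 5000),
                    rtol=1e-8, atol=1e-10)
    return 0.5 * (sol.y[0].max() - sol.y[0].min())   # half peak-to-peak amplitude

def residuals(params, omega_exp, amp_exp, gamma):
    return [steady_state_amplitude(params, w, gamma) - a
            for w, a in zip(omega_exp, amp_exp)]

omega_exp = np.linspace(0.3, 0.6, 12)   # placeholder: measured excitation frequencies
amp_exp = np.ones_like(omega_exp)       # placeholder: measured response amplitudes
gamma_exp = 0.3                         # placeholder: excitation amplitude used in the sweep

fit = least_squares(residuals, x0=[0.05, -0.5, 0.2], args=(omega_exp, amp_exp, gamma_exp))
delta_id, beta_id, alpha_id = fit.x     # identified nondimensional parameters
```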

As with the numerical data, a portion of the collected experimental data is used as an input to the neural machine, and the output is compared with ground truth data obtained from the experiments. The prediction results obtained with the neural machine are presented in Fig. 6. The neural machine predictions are found to follow the experimental data well, and the forecasting windows are longer than the input windows, which are different in the three cases considered. With the constructed neural machine, the authors are able to forecast the response reasonably well over ten Lyapunov times. It should be noted that with the help of hyperparameter tuning and larger numbers of hidden layers and inhibitors, the deep neural network can be used for forecasting over longer time horizons. The neural machine parameters are different from those used for forecasting with the numerical datasets.

FIG. 6.

Experimental case study. In plots (a)–(c), three different response time histories are considered. The black curves are input data segments, and the blue dots are the prediction results generated from the neural machine. The red curves represent experimentally obtained ground truth data, which are plotted with the forecasted results for the sake of comparison. With the help of hyperparameter tuning, the deep neural network is found to provide forecasting over a long Lyapunov time with high numbers of hidden layers and inhibitors. Neural network parameters: batch size: 64, learning rate: 0.001, LSTM dimension: 512, number of stacked layers for both encoder and decoder: 4, keep rate: 0.95, number of inhibitors: 3. (a) Prediction Number 1. (b) Prediction Number 2. (c) Prediction Number 3.


In this paper, the authors have used a neural machine in the form of a deep recurrent neural network with an LSTM cell based encoder and decoder, along with an inhibitor mechanism, and illustrated the response forecasting ability of this machine for non-autonomous, chaotic systems. In particular, as representative examples, forced Duffing oscillators with hardening and softening characteristics are considered in numerical and experimental studies. The data obtained from these studies are used to train the neural machine and make response predictions over time horizons longer than the initial input data window. As discussed, with different choices of the neural machine parameters, one can influence the forecasting horizon. The results presented here are the first of their type for response forecasting of a non-autonomous chaotic system based on experimental data by using a neural machine. Given the prevalence of non-autonomous systems in different fields of engineering and beyond, the promising results obtained here can serve as a catalyst for future neural machine based response forecasting efforts.

The authors gratefully acknowledge the support received for this work through NSF Grant Nos. CMMI1436141 and CMMI1854532 and the computational resources provided by the Maryland Advanced Research Computing Center (MARCC).

The data that support the findings of this study are available as time series data within the article.

1. I. T. Georgiou and I. B. Schwartz, “The slow invariant manifold of a conservative pendulum-oscillator system,” Int. J. Bifurc. Chaos 6, 673–692 (1996).
2. V. In, M. L. Spano, J. D. Neff, W. L. Ditto, C. S. Daw, K. D. Edwards, and K. Nguyen, “Maintenance of chaos in a computational model of a thermal pulse combustor,” Chaos 7, 605–613 (1997).
3. M. Perc and M. Marhl, “Chaos in temporarily destabilized regular systems with the slow passage effect,” Chaos Solitons Fractals 27, 395–403 (2006).
4. W. Yang, M. Ding, A. J. Mandell, and E. Ott, “Preserving chaos: Control strategies to preserve complex dynamics with potential relevance to biological disorders,” Phys. Rev. E 51, 102 (1995).
5. A. Hastings, C. L. Hom, S. Ellner, P. Turchin, and H. C. J. Godfray, “Chaos in ecology: Is mother nature a strange attractor?,” Annu. Rev. Ecol. Syst. 24, 1–33 (1993).
6. R. Wang and B. Balachandran, “Extreme wave formation in unidirectional sea due to stochastic wave phase dynamics,” Phys. Lett. A 382, 1864–1872 (2018).
7. D. A. Hsieh, “Chaos and nonlinear dynamics: Application to financial markets,” J. Fin. 46, 1839–1877 (1991).
8. R. Wang, E. Kalnay, and B. Balachandran, “Neural machine-based forecasting of chaotic dynamics,” Nonlinear Dyn. 98, 2903–2917 (2019).
9. A. Karimi and M. R. Paul, “Extensive chaos in the Lorenz-96 model,” Chaos 20, 043105 (2010).
10. N. A. Kudryashov, “Exact solutions of the generalized Kuramoto-Sivashinsky equation,” Phys. Lett. A 147, 287–291 (1990).
11. P. R. Vlachas, W. Byeon, Z. Y. Wan, T. P. Sapsis, and P. Koumoutsakos, “Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks,” Proc. R. Soc. A 474, 20170844 (2018).
12. J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott, “Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach,” Phys. Rev. Lett. 120, 024102 (2018).
13. G. Lemaître, “Evolution of the expanding universe,” Proc. Natl. Acad. Sci. USA 20, 12 (1934).
14. S. W. Hawking and G. F. R. Ellis, The Large Scale Structure of Space-Time (Cambridge University Press, 1973), Vol. 1.
15. S. M. Reppert and D. R. Weaver, “Coordination of circadian timing in mammals,” Nature 418, 935–941 (2002).
16. A. H. Nayfeh and D. T. Mook, Nonlinear Oscillations (John Wiley & Sons, 2008).
17. A. Mallik, National Conference on Nonlinear Systems and Dynamics, NCNSD-2003 (Indian Institute of Technology, Kharagpur, India, 2003).
18. V. K. Agarwal, “Response control in nonlinear systems with noise,” Ph.D. thesis (University of Maryland, College Park, 2019).
19. V. Agarwal, J. A. Yorke, and B. Balachandran, “Noise-induced chaotic-attractor escape route,” Nonlinear Dyn. 98, 2903–2917 (2020).
20. F. Moon, Experiments on Chaotic Motions of a Forced Nonlinear Oscillator: Strange Attractors (American Society of Mechanical Engineers, 1980), Vol. 47, pp. 638–644.
21. J. Gottwald, L. Virgin, and E. Dowell, “Experimental mimicry of Duffing’s equation,” J. Sound Vib. 158, 447–467 (1992).
22. M. Todd and L. Virgin, An Experimental Verification of Basin Metamorphoses in a Nonlinear Mechanical System (World Scientific, 1997), Vol. 7, pp. 1337–1357.
23. V. Agarwal and B. Balachandran, “Noise-influenced response of Duffing oscillator,” in ASME 2015 International Mechanical Engineering Congress and Exposition (American Society of Mechanical Engineers, 2015), p. V04BT04A019.
24. V. Agarwal, X. Zheng, and B. Balachandran, “Influence of noise on frequency responses of softening Duffing oscillators,” Phys. Lett. A 382, 3355–3364 (2018).
25. V. Agarwal, J. Sabuco, and B. Balachandran, “Safe regions with partial control of a chaotic system in the presence of white Gaussian noise,” Int. J. Non Linear Mech. 94, 3–11 (2017).