A general, variational approach to derive low-order reduced models from possibly non-autonomous systems is presented. The approach is based on the concept of optimal parameterizing manifold (OPM), which substitutes more classical notions of invariant or slow manifolds when the breakdown of “slaving” occurs, i.e., when the unresolved variables can no longer be expressed as an exact functional of the resolved ones. The OPM provides, within a given class of parameterizations of the unresolved variables, the manifold that optimally averages out these variables as conditioned on the resolved ones. The class of parameterizations retained here consists of continuous deformations of parameterizations that are rigorously valid near the onset of instability. These deformations are produced through the integration of auxiliary backward–forward systems built from the model’s equations and lead to analytic formulas for the parameterizations. In this modus operandi, the backward integration time is the key parameter to select, per scale/variable to parameterize, in order to derive the relevant parameterizations, which are bound to no longer be exact away from the instability onset due to the breakdown of slaving typically encountered, e.g., in chaotic regimes. The selection criterion is a data-informed minimization of a least-squares parameterization defect. It is thus shown, through optimization of the backward integration time per scale/variable to parameterize, that skillful OPM reduced systems can be derived for predicting with accuracy higher-order critical transitions or catastrophic tipping phenomena, while the training of our parameterization formulas takes place in regimes prior to these transitions.
We introduce a framework for model reduction that produces analytic formulas to parameterize the neglected variables. These parameterizations are built from the model’s equations, in which only a scalar is left to calibrate per scale/variable to parameterize. This calibration is accomplished through a data-informed minimization of a least-squares parameterization defect. By their analytic fabric, the resulting parameterizations benefit from physical interpretability. Furthermore, our hybrid framework—analytic and data-informed—enables us to bypass the so-called extrapolation problem, known to be an important issue for purely data-driven, machine-learned parameterizations. Here, by training our parameterizations prior to the transitions, we are able to perform accurate predictions of these transitions via the corresponding reduced systems.
I. INTRODUCTION
The framework adopted is that of Ref. 12, which allows for deriving analytic parameterizations of the neglected scales that represent efficiently the nonlinear interactions with the retained, resolved variables. The originality of the approach of Ref. 12 is that the parameterizations are hybrid in their nature in the sense that they are both model-adaptive, based on the dynamical equations, and data-informed by high-resolution simulations.
For a given system, the optimization of these parameterizations thus benefits from their analytical origin, resulting in only a few parameters to learn over short-in-time training intervals, mainly one scalar parameter per scale to parameterize. Their analytic fabric contributes to their physical interpretability compared to, e.g., parameterizations that would be machine learned. The approach is applicable to deterministic systems, finite- or infinite-dimensional, and is based on the concept of optimal parameterizing manifold (OPM) that substitutes the more classical notion of slow or invariant manifolds when a breakdown of “slaving” relationships between the resolved and unresolved variables takes place,12 i.e., when the latter can no longer be expressed as an exact functional of the former.
By construction, the OPM takes its origin in a variational approach. The OPM is the manifold that optimally averages out the neglected variables as conditioned on the current state of the resolved ones; see Ref. 12, Theorem 4. The OPM allows for computing approximations of the conditional expectation term arising in the Mori–Zwanzig approach to stochastic modeling of neglected variables;13–17 see also Theorem 5 in Ref. 12 as well as Refs. 18 and 19.
The approach introduced in Ref. 12 to determine OPMs, in practice, consists of first deriving analytic parametric formulas that match rigorous leading approximations of unstable/center manifolds or slow manifolds near, e.g., the onset of instability (Ref. 12, Sec. 2), and then performing a data-driven optimization of the manifold formulas’ parameters to handle regimes further away from that instability onset (Ref. 12, Sec. 4). In other words, efficient parameterizations away from the onset are obtained as continuous deformations of parameterizations near the onset; deformations that are optimized by minimizing cost functionals tailored to the dynamics and measuring the defect of parameterization.
There, the optimization stage allows for alleviating the small denominator problems rooted in small spectral gaps and for improving the parameterization of small-energy but dynamically important variables. Thereby, relevant parameterizations are derived in regimes where constraining spectral gap or timescale separation conditions are responsible for the well-known failure of standard invariant/inertial or slow manifolds.12,20–25 As a result, the OPM approach (i) provides a natural remedy to the excessive backscatter transfer of energy to the low modes classically encountered in turbulence (Ref. 12, Sec. 6) and (ii) allows for deriving optimal models of the slow motion for fast-slow systems not necessarily in the presence of timescale separation.18,19 Due to their optimal nature, OPMs also allow for providing accurate parameterizations of dynamically important small-energy variables; a well-known issue encountered in the closure of chaotic dynamics and related to (i).
This work examines the ability of the theory-guided and data-informed parameterization approach of Ref. 12 to derive reduced systems able to predict higher-order transitions or catastrophic tipping phenomena, when the original, full system is possibly subject to time-dependent perturbations. From a data-driven perspective, this problem is tied to the so-called extrapolation problem, known, for instance, to be an important issue in machine learning, requiring more advanced methods such as, e.g., transfer learning.26 While the past few decades have witnessed a surge of and advances in data-driven reduced-order modeling methodologies,27,28 the prediction of non-trivial dynamics for parameter regimes not included in the training dataset remains a challenging task. Here, the OPM approach, by its hybrid framework—analytic and data-informed—allows us to bypass this extrapolation problem at a minimal cost in terms of learning efforts, as illustrated in Secs. IV and V. As shown below, training the OPMs at parameter values prior to the transitions taking place is sufficient to perform accurate predictions of these transitions via OPM reduced systems.
The remainder of this paper is organized as follows. We first survey in Sec. II the OPM approach and provide the general variational parameterization formulas for model reduction in the presence of a time-dependent forcing. We then expand in Sec. III on the backward–forward (BF) systems method12,29 to derive these formulas, clarifying fundamental relationships with homological equations arising in normal form and invariant manifold theories.12,30–32 Section III C completes this analysis with analytic versions of these formulas in view of applications. The ability to predict noise-induced transitions and catastrophic tipping phenomena33,34 through OPM reduced systems is illustrated in Sec. IV for a system arising in the study of the thermohaline circulation. Successes in predicting higher-order transitions such as period-doubling and chaos are reported in Sec. V for a Rayleigh–Bénard problem and contrasted with the difficulties encountered by standard manifold parameterization approaches in Appendix D. We then summarize the findings of this article with some concluding remarks in Sec. VI.
II. VARIATIONAL PARAMETERIZATIONS FOR MODEL REDUCTION
We summarize in this section the notion of variational parameterizations introduced in Ref. 12. The state space is decomposed as the sum of the subspace of resolved (coarse-scale) variables and the subspace of unresolved (small-scale) variables. In practice, the former is spanned by the first few eigenmodes of the linearized operator with dominant real parts (e.g., unstable) and the latter by the rest of the modes, typically stable.
Thus, minimizing each parameterization defect (in its free parameter) is a natural idea to enforce closeness, in a least-squares sense, of the trajectory to the parameterizing manifold. Panel (a) in Fig. 1 illustrates (5) for one component of the unresolved variables: the optimal parameterization minimizing (4) is shown for a case where the dynamics is transverse to it (i.e., in the absence of slaving), while it still provides the best parameterization in a least-squares sense.
Panel (a): The black curve represents the training trajectory, here shown to be transverse to the parameterizing manifolds (i.e., absence of exact slaving). The gray smooth curves represent the time-dependent OPM aimed at tracking the state of the neglected variable at each time as a function of the resolved variables. Panel (b): A schematic of the dependence of the parameterization defect given by (4) on the parameterization's free parameter. The red asterisk marks the minimum of the defect, achieved at the optimal value.
In practice, the normalized parameterization defect is often used to compare different parameterizations. It is defined by normalizing the defect (4) by the energy of the variables to parameterize. For instance, the flat manifold corresponding to no parameterization of the neglected variables (Galerkin approximations) comes with a normalized defect equal to unity at all times, while a manifold corresponding to a perfect slaving relationship between the resolved and unresolved variables comes with a zero defect. When the normalized defect stays below unity, the underlying manifold will be referred to as a parameterizing manifold (PM). Once the parameters are optimized by minimization of (4), the resulting manifold will be referred to as the optimal parameterizing manifold (OPM).35
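For concreteness, the following sketch shows how a normalized parameterization defect of this type could be estimated from trajectory data. It is a minimal illustration: the names (`X`, `Y`, `Phi`) and the simple energy normalization are assumptions for this sketch, not the paper's exact formula (4).

```python
import numpy as np

def normalized_parameterization_defect(X, Y, Phi):
    """Estimate a normalized parameterization defect on a training trajectory.

    X   : (T, m_c) array of resolved (low-mode) amplitudes over time
    Y   : (T, m_s) array of unresolved (high-mode) amplitudes over time
    Phi : callable mapping a resolved state x -> predicted high-mode state

    Returns ~1 for the trivial (flat) parameterization Phi = 0,
    0 for exact slaving, and < 1 for a parameterizing manifold.
    """
    residual = np.array([y - Phi(x) for x, y in zip(X, Y)])
    defect = np.mean(np.sum(residual**2, axis=1))   # time-averaged squared error
    energy = np.mean(np.sum(Y**2, axis=1))          # normalization by high-mode energy
    return defect / energy

# Example: the flat manifold Phi = 0 gives a normalized defect equal to 1.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((500, 2)), rng.standard_normal((500, 3))
print(normalized_parameterization_defect(X, Y, lambda x: np.zeros(3)))
```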
We conclude this section with a few words on practical considerations. As documented in Ref. 12, Secs. 5 and 6, the amount of training data required to reach a robust estimation of the optimal backward integration time is often comparable to the time horizon needed to resolve the decay of correlations of the high-mode amplitudes. For multiscale systems, one thus often needs a training dataset sufficiently long to resolve the slowest decay of temporal correlations among the scales to parameterize. On the other hand, by benefiting from their dynamical origin, i.e., the model’s equations, the parameterization formulas employed in this study (see Sec. III) often allow, in practice, for reaching satisfactory OPM approximations when optimized over training intervals shorter than these characteristic timescales.
When these conditions are met, the minimization of the parameterization defect (4) is performed by a simple gradient-descent algorithm (Ref. 12, Appendix). There, the first local minimum that one reaches often corresponds to the OPM; see Secs. IV and V and Ref. 12, Secs. 5 and 6. In the rare occasions where the parameterization defect exhibits more than one local minimum and the corresponding local minima are close to each other, criteria involving collinearity between the features to parameterize and the parameterization itself can be designed to further assist the OPM selection. Section V D illustrates this point with the notion of parameterization correlation.
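A minimal sketch of this selection step, assuming the defect is simply evaluated on a grid of candidate backward integration times and the first local minimum is retained (with a fallback to the global one), is given below; the function names are illustrative, and the paper's actual procedure is the gradient descent of Ref. 12, Appendix A.

```python
import numpy as np

def select_tau(taus, defect_of_tau):
    """Pick the first local minimum of the defect over a grid of backward
    integration times; fall back to the global minimum if the defect is
    monotone on the grid.

    taus          : increasing 1D array of candidate backward times
    defect_of_tau : callable tau -> normalized parameterization defect
    """
    q = np.array([defect_of_tau(t) for t in taus])
    for i in range(1, len(q) - 1):
        if q[i] < q[i - 1] and q[i] < q[i + 1]:   # first local minimum
            return taus[i], q[i]
    i = int(np.argmin(q))
    return taus[i], q[i]
```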
III. VARIATIONAL PARAMETERIZATIONS: EXPLICIT FORMULAS
A. Homological equation and backward–forward systems: Time-dependent forcing
Another key element was pointed out in Ref. 29, Chap. 4, in the quest for new insights about invariant manifolds in general, and more specifically for the approximation of stochastic invariant manifolds of stochastic PDEs,36 along with the rigorous low-dimensional approximation of their dynamics.37 It consists of the method of backward–forward (BF) systems, later revealed in Ref. 12 as a key tool to produce parameterizations (based on the model’s equations) that are relevant beyond the domain of applicability of invariant manifold theory, i.e., away from the onset of instability.
We want to gain insight into the interpretability of such parameterizations in the context of forced-dissipative systems such as Eq. (1). For this purpose, we clarify the fundamental equation satisfied by our parameterizations; the goal here is to characterize the analog of (7) in this non-autonomous context. To simplify, we restrict the analysis to a particular case. The next lemma provides the sought equation.
The proof of this lemma is found in Appendix A.
As will become apparent below, the dependence on the backward integration time of the terms in (I) is meant to control the small denominators that arise in the presence of a small spectral gap between the spectra of the linearized operator restricted to the resolved and unresolved subspaces, which typically leads to over-parameterization when standard invariant/inertial manifold theory is applied in such situations; see Remark III.1 and Ref. 12, Sec. 6.
The terms in (II) are responsible for the presence of exogenous memory terms in the solution to the homological equation (16), i.e., in the parameterization; see Eq. (31).
Thus, in view of Lemma III.1 and what precedes, the small-scale parameterizations (15) obtained by solving the BF system (11) over finite time intervals can be conceptualized as perturbed solutions to the homological equation (18) arising in the computation of invariant manifolds and normal forms of non-autonomous dynamical systems (Ref. 31, Sec. 2.2). The perturbative terms brought by the dependence on the backward integration time play an essential role in covering situations beyond the domain of application of normal form and invariant manifold theories. As explained in Sec. III C and illustrated in Secs. IV and V, these terms can be optimized to ensure skillful parameterizations for predicting, by reduced systems, higher-order transitions escaping the domain of validity of these theories.
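To fix ideas, here is a rough numerical sketch of a backward–forward integration of the kind discussed above, assuming a one-layer structure (linear backward equation for the low modes, forward equation for the high modes forced by the low-mode self-interaction). It illustrates the construct under these stated assumptions and is not a reproduction of the paper's Eq. (11); all names are placeholders.

```python
import numpy as np

def bf_parameterization(xi, tau, Lc, Ls, B_ss, n_steps=200):
    """Sketch of a backward-forward (BF) integration producing a high-mode
    parameterization at the current low-mode state xi.

    Assumed one-layer structure (illustrative only):
      backward:  dp/ds = Lc p,                     p(0)    = xi,  s in [-tau, 0]
      forward :  dq/ds = Ls q + B_ss(p(s), p(s)),  q(-tau) = 0
    and the parameterization is q at s = 0.

    Lc, Ls : matrices acting on the low- and high-mode amplitudes
    B_ss   : callable (p, p) -> projection of the quadratic interaction
             onto the high modes
    """
    ds = tau / n_steps
    # Backward pass: store p(s) on the grid s = -tau, ..., 0.
    p = np.empty((n_steps + 1, len(xi)), dtype=complex)
    p[n_steps] = xi
    for k in range(n_steps, 0, -1):              # simple Euler, integrated backward
        p[k - 1] = p[k] - ds * (Lc @ p[k])
    # Forward pass from q(-tau) = 0 up to s = 0, forced by the backward solution.
    q = np.zeros(Ls.shape[0], dtype=complex)
    for k in range(n_steps):
        q = q + ds * (Ls @ q + B_ss(p[k], p[k]))
    return q
```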
B. Homological equation and backward–forward systems: Constant forcing
Assume that the forcing is constant in time and that the linear operator is diagonalizable over its eigenbasis. Assume furthermore that the subspace of unresolved variables contains only stable eigenmodes.
Conditions similar to (24) arise in the smooth linearization of dynamical systems near an equilibrium.42 Here, condition (24) implies that the eigenvalues of the stable part satisfy a Sternberg condition of a certain order42 with respect to the eigenvalues associated with the modes spanning the reduced state space.
This theorem is essentially a consequence of Ref. 12, Theorems 1 and 2, in which condition (24) is a stronger version of that used for Ref. 12, Theorem 2; see also Ref. 12, Remark 1 (iv). This condition is necessary and sufficient here for the limiting parameterization to be well defined. The convergence is straightforward since the unresolved modes are stable by assumption and the forcing is constant. The derivation of (25) follows the same lines as that of Ref. 12, Eq. (4.6).
One can also generalize Theorem III.1 to the case where the forcing is time-dependent, provided that it satisfies suitable conditions ensuring that the quantity given by (22) is well defined and that the non-resonance condition (24) is suitably augmented. We leave the precise statement of such a generalization to future work. For the applications considered in later sections, the forcing term is either constant or of sublinear growth. For such cases, the limit is always well defined under the assumptions of Theorem III.1. We turn now to the explicit formulas of the parameterizations based on the BF system (11).
C. Explicit formulas for variational parameterizations
We provide in this section closed-form formulas of parameterizations for forced dissipative systems such as Eq. (1). We first consider the case where the forcing is constant in time and then deal with the time-dependent forcing case.
1. The constant forcing case
In the case where the nonlinearity is quadratic, such formulas are easily accessible since (27) is integrable. It remains integrable in the presence of higher-order terms, but we provide the details here for the quadratic case only, leaving the extension to the interested reader.
(OPM balances small spectral gaps)
Theorem III.1 teaches us that, when an invariant manifold exists, not only does the parameterization defined in (29) converge toward a well-defined quantity as the backward integration time goes to infinity, but so do the coefficients involved in it [see (B1)–(B3)]. For parameter regimes where such a manifold fails to exist, some of the spectral gaps involved in (B2) and (B3) can become small, leading to the well-known small spectral gap issue25 typically manifested by an over-parameterization of the unresolved modes' amplitudes when the Lyapunov–Perron parameterization (22) is employed; see Ref. 12, Sec. 6. The presence of the backward integration times through the exponential terms in (29) and (B2) and (B3) allows for balancing these small spectral gaps after optimization and notably improves the parameterization and closure skills; see Sec. V and Appendix D.
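As an illustration of this balancing effect, the snippet below evaluates a coefficient of the assumed form (1 − e^(−δτ))/δ, where δ stands for a spectral gap and τ for the backward integration time. This form is only a plausible stand-in for the exact coefficients (B2) and (B3), but it shows how a finite τ caps the small denominator 1/δ that the Lyapunov–Perron limit would produce.

```python
import numpy as np

def gap_balanced_coefficient(lam_i, lam_j, lam_n, tau):
    """Illustrative tau-dependent interaction coefficient (assumed form,
    not the paper's exact (B2)-(B3)).

    As tau -> infinity it reduces to 1/(lam_i + lam_j - lam_n), a
    Lyapunov-Perron small denominator; a finite tau keeps its magnitude
    bounded when the spectral gap delta = lam_i + lam_j - lam_n is small.
    """
    delta = lam_i + lam_j - lam_n
    if np.isclose(delta, 0.0):
        return tau                       # limiting value as delta -> 0
    return (1.0 - np.exp(-delta * tau)) / delta

# Small spectral gap: the infinite-tau coefficient blows up (1000 here),
# while the finite-tau one stays bounded (about 49).
print(gap_balanced_coefficient(0.011, -0.01, 0.0, tau=50.0), 1.0 / 0.001)
```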
2. The time-dependent forcing case
This formula gives the solution to the homological equation (16) of Lemma III.1 in which the projector therein is replaced by the projector onto the mode whose amplitude is parameterized. Clearly, the terms in (I) are produced by the time-dependent forcing. They are functionals of the past forcing and thus convey exogenous memory effects.
This latter computational aspect is important not only for simulation purposes, when the corresponding OPM reduced system is run online, but also for training purposes, in the search of the optimal backward integration times during the offline minimization stage of the parameterization defect.
If time-dependent forcing terms are present in the reduced state space , then the BF system (27) can still be solved analytically albeit leading to more involved integral terms than in (I) in the corresponding parameterization. This aspect will be detailed elsewhere.
3. OPM reduced systems
We emphasize finally that, from a data-driven perspective, the OPM reduced system benefits from its dynamical origin. By construction, only a scalar parameter is optimized per mode to parameterize. This parameter furthermore benefits from a dynamical interpretation, since it plays a key role in balancing the small spectral gaps known to be the main issue in practical applications of invariant or inertial manifold theory;25 see Remark III.1.
IV. PREDICTING TIPPING POINTS
A. The Stommel–Cessi model and tipping points
A simple model for oceanic circulation showing bistability is Stommel’s box model,50 where the ocean is represented by two boxes, a low-latitude box and a high-latitude box, each characterized by its temperature and salinity; see Ref. 51, Fig. 1. The Stommel model can be viewed as the simplest “thermodynamic” version of the Atlantic Meridional Overturning Circulation (AMOC) (Ref. 52, Chap. 6), a major ocean current system transporting warm surface waters toward the northern Atlantic that constitutes an important tipping element of the climate system; see Refs. 53 and 54.
Cessi in Ref. 51 proposed a variation of this model, based on the box model of Ref. 55, consisting of an ODE system describing the evolution of the temperature and salinity differences between the boxes; see Ref. 51, Eq. (2.3). The Cessi model replaces the absolute-value functions involved in the original Stommel model by polynomial relations more amenable to analysis. The temperature and salinity differences are coupled via the density difference, approximated by a linear relation, which induces an exchange of mass between the boxes expressed, in Cessi’s formulation, in terms of the Poiseuille transport coefficient, the volume of a box, and the diffusion timescale. The coefficient multiplying the salinity difference is inversely proportional to the practical salinity unit, i.e., a unit based on the properties of sea water conductivity; see Ref. 51. In this simple model, the temperature difference relaxes at a given rate to a reference value in the absence of coupling between the boxes.
The nonlinear exchange of mass between the boxes is reflected by the nonlinear coupling terms in Eq. (36). These nonlinear terms lead to multiple equilibria in certain parts of the parameter space, in particular, when the forcing F is varied over a certain range while the other parameters are fixed. One can even prove that Eq. (36) experiences two saddle-node bifurcations,57 leading to a typical S-shaped bifurcation diagram; see Fig. 2(a).
Panel (a): The S-shaped bifurcation diagram of the Stommel–Cessi model (36) as the parameter F is varied, shown here for the parameter values listed in Table I. The two branches of locally stable steady states are marked by solid black curves, and the branch of unstable steady states is marked by the dashed black curve. The two vertical gray lines mark the F-values at which the saddle-node bifurcations occur. Panel (b): Parameterization by the OPM, given by (43), where F is fixed at the value Fref marked by the vertical green line in panel (a) and a fixed noise strength parameter is used in (40), which determines the forcing amplitudes in (41). Panel (c): The normalized parameterization defect as the backward integration time is varied. Panel (d): The performance of the OPM reduced Eq. (45) in reproducing the noise-induced transitions experienced by the solution of the stochastically forced Stommel–Cessi model (40). Once (45) is solved, the approximation of the original variable is constructed using (46).
S-shaped bifurcation diagrams occur in oceanic models that go well beyond Eq. (36), such as those based on the hydrostatic primitive equations or Boussinesq equations; see, e.g., Refs. 58–61. More generally, S-shaped bifurcation diagrams and more complex multiplicity diagrams are known to occur in a broad class of nonlinear problems62–64 that include energy balance climate models,65–68 population dynamics models,69,70 vegetation pattern models,71,72 combustion models,73–76 and many other fields.77
The very presence of such S-shaped bifurcation diagrams provides the skeleton for tipping point phenomena to take place when such models are subject to the appropriate stochastic disturbances and parameter drift, causing the system to “tip” or move away from one branch of attractors to another; see Refs. 33 and 34. From an observational viewpoint, the study of tipping points has gained considerable attention due to their role in climate change, as a few components of the climate system (e.g., the Amazon forest and the AMOC) have been identified as candidates for experiencing such critical transitions if forced beyond the point of no return.53,54,78
Whatever the context, tipping phenomena are due to a subtle interplay between nonlinearity, slow parameter drift, and fast disturbances. Better understanding how these interactions cooperate to produce a tipping phenomenon could help improve the design of early warning signals. Although we will not focus on this latter point per se in this study, we show below, on the Cessi model, that the OPM framework provides useful insights in that perspective, by demonstrating its ability to derive reduced models that accurately predict the crossing of a tipping point; see Sec. IV C.
B. OPM results for a fixed F value: Noise-induced transitions
We report in this section on the skills of the OPM framework in deriving accurate reduced models that reproduce the noise-induced transitions experienced by the Cessi model (36) when subject to fast disturbances for a fixed value of the forcing F. The training of the OPM performed here serves as a basis for the harder tipping point prediction problem dealt with in Sec. IV C, when F is allowed to drift slowly through the critical value at which the lower branch of steady states experiences a saddle-node bifurcation manifested by a turning point; see Fig. 2(a) again. Recall that the other critical value corresponds to the turning point experienced by the upper branch.
Reformulation of the Cessi model (36). The 1D OPM reduced equation is obtained as follows. First, we fix an arbitrary value of F, denoted by Fref and marked by the green vertical dashed line in Fig. 2(a). The system (36) is then rewritten for the fluctuation variables about the steady state of Eq. (36) on the lower branch at F = Fref.
The eigenvalues of the linearized operator at this steady state have negative real parts since the steady state is locally stable. We assume that this matrix has two distinct real eigenvalues, which turns out to be the case for a broad range of parameter regimes. As in Sec. III, the spectral elements of the matrix and of its adjoint are denoted accordingly, and the eigenmodes are normalized so as to form a bi-orthonormal family with the adjoint eigenmodes.
Derivation of the OPM reduced equation. We can now use the formulas of Sec. III C to obtain an explicit variational parameterization of the most stable direction in terms of the least stable one.
For Eq. (41), both of the forcing terms appearing in the two equations of the corresponding BF system (27) are stochastic (and thus time-dependent). To simplify, we omit the stochastic forcing in Eq. (27a) and work with the OPM formula given by (31), which, as shown below, is sufficient for deriving an efficient reduced system.
Numerical results. The OPM reduced Eq. (45) is able to reproduce the dynamics of the resolved variable for a wide range of parameter regimes. We show in Fig. 2 a typical example of skills for the parameter values listed in Table I. Since Fref lies between the two saddle-node values (see Table I), the Cessi model (36) admits three steady states, among which two are locally stable (lower and upper branches) and the other one is unstable (middle branch); see Fig. 2(a) again. For this choice of Fref, the steady state corresponding to the lower branch is computed numerically, and the two eigenvalues of the linearized operator at this steady state are real and negative.
Parameter values.
| σ | μ |  |  | Fref |
|---|---|---|---|---|
| 0.1 | 6.2 | 0.8513 | 0.8821 | 0.855 |
The offline trajectory used as input for training to find the optimal backward integration time is taken as driven by an arbitrary Brownian path from Eq. (41) over a training time interval. The resulting offline skills of the OPM are shown as the blue curve in Fig. 2(b), while the original targeted time series to parameterize is shown in black. The optimal backward integration time that minimizes the normalized parameterization defect for the considered regime is identified in Fig. 2(c). One observes that the OPM captures, on average, the fluctuations of the unresolved variable; compare the blue curve with the black curve.
The skills of the corresponding OPM reduced Eq. (45) are shown in Fig. 2(d), after converting back to the original variable using (46). The results are shown out of sample, i.e., for another noise path and over a time interval different from the one used for training. The 1D OPM reduced Eq. (45) is able to capture the multiple noise-induced transitions occurring across the two stable equilibria (marked by the cyan lines); compare the red and black curves in Fig. 2(d). Both the Cessi model (40) and the OPM reduced equation are numerically integrated using the Euler–Maruyama scheme with a small time step.
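For reference, a generic Euler–Maruyama integrator of the kind used here is sketched below. The double-well drift in the usage example is purely illustrative and is not the actual reduced Eq. (45); it only mimics bistable, noise-induced switching.

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, dt, n_steps, rng):
    """Generic Euler-Maruyama integrator for a scalar SDE with additive noise.

    drift : callable (x, t) -> deterministic tendency
    sigma : noise amplitude
    """
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = x[k] + dt * drift(x[k], k * dt) + sigma * dW
    return x

# Hypothetical example: noise-induced transitions in a double-well potential,
# mimicking bistability (not the actual Stommel-Cessi reduced equation).
rng = np.random.default_rng(1)
traj = euler_maruyama(lambda x, t: x - x**3, sigma=0.3, x0=-1.0,
                      dt=1e-3, n_steps=200_000, rng=rng)
```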
Note that we chose here the numerical value of Fref to be closer to one saddle-node value than to the other in order to make the tipping point prediction experiment conducted in Sec. IV C more challenging. There, we indeed train the OPM at F = Fref while aiming at predicting the tipping phenomenon as F drifts slowly through the other saddle-node value (located thus further away) as time evolves.
C. Predicting the crossing of tipping points
We now set the drift and noise parameters for this experiment, while the remaining parameters are kept as in Table I. Note that, compared with the results shown in Fig. 2, the noise level is here substantially reduced to focus on the tipping phenomenon occurring in the vicinity of the turning point. The goal is thus to compare the behavior of the solution of the forced Cessi model (58) with that of the solution of the (forced) OPM reduced Eq. (59) as F crosses the critical value.
The tipping phenomenon as predicted by the OPM reduced Eq. (59) (red curve) compared with that experienced by the full Cessi model (58) (black curve). The OPM is trained at F = Fref, as marked by the vertical green line. Also shown in yellow is the result obtained from the slow manifold reduced Eq. (60).
The striking prediction results of Fig. 3 are shown for one noise realization. We now explore the accuracy in predicting such a tipping phenomenon by the OPM reduced Eq. (59) when the noise realization is varied.
To do so, we estimate the statistical distribution of the F-value at which tipping takes place, for both the Cessi model (58) and Eq. (59). These distributions are estimated as follows. Consider the resolved component of the steady state at which the saddle-node bifurcation occurs on the lower branch. As noise is turned on and F evolves slowly through the critical value, the solution path to Eq. (58) increases on average while fluctuating around this component's value (due to small noise) before shooting off to the upper branch as one nears the turning point. During this process, there is a time instant such that, for all later times, the solution stays above this level. We denote the F-value corresponding to this time instant as the tipping value, according to (57). Whatever the noise realization, the solution to the OPM reduced Eq. (59) experiences the same phenomenon, leading thus to its own tipping value for a given noise realization. As shown by the histograms in Fig. 4, the distribution of tipping values predicted by the OPM reduced Eq. (59) (blue curve) follows closely that computed from the full system (58) (orange bars). Thus, the OPM reduced Eq. (59) is not only qualitatively able to reproduce the passage through a tipping point (as shown in Fig. 3) but is also able to accurately predict the critical F-value (or time instant) at which the tipping phenomenon takes place, with an overall success rate over 99%.
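The detection of the tipping value for one noise realization, as described above, can be sketched as follows; the names and the drift law F_of_t are placeholders, not the paper's Eq. (57). Repeating this over many noise realizations and histogramming the returned values yields distributions like those of Fig. 4.

```python
import numpy as np

def tipping_threshold(t, y, y_crit, F_of_t):
    """Estimate the F-value at which tipping occurs for one noise realization:
    find the first time after which the trajectory stays above the critical
    level y_crit, then map that time to F through the slow drift F_of_t.

    t, y   : 1D arrays of times and trajectory values
    F_of_t : callable giving the drifting parameter value at time t
    """
    below_idx = np.nonzero(y <= y_crit)[0]
    if below_idx.size == 0:
        i_star = 0                                  # above y_crit from the start
    else:
        i_star = min(below_idx[-1] + 1, len(t) - 1)  # first index of the final excursion
    return F_of_t(t[i_star])
```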
Statistical distribution of the threshold value of F at which the tipping phenomenon occurs for both the full Cessi model (58) (orange bars) and the OPM reduced Eq. (59) (blue curve). The histograms are computed from an ensemble of arbitrarily fixed noise realizations.
V. PREDICTING HIGHER-ORDER CRITICAL TRANSITIONS
A. Problem formulation
Prediction experiments for the RB system (67). The parameter values rD (< r*) correspond to the allowable upper bound for which the mean-state dependence on r is estimated, in view of extrapolation at r = rP. In each experiment, Ir = [r0, rD] with rD − r0 = 2 × 10−2, corresponding to the segments shown in orange in Fig. 5(d). The critical value r* indicates the parameter value at which the transition occurs, depending on the experiment. The parameter value rP > r* at which the prediction is sought is also given.
| | mc | rD | r* | rP |
|---|---|---|---|---|
| Experiment I (period-doubling) | 3 | 13.91 | 13.99 | 14 |
| Experiment II (chaos) | 5 | 14.10 | 14.17 | 14.22 |
Our goal is to assess the ability of our variational parameterization framework to predict higher-order transitions arising in (61) by training the OPM only with data prior to the transition we aim at predicting. The Rayleigh number r is our control parameter. As it increases, Eq. (67) undergoes several critical transitions/bifurcations, leading to chaos via a period-doubling cascade.80 We focus on the prediction of two transition scenarios beyond the Hopf bifurcation: (I) the period-doubling bifurcation and (II) the transition from a period-doubling regime to chaos. Noteworthy are the failures that standard invariant manifold theory or AIMs encounter in the prediction of these transitions here; see Appendix D.
B. Predicting higher-order transitions via OPM: Method
Thus, we aim at predicting for Eq. (67) two types of transition: (I) the period-doubling bifurcation and (II) the transition from period-doubling to chaos, referred to as Experiments I and II, respectively. For each of these experiments, we are given a reduced state space of dimension mc as indicated in Table II, depending on the parameter regime. The eigenmodes are here ranked by the real part of the corresponding eigenvalues, the first one corresponding to the eigenvalue with the largest real part. The goal is to derive an OPM reduced system (34) able to predict such transitions. The challenge lies in the optimization stage of the OPM: due to the prediction constraint, one is prevented from using data from the full model at the parameter value rP at which one desires to predict the dynamics. Only data prior to the critical value r* at which the concerned transition, either period-doubling or chaos, takes place are allowed here.
Due to the dependence on r of the mean state, as well as of the interaction coefficients and forcing terms, particular attention has to be paid to this dependence. Indeed, recall that the parameterizations given by the explicit formula (28) depend heavily here on the spectral elements of the linearized operator at the mean state, and so does their optimization. Since the goal is to eventually dispose of an OPM reduced system able to predict the dynamical behavior at r = rP, one cannot rely on data from the full model at rP, and thus one cannot exploit, in particular, the knowledge of the mean state at rP. We are thus forced to estimate the latter for our purpose.
To do so, we estimate the dependence on r of the mean state over an interval Ir = [r0, rD] with rD < r* (see Table II) and use this estimation to extrapolate the value of the mean state at r = rP. For both experiments, it turned out that a linear extrapolation is sufficient. This extrapolated value allows us to compute the spectral elements of the operator given by (65) in which the extrapolated mean state replaces the genuine one. Obviously, the better the approximation of the mean state, the better the approximation of the linearized operator's spectrum, along with the corresponding eigenspaces.
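This extrapolation step can be sketched as a componentwise polynomial (here linear) fit of the mean state over the training window, evaluated at r = rP; array names are illustrative.

```python
import numpy as np

def extrapolate_mean_state(r_train, Ybar_train, r_pred, degree=1):
    """Componentwise polynomial extrapolation of the mean state as a function
    of the control parameter r.

    r_train    : (k,) array of parameter values r in [r0, rD]
    Ybar_train : (k, d) array, mean state of the full model at each r
    r_pred     : target parameter value rP > r* at which to extrapolate
    """
    Ybar_pred = np.empty(Ybar_train.shape[1])
    for j in range(Ybar_train.shape[1]):
        coeffs = np.polyfit(r_train, Ybar_train[:, j], degree)
        Ybar_pred[j] = np.polyval(coeffs, r_pred)
    return Ybar_pred
```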
Prediction of transitions: OPM prediction results. Panel (d): The black curve shows the dependence on r of the norm of the mean state vector. The vertical dashed lines mark the onset of the period-doubling bifurcation and of chaos, respectively. The two marked points correspond to the r-values at which the two prediction experiments are conducted; see Table II. The orange segments that precede these points denote the parameter intervals over which data are used to build the OPM reduced systems for predicting the dynamics at these r-values; see Steps 1–4. The (normalized) parameterization defects are shown in panels (c) and (e) for the training data of Experiments I and II, respectively. Panels (a) and (b): Global attractor (in lagged coordinates) and PSDs for three selected components at the two prediction values rP, respectively. Black curves are from the full system (61), and red ones from the OPM reduced system (70).
We can then summarize our approach for predicting higher-order transitions via OPM reduced systems as follows:
Step 1: Extrapolation of the mean state at r = rP (the parameter value at which one desires to predict the dynamics).

Step 2: Computation of the spectral elements of the linearized operator at r = rP by replacing the genuine mean state by its extrapolated value in (65).

Step 3: Training of the OPM using the spectral elements of Step 2 and data of the full model for r in the training interval Ir.

Step 4: Run of the OPM reduced system (70) to predict the dynamics at r = rP.
We mention that the minimization of certain parameterization defects may require special care, as for one of the modes in Experiment II. Due to the presence of nearby local minima [see the red curve in Fig. 5(e)], the analysis of the optimal backward integration time to select for calibrating an optimal parameterization of this mode is more subtle and actually exploits a complementary metric known as the parameterization correlation; see Sec. V D.
Obviously, the accuracy in approximating the genuine mean state by its extrapolation is a determining factor in the transition prediction procedure described in Steps 1–4 above. Here, the relative errors in this approximation are small for both Experiments I and II. For the latter case, although the parameter dependence is rough beyond r* [see Fig. 5(d)], there are no abrupt local variations of relatively large magnitude such as those identified for other nonlinear systems.18,81 Systems for which a linear response to parameter variation is a valid assumption82,83 thus seemingly constitute a favorable ground for applying the proposed transition prediction procedure.
C. Prediction of higher-order transitions by OPM reduced systems
As summarized in Figs. 5(a) and 5(b), for each prediction experiment of Table II, the OPM reduced system (70) not only successfully predicts the occurrence of the targeted transition at the corresponding rP [the two marked points in Fig. 5(d)] but also approximates with good accuracy the embedded (global) attractor as well as key statistics of the dynamics such as the power spectral density (PSD). Note that we choose the reduced dimension to be mc = 3 for Experiment I (period-doubling) and mc = 5 for Experiment II (transition to chaos). For Experiment I, the global minimizers of the defects given by (69) are retained to build up the corresponding OPM. In Experiment II, all the global minimizers are also retained, except for one mode, for which the second minimizer is used in the construction of the OPM.
As a baseline, the skills of the OPM reduced system (70) are compared with those of reduced systems employing parameterizations from invariant manifold theory, such as (6), or from inertial manifold theory, such as (30); see Theorem 2 in Ref. 12 and Remark III.2. The details and results are discussed in Appendix D. The main message is that, compared with the OPM reduced system (70), the reduced systems based on these traditional invariant/inertial manifold theories fail to predict the correct dynamics. A main player in this failure lies in the inability of these standard parameterizations to accurately approximate small-energy variables that are dynamically important; see Appendix D. The OPM parameterization, by its variational nature, enables us to fix this over-parameterization issue here.
D. Distinguishing between close local minima: Parameterization correlation analysis
Since the parameterization's coefficients depend nonlinearly on the backward integration times [see Eqs. (28) and (29) and (B1)], the parameterization defects defined in (69) are also highly nonlinear and may not be convex in these variables, as shown for certain variables in panels (c) and (e) of Fig. 5. A minimization algorithm that most often reaches their global minima is nevertheless detailed in Ref. 12, Appendix A, and is not limited to the RB system analyzed here. In certain rare occasions, a local minimum may be an acceptable halting point, with an online performance slightly improved compared with that of the global minimum. In such a case, one discriminates between a local minimum and the global one by inspecting another metric offline: the correlation angle, which measures essentially the collinearity between the actual high-mode dynamics and its parameterization. Here, such a situation occurs for Experiment II; see Fig. 5(e).
Following Ref. 12, Sec. 3.1.2, we thus recall a simple criterion to assist the selection of an OPM when the parameterization defect displays more than one local minimum and the corresponding local minimal values are close to each other.
We illustrate this point on the defect shown in Fig. 5(e). This defect exhibits two close minima, a global one and a nearby local one, occurring at two distinct values of the backward integration time. Thus, the parameterization defect alone does not provide a satisfactory discriminatory diagnosis between these two minimizers. On the contrary, the parameterization correlation shown in Fig. 6 allows for diagnosing more clearly that the OPM associated with the local minimizer has a neat advantage compared with that associated with the global minimizer.
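A minimal sketch of a collinearity metric of this type is given below. The exact definition used in the paper is its Eq. (72); the formula here, a plain instantaneous cosine between the high-mode state and its parameterization, should be read as an assumed illustrative variant with placeholder names.

```python
import numpy as np

def parameterization_correlation(Y, Y_param, eps=1e-12):
    """Illustrative collinearity metric between the actual high-mode
    trajectory Y(t) and its parameterization Y_param(t).

    Y, Y_param : (T, m_s) real arrays
    Returns the instantaneous cosine of the angle between the two vectors,
    close to 1 when phase relationships are well preserved.
    """
    num = np.sum(Y * Y_param, axis=1)
    den = np.linalg.norm(Y, axis=1) * np.linalg.norm(Y_param, axis=1) + eps
    return num / den
```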
Parameterization correlation defined by (72) for the OPM in Experiment II (transition to chaos). The metric is here computed by using the full model solutions in (69), with the backward integration time set to the global minimizer (green curve) and to the nearby local minimizer (red curve), respectively; see Fig. 5(e).
As explained in Ref. 12, Sec. 3.1.2, this parameterization correlation criterion is also useful to assist the selection of the dimension of the reduced state space. Indeed, once an OPM has been determined, the dimension of the reduced system should be chosen so that the parameterization correlation is sufficiently close to unity as measured, for instance, in an appropriate norm over the training time window. The basic idea is that one should not only parameterize properly the statistical effects of the neglected variables but also avoid losing their phase relationships with the resolved variables.12,84 For instance, for predicting the transition to chaos, we observe that an OPM comes with a parameterization correlation much closer to unity for the retained dimension mc = 5 than for a lower dimension (not shown).
VI. CONCLUDING REMARKS
In this article, we have described a general framework, based on the OPM approach introduced in Ref. 12 for autonomous systems, to derive effective (OPM) reduced systems able to predict either higher-order transitions caused by parameter regime shifts or tipping phenomena caused by stochastic disturbances and slow parameter drift. In each case, the OPMs are sought as continuous deformations of classical invariant manifolds to handle parameter regimes away from bifurcation onsets while keeping the reduced state space relatively low-dimensional. The underlying OPM parameterizations are derived analytically from the model’s equations and constructed as explicit solutions to auxiliary BF systems such as Eq. (11), whose backward integration time is optimized per unresolved mode. This optimization involves the minimization of natural cost functionals tailored to the full dynamics at parameter values prior to the transitions taking place; see (4). In each case—either for the prediction of higher-order transitions or of tipping points—we presented compelling evidence of the success of the OPM approach in addressing such extrapolation problems, typically difficult to handle for data-driven reduction methods.
As reviewed in Sec. III, BF systems such as Eq. (11) allow for drawing insightful relationships with the approximation theory of classical invariant/inertial manifolds (see Ref. 12, Sec. 2, and Refs. 31, 40, and 85) when the backward integration time in Eq. (11) is sent to infinity; see Theorem III.1. Once invariant/inertial manifolds fail to provide legitimate parameterizations, the optimization of the backward integration time may lead to local minima of the parameterization defect at finite values, which, if sufficiently small, give access to skillful parameterizations to predict, e.g., higher-order transitions. This way, as illustrated in Sec. V (Experiment II), the OPM approach allows us to bypass the well-known stumbling blocks tied to the presence of small spectral gaps, such as those encountered in inertial manifold theory,25 and tied to the accurate parameterization of dynamically important small-energy variables; see also Remark III.1, Appendix D, and Ref. 12, Sec. 6.
Understanding the dynamical mechanisms at play behind a tipping phenomenon and predicting its occurrence are of utmost importance, but this task is hindered by the often high-dimensional nature of the underlying physical system. Devising accurate, analytically derived reduced models able to predict tipping phenomena in complex systems is thus of prime importance to serve this understanding. The OPM results of Sec. IV indicate great promise for the OPM approach to tackle this important task for more complex systems. Other reduction methods to handle the prediction of tipping point dynamics have been proposed recently in the literature, but mainly for mutualistic networks.86–88 The OPM formulas presented here are not limited to this kind of network and can be directly applied to a broad class of spatially extended systems governed by (stochastic) PDEs, or to nonlinear time-delay systems by adopting the framework of Ref. 89 for the latter to devise center manifold parameterizations and their OPM generalization; see Refs. 90 and 91.
The OPM approach could also be informative to design, for such systems nearing a tipping point, early warning signal (EWS) indicators from multivariate time series. The extension of EWS techniques to multivariate data is an active field of research, with methods ranging from empirical orthogonal function reduction92 to methods exploiting the system’s Jacobian matrix and relationships with the cross-correlation matrix93 or exploiting the detection of the spatial location of “hotspots” of stability loss.94,95 By their nonlinear modus operandi for reduction, OPM parameterizations identify a subgroup of essential variables to characterize a tipping phenomenon, which, in turn, could be very useful to construct the relevant observables of the system for the design of useful EWS indicators.
Thus, since the examples of Secs. IV and V are representative of more general problems of prime importance, the successes demonstrated by the OPM approach on these examples invite further investigations for more complex systems.
Similarly, we would like to point out another important aspect in that perspective. For spatially extended systems, the modes typically involve wavenumbers that can help interpret certain primary physical patterns. Formulas such as (28) and (31) arising in OPM parameterizations involve a rebalancing of the interactions among such modes by means of coefficients depending on the backward integration times; see Remark III.1. Thus, an OPM parameterization, when skillful in deriving efficient reduced systems, may provide useful insights into explaining emergent patterns due to certain nonlinear interactions of wavenumbers for regimes beyond the instability onset. For more complex systems than are dealt with here, it is known already that, near the instability onset of primary bifurcations, center manifold theory provides such physical insights; see, e.g., Refs. 96–102.
By the approach proposed here, relying on OPMs obtained as continuous deformations of invariant/center manifolds, one can thus track the interactions that survive or emerge between certain wavenumbers as one progresses through higher-order transitions. Such insights bring new elements to potentially explain the low-frequency variability of recurrent large-scale patterns typically observed, e.g., in oceanic models,103,104 offering at least new lines of thought for the dynamical system approach proposed in previous studies.104–107 In that perspective, the analytic expression (B1) of terms such as those in (28) also brings new elements to break down the nonlinear interactions between the forcing of certain modes compared with others. Formulas such as (B1) extend to the case of time-dependent or stochastic forcing of the coarse-scale modes, whose exact expressions will be communicated elsewhere. As for the case of noise-induced transitions reported in Fig. 2(d), these generalized formulas are expected to provide, in particular, new insights into regime shifts not involving the crossing of a bifurcation point but tied to other mechanisms such as slow–fast cyclic transitions or stochastic resonances.108
Yet another important practical aspect that deserves in-depth investigation is related to the robustness of the OPM approach subject to model errors. Such errors can arise from multiple sources, including, e.g., imperfections of the originally utilized high-fidelity model in capturing the true physics and noise contamination of the solution data used to train the OPM. For the examples of Secs. IV and V, since we train the OPM in a parameter regime prior to the occurrence of the concerned transitions, the utilized training data contain, de facto, model errors. The results reported in these sections show that the OPM is robust in providing effective reduced models subject to such model errors. Nevertheless, it would provide useful insights if one could systematically quantify the uncertainties of the OPM reduced models subject to various types of model errors. In that respect, several approaches could be beneficial for stochastic systems, such as those based on the information-theoretic framework of Ref. 109, the perturbation theory of ergodic Markov chains and linear response theory,110 as well as methods based on data-assimilation techniques.111
ACKNOWLEDGMENTS
We are grateful to the reviewers for their constructive comments, which helped enrich the discussion and presentation. This work has been partially supported by the Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) Grant (No. N00014-20-1-2023), the National Science Foundation Grant (No. DMS-2108856), and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 810370). We also acknowledge the computational resources provided by Advanced Research Computing at Virginia Tech for the results shown in Fig. 4.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Mickaël D. Chekroun: Conceptualization (equal); Formal analysis (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Writing – original draft (equal). Honghu Liu: Conceptualization (equal); Formal analysis (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Writing – original draft (equal). James C. McWilliams: Conceptualization (equal); Formal analysis (equal); Funding acquisition (equal); Investigation (supporting); Methodology (supporting); Writing – original draft (supporting).
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
APPENDIX A: PROOF OF LEMMA III.1
APPENDIX B: THE LOW-ORDER TERM IN THE OPM PARAMETERIZATION
For the application to the Rayleigh–Bénard system considered in Sec. V, the forcing term is produced after rewriting the original unforced system in terms of the fluctuation variable with respect to the mean state. For this problem, the eigenvalues and the interaction coefficients both depend on the mean state.
APPENDIX C: NUMERICAL ASPECTS
In the numerical experiments of Sec. V, the full RB system (61) as well as the OPM reduced system (70) are numerically integrated using a standard fourth-order Runge–Kutta (RK4) method with a small time-step size. Note that, since the eigenvalues of the linearized operator are typically complex-valued, some care is needed when integrating (70). Indeed, since complex eigenmodes must appear in complex conjugate pairs, the corresponding components of the solution of (70) also form complex conjugate pairs. To prevent round-off errors from disrupting the underlying complex conjugacy, after each RK4 time step, we enforce complex conjugacy as follows. Assuming that two components form a conjugate pair and that, after an RK4 step, their real and imaginary parts are only approximately conjugate due to round-off, we redefine them so that they share the averaged real part and opposite averaged imaginary parts. For each component corresponding to a real eigenmode, after each RK4 time step, we simply redefine it to be its real part and ignore the small imaginary part that may also arise from round-off errors. The same numerical procedure is adopted to integrate the reduced systems obtained from the invariant manifold reduction as well as the FMT parameterization used in Appendix D.
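The post-step cleanup described above can be sketched as follows; the index bookkeeping (conj_pairs, real_modes) is illustrative and not the paper's notation.

```python
import numpy as np

def enforce_conjugacy(y, conj_pairs, real_modes):
    """Re-impose complex conjugacy after an RK4 step.

    y          : complex array of mode amplitudes after one RK4 step
    conj_pairs : list of index pairs (i, j) whose modes are complex conjugates
    real_modes : list of indices of real eigenmodes
    """
    y = y.copy()
    for i, j in conj_pairs:
        avg_re = 0.5 * (y[i].real + y[j].real)     # shared real part
        avg_im = 0.5 * (y[i].imag - y[j].imag)     # shared imaginary magnitude
        y[i] = avg_re + 1j * avg_im
        y[j] = avg_re - 1j * avg_im
    for k in real_modes:
        y[k] = y[k].real                           # drop round-off imaginary part
    return y
```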
APPENDIX D: FAILURE FROM STANDARD MANIFOLD PARAMETERIZATIONS
In this section, we briefly report on the reduction skills, for the chaotic case of Sec. V C, achieved through application of invariant manifold theory or of standard formulas used in inertial manifold theory. Invariant manifold theory is usually applied near a steady state. To work on favorable ground for these theories, we first choose the steady state that is closest to the mean state at the parameter value of interest, so that the chaotic dynamics that we aim to approximate via a reduced system is located near this steady state.
The predicted orbits obtained from reduced systems built either on the IM parameterization (D3) or on the FMT one (D5) are shown in the top row of Fig. 7 as blue and magenta curves, respectively. Both reduced systems lead to periodic dynamics and thus fail dramatically in predicting the chaotic dynamics. The small spectral gap issue mentioned in Remark III.1 plays a role in explaining this failure, but it is not the only culprit. Another fundamental issue for the closure of chaotic dynamics lies in the difficulty of providing accurate parameterizations of small-energy but dynamically important variables; see, e.g., Ref. 12, Sec. 6. This issue is encountered here, as some of the variables to parameterize for Experiment II contain only a small fraction of the total energy.
The IM/FMT reduced systems are derived from Eq. (64) when the fluctuations are taken either with respect to the mean state at the parameter value of Experiment II [marked in Fig. 5(d)] (bottom row) or with respect to the closest steady state to this mean state (top row). Whatever the strategy retained, the IM/FMT reduced systems fail to predict the proper chaotic dynamics, predicting instead periodic dynamics for the same dimension of the reduced state space as used for the OPM results shown in Fig. 5(b).
Replacing the closest steady state by the genuine mean state does not fix this issue, as some of the variables to parameterize still contain only a small fraction of the total energy. The IM parameterization (D3) and the FMT one (D5), whose coefficients are now determined from the spectral elements of the linearized operator at the mean state in (D1), still fail to parameterize accurately such small-energy variables; see the bottom row of Fig. 7.112 By comparison, the OPM succeeds in parameterizing accurately these small-energy variables. For Experiment I, inaccurate predictions are also observed from the reduced systems built either on the IM parameterization (D3) or on the FMT one (D5) (not shown).