Coupled, parametric oscillators are often studied in applied biology, physics, fluids, and many other disciplines. In this paper, we study a parametrically driven, coupled oscillator system in which the individual oscillators are subjected to varying frequency and phase, with a focus on the influence of the damping and coupling parameters away from parametric resonance frequencies. In particular, we study the long-term statistics of the oscillator system’s trajectories and stability. We present a novel, robust, and computationally efficient method, known as the auxiliary function method for long-time averages, and we pair this method with classical perturbative-asymptotic analysis to corroborate its results. These paired methods are then used to compute the regions of stability for a coupled oscillator system. The objective is to explore the influence of higher-order coupling effects on the stability region across a broad range of modulation frequencies, including frequencies away from parametric resonances. We show that both simplified and more general asymptotic methods can be dangerously un-conservative in predicting the true regions of stability due to higher-order effects caused by coupling parameters. The differences between the true stability region and the approximate stability region can occur at physically relevant parameter values in regions away from parametric resonance. As an alternative to asymptotic methods, we show that the auxiliary function method for long-time averages is an efficient and robust means of computing true regions of stability across all possible initial conditions.

## I. INTRODUCTION

The study of coupled oscillators is ubiquitous in both applied and theoretical fields for their wide range of modeling capabilities,^{1–3} as well as their rich mathematical structure.^{4,5} In applied fields, qualitative analysis of an oscillator system’s trajectories or long-term statistics in the presence of varying parameters or initial conditions is of primary importance.^{6,7} The stability of coupled oscillators is also of great interest in many fields, including engineering, quantum optics, and bio-chemistry.^{8–10} Moreover, in theoretical work, the presence of parametric terms prohibits classical linear analysis, and the development of novel analysis tools is of ongoing interest. In this paper, we present a novel theoretical method to study the long-term statistics of a parametrically driven, coupled oscillator system.

To motivate this study, we provide several key applications of coupled oscillators. In the biological sciences, coupled oscillators have been employed extensively in the study of circadian rhythms, where individual circadian oscillations are generated in the suprachiasmatic nuclei by an internal regulatory network.^{11,12} A general interest is to establish both when and how these individual oscillations become synchronized or fall out of synchrony,^{13} and key quantities, such as measures of synchrony, are realized as time averages of the underlying oscillators’ correlations.^{12,13} Hence, computing parameter- and initial-condition-dependent time averages is of pivotal importance and frequently challenging. Coupled oscillators have also been explored in the neuroscience and neural network communities. In neuroscience, for example, the Kuramoto model has received widespread attention. Past and ongoing work focuses on the study of long-time average frequencies between individual oscillators and investigates how these individual oscillators become globally phase locked.^{14,15} Furthermore, in the study of neural networks, data-driven models attempt to construct potentially parametric dynamical models via data sampling.^{16–18} This requires a potentially large data set for the model to be trained on, and one’s choice of data and assumptions about the form of the model have far-reaching consequences for the stability of the underlying dynamics and the accuracy of model predictions. Additionally, coupled oscillators have received widespread attention in the fluids literature. For instance, extensive work has been done on the phenomenon of phase lock-in for vortex-induced vibrations.^{19} The general interest is to establish which range of frequencies induces unstable, growing amplitudes in the underlying dynamics. Finally, there are a variety of applications in physics.
Networks of optical parametric oscillators are being investigated as a way to simulate classical Ising (i.e., spin-$\frac{1}{2}$) systems. Such simulators have exciting potential applications in quantum computation.^{20} In addition, the behavior of coupled parametric oscillators has been used to study the response of microelectromechanical systems (MEMS).^{21} These examples provide only a small glimpse of the wide range of applications that these coupled oscillator systems have across a variety of scientific disciplines.

Frequently, a primary interest is in studying the effects of the oscillators’ coupling strength, the type of coupling, the phase differences between the oscillators, and the magnitude, frequency, and phase of the driving force on the long-term statistics of the system’s solutions.^{22–24} Although most real-world applications involve oscillators with differing phases or differing frequencies, much of the analysis that appears in the literature has ignored both of these essential features while also ignoring the effects of coupling strength, especially at frequencies away from parametric resonance.^{20}

Ignoring frequencies away from parametric resonance is natural if one is interested in studying phase-locking or if one has energy harvesting applications in mind.^{25–28} However, real-world systems typically operate over a broad range of parametric modulation frequencies, and many engineering and biological applications involving coupled oscillator systems are designed to operate away from parametric resonance to avoid excessive vibration, noise, and accelerated fatigue.^{29–31} The reason these essential features have primarily been ignored is almost exclusively the problem’s difficulty and the limitations of perturbative asymptotic methods.^{32}

From a computational perspective, direct numerical simulation (DNS) is incredibly expensive. Moreover, one is limited to the study of trajectories that are numerically stable, which crucially depends on one’s choice of initial conditions for parametric oscillators. Frequently, one is interested in unstable solutions, in particular, for the “control of chaos.”^{33} These concerns naturally pose the problems of choosing which initial conditions to study and of finding a robust method to determine whether one’s findings are generically sensitive to initial data.

From a theoretical perspective, past and ongoing research often investigates stability regions within the parameter phase space,^{22,23,34} and without the knowledge of exact solutions, approximate solutions are frequently derived via perturbative, asymptotic methods in the form of finitely truncated solution expansions, such as Fourier or eigenfunction expansions.^{35–39} The principal idea behind these truncated expansions is that a sufficiently small number of terms in the full expansion should serve as a viable solution approximation and, hence, all other terms may be disregarded; hereafter, we refer to these disregarded terms as higher order terms. Deciding which terms to consider or disregard is often nontrivial, especially in coupled systems, and even given a fixed choice, the truncated solution approximation may still fail to be a viable approximation.^{40}

However, these asymptotic methods fail to capture the full range of the parameters in question by restricting attention to regions of phase space for which the modulation frequency equals, or is an integer multiple of, the resonance frequency.^{20,40} Moreover, as we demonstrate and explain in Sec. V, the effects of the oscillators’ coupling term can make the higher order terms non-negligible. As higher order terms are frequently neglected, asymptotic methods often cannot determine when or how these higher order terms become relevant.^{41}

In this paper, we demonstrate that our proposed method, which has come to be known as an auxiliary function method for long-time averages, is incredibly robust in addressing the difficulties previously explicated. On computational grounds, our method is computationally efficient and foregoes the need to manually check or randomly select various initial conditions, such as in Ref. 42. In fact, our method determines extremal time averages across *all* initial conditions. On theoretical grounds, our method is robust even in the face of highly parametric, coupled dynamics and does not require approximate expansions or restrictions to only small parameter values. Therefore, we can recover regions of stability and long-term statistics without needing to decide on an asymptotic, approximate solution method, decide which higher order terms can be neglected, or attempt to determine if the higher order terms become non-negligible due to coupling effects.

We remark that the auxiliary function method for long-time averages has already appeared in the literature. In particular, the method has proven to be useful in bounding heat transport for truncated models of Rayleigh–Bénard convection,^{43} bounding the mean squared amplitude over the Van der Pol limit cycle and bounding stationary ensembles in stochastic dynamical systems,^{44} and finding unstable periodic orbits for the purposes of dynamic control.^{45} We also note for the interested reader that there are extensions of this auxiliary function method to partial differential equation applications; see the work of Refs. 46–48 for explication.

All of the aforementioned literature is part of an expanding community concerned with computational, polynomial optimization.^{49–52} However, to the authors’ knowledge, this paper is the first time the auxiliary function method for long-time averages has been employed to study coupled, parametric oscillators as well as to investigate the legitimacy of asymptotic stability analysis; this is our primary novel contribution.

Our objective is to study a parametrically driven, coupled oscillator system with the auxiliary function method for long-time averages. We demonstrate the robustness of our auxiliary function method in computing essential long-time averages of the system’s trajectories, and we also demonstrate our method’s ability to determine regions of stability within the parameter phase space. We will also compare our results against the results predicted by perturbative asymptotic analysis. In particular, we investigate the effects of coupling terms on the system’s region of stability and the legitimacy of asymptotic methods in the presence of these coupling effects across a broad range of physically relevant modulation frequencies at and away from parametric resonance.

## II. AUXILIARY FUNCTION METHOD FOR LONG-TIME AVERAGES

To introduce the auxiliary function method for long-time averages, we focus on determining upper bounds for time averages of functions of the dynamical variable for autonomous ordinary differential equations (ODEs).

Consider $x(t) \in \mathbb{R}^d$ satisfying

$$\dot{x} = f(x) \tag{1}$$

for $f : \mathbb{R}^d \to \mathbb{R}^d$ a continuously differentiable vector field. When there is no confusion, we denote $x_i(t)$ and $f_i(x)$ as the $i$th component of the vectors $x$ and $f(x)$, respectively, and we also make use of the notation $\dot{x} := \frac{d}{dt}x$.

For the purposes of the computation of long-term statistics, it becomes natural to ask which trajectories of Eq. (1) realize the optimal time average and their corresponding averaged values. For a general quantity of interest $\Phi(x)$, we define its long-time average along the trajectory $x(t)$ with $x(0) = x_0$ by

$$\bar{\Phi}(x_0) := \limsup_{T \to \infty} \frac{1}{T}\int_0^T \Phi(x(t))\, dt. \tag{2}$$

As the underlying dynamics are of physical relevance, it is natural to assume that all trajectories $x(t)$ lie in a compact subset $B$ of the phase space $\mathbb{R}^d$. We are interested in the *maximal long-time average* among all trajectories (eventually) remaining in $B$, i.e.,

$$\bar{\Phi}^* := \sup_{x_0 \in B} \bar{\Phi}(x_0). \tag{3}$$

Upper bounds on averages can be deduced using the fact that time derivatives of bounded functions average to zero. This elementary observation follows from the fact that for every $V(x) \in C^1(B)$—the set of continuously differentiable functions on $B$—we have

$$\overline{\frac{d}{dt}V(x(t))} = \lim_{T \to \infty} \frac{V(x(T)) - V(x(0))}{T} = 0 \tag{4}$$

by the fundamental theorem of calculus. We hereafter refer to any such $V(x) \in C^1(B)$ as an “auxiliary” function. Note that Eq. (4) holds for any auxiliary function, so there is an infinite family of functions with the same time average as $\Phi(x)$. In particular,

$$\bar{\Phi}(x_0) = \overline{\Phi(x) + \frac{d}{dt}V(x(t))} = \overline{\Phi(x) + f(x)\cdot\nabla V(x)} \tag{5}$$

by the linearity of integration. Under the assumption that $B \subset \mathbb{R}^d$ is a compact region in phase space, one can deduce a chain of inequalities. Indeed, for any auxiliary function, one obtains an upper bound on $\bar{\Phi}$ by bounding the right-hand side of Eq. (5) point-wise on $B$,

$$\bar{\Phi}(x_0) \le \max_{x \in B}\left[\Phi(x) + f(x)\cdot\nabla V(x)\right], \tag{6}$$

since a point-wise maximum in time is certainly greater than the long-time average.

As the bound in Eq. (6) holds for any $x_0 \in B$, one can subsequently maximize the left-hand side over initial data $x_0$,

$$\bar{\Phi}^* \le \max_{x \in B}\left[\Phi(x) + f(x)\cdot\nabla V(x)\right]. \tag{7}$$

Moreover, since the upper bound in Eq. (7) holds for any $V(x) \in C^1(B)$, the sharpest upper bound on $\bar{\Phi}^*$ is then

$$\bar{\Phi}^* \le \inf_{V \in C^1(B)}\, \max_{x \in B}\left[\Phi(x) + f(x)\cdot\nabla V(x)\right], \tag{8}$$

but we note that minimizers need not exist. However, we also note that the right-hand side of Eq. (8) is convex in the auxiliary function $V$. Indeed, define the functional $F : C^1(B) \to \mathbb{R}$ as

$$F(V) := \max_{x \in B}\left[\Phi(x) + f(x)\cdot\nabla V(x)\right] \tag{9}$$

and insert a convex combination of auxiliary functions, $V_1$ and $V_2$, and apply the triangle inequality to deduce

$$F(\lambda V_1 + (1-\lambda)V_2) \le \lambda \max_{x \in B}\left[\Phi + f\cdot\nabla V_1\right] + (1-\lambda)\max_{x \in B}\left[\Phi + f\cdot\nabla V_2\right] = \lambda F(V_1) + (1-\lambda)F(V_2), \qquad \lambda \in [0,1],$$

where we have suppressed the $x$ argument for the functions $\Phi(\cdot)$, $f(\cdot)$, and $V(\cdot)$ for ease of notation.

The remarkable fact is that the inequality in Eq. (8) can actually be made an equality. The details of the proof can be found in Ref. 53, but the key ideas are that maximizing time averages can be realized as maximizing phase space averages against invariant measures via Birkhoff’s Ergodic Theorem; maximizing phase space averages over invariant probability measures can be realized as a Lagrange multiplier problem, where the auxiliary function $V$ can be interpreted as a Lagrange multiplier; finally, the resulting max-min problem can be written as a min-max problem using a variety of abstract min-max theorems over infinite dimensional spaces.

Note that the bound $\bar{\Phi}^* \le U$ follows from Eq. (7) if

$$U - \Phi(x) - f(x)\cdot\nabla V(x) \ge 0 \quad \text{for all } x \in B, \tag{10}$$

and if one restricts to polynomial dynamics, then the problem reduces to a convex optimization problem subject to a local, non-negative polynomial constraint.

When $B$ is a semi-algebraic set,^{54} one can augment the constraint in Eq. (10) with polynomials that define $B$; this is called the S-procedure. The non-negative polynomial constraint in Eq. (10) can then be replaced with a sum of squares^{55} constraint, which has an equivalent semi-definite program formulation. In the resulting semi-definite program, the polynomial degree of the auxiliary function $V$ is a degree of freedom, where increasing the degree of $V$ improves the sharpness of the computed bounds. When increasing the degree of $V$ no longer improves the bound, we say the bounds are sharp, at least to numerical precision; see Ref. 53 for discussions and reviews on sum of squares programs via semi-definite programming.
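As a self-contained toy illustration of such a sum of squares certificate (our own example, not drawn from the model studied in this paper), consider the one-dimensional system

$$\dot{x} = x - x^3, \qquad \Phi(x) = x^2, \qquad B = [-2, 2].$$

Taking the quadratic auxiliary function $V(x) = \tfrac{1}{2}x^2$ and the candidate bound $U = 1$ gives

$$U - \Phi(x) - f(x)V'(x) = 1 - x^2 - (x - x^3)\,x = 1 - 2x^2 + x^4 = (1 - x^2)^2,$$

which is manifestly a sum of squares, so the constraint in Eq. (10) holds and $\bar{\Phi}^* \le 1$. This bound is sharp: the equilibria $x = \pm 1$ attain $\bar{\Phi} = 1$, so increasing the degree of $V$ cannot improve it.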

One should note that the above formulation is completely dependent on the original dynamical system in Eq. (1) being autonomous. For our purposes, the model in question will have parametric trigonometric dependence. Therefore, the dynamics generically take on the form

$$\dot{x} = f\big(x, \cos(\omega t), \sin(\omega t)\big). \tag{11}$$

The canonical way of making systems of the form seen in Eq. (11) autonomous is to introduce an additional coordinate $x_{d+1} = t$ and extend the system dimension from $d$ to $d+1$. However, this method unfortunately introduces an unbounded dependent variable while retaining non-polynomial dependence on it. This problem can be circumvented by introducing *two* new dynamical variables satisfying the polynomial sub-system

$$\dot{x}_{d+1} = -\omega x_{d+2}, \qquad \dot{x}_{d+2} = \omega x_{d+1}, \qquad x_{d+1}^2 + x_{d+2}^2 = 1, \tag{12}$$

so that $x_{d+1} = \cos(\omega t)$ and $x_{d+2} = \sin(\omega t)$ along solutions.

One should note that equations of the form in Eq. (12) are frequently referred to as differential-algebraic systems—that is, a differential equation subject to an algebraic constraint. In order to write the system in Eq. (12) as a standard dynamical system of the form seen in Eq. (11), one needs to omit the origin as a potential solution either via assumption or other techniques.

Additionally, one can also formulate equivalent autonomous polynomial dynamics for both quasiperiodic and substantially more complex $2\pi/\omega$-periodic time dependencies in the vector field. Employing a new pair of dynamical variables like those in Eq. (12) for each independent frequency allows for quasiperiodic time dependence, at least for quasiperiodicity involving only a *finite* number of independent frequencies. Other $2\pi/\omega$-periodic time functions can be expressed as finite linear combinations of $\cos(n\omega t)$ and $\sin(n\omega t)$, each of which, in turn, is a finite polynomial combination of $\cos(\omega t)$ and $\sin(\omega t)$. The overall order of the dynamical system necessarily increases, but autonomous polynomial dynamics are still sufficient to capture the systems’ dynamics; see Ref. 56 for further details.
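As a concrete numerical sketch of this autonomization (our own illustration, using a single damped Mathieu-type oscillator rather than the coupled model of Sec. III), the non-autonomous equation $\ddot{x} + g\dot{x} + (1 + h\cos\omega t)x = 0$ and its polynomial, autonomous extension produce identical trajectories:

```python
import numpy as np

def rk4(f, u0, t_final, steps):
    # Fixed-step fourth-order Runge-Kutta; returns the state at t = t_final.
    u, t = np.asarray(u0, dtype=float), 0.0
    dt = t_final / steps
    for _ in range(steps):
        k1 = f(t, u)
        k2 = f(t + dt / 2, u + dt / 2 * k1)
        k3 = f(t + dt / 2, u + dt / 2 * k2)
        k4 = f(t + dt, u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return u

g, h, w = 0.1, 0.4, 2.0  # illustrative damping, drive intensity, and modulation frequency

def nonautonomous(t, u):
    # u = (x, xdot); explicit time dependence through cos(w t)
    x, y = u
    return np.array([y, -g * y - (1.0 + h * np.cos(w * t)) * x])

def autonomous(t, u):
    # u = (x, xdot, c, s) with c = cos(w t), s = sin(w t);
    # the vector field is now polynomial with no explicit time dependence
    x, y, c, s = u
    return np.array([y, -g * y - (1.0 + h * c) * x, -w * s, w * c])

uA = rk4(nonautonomous, [1.0, 0.0], 20.0, 20000)
uB = rk4(autonomous, [1.0, 0.0, 1.0, 0.0], 20.0, 20000)
# the (x, xdot) components of the two formulations agree to integrator accuracy,
# and (c, s) remains on the circle c**2 + s**2 = 1
```

The pair $(c, s)$ starts on the unit circle and stays there, which is precisely the algebraic constraint that the S-procedure enforces in the sum of squares program of Sec. III.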

There are a few requisite remarks that need to be made in view of this auxiliary function method. The first is that this auxiliary function method computes the maximal long-time average of trajectories across *all* initial conditions, but it does not give the knowledge of *which* initial conditions or trajectories attain the maximal long-time average. If one were interested in the analysis of the particular solution that achieves the maximal long-time average, other methods would need to be developed. This is perhaps one potential drawback of the method. However, in many applications, the knowledge of these extremal averages is sufficient for the study at hand. This method is particularly well-suited for systems subject to random initial conditions and when long-time statistics are of the utmost importance.

## III. MODEL SETUP

Our general model of interest is a parametrically driven, coupled oscillator system of the form

$$\ddot{x}_1 + \omega_0 g\,\dot{x}_1 - \omega_0 r\,\dot{x}_2 + \omega_0^2\left[1 + h\cos(\gamma t)\right]x_1 = 0,$$
$$\ddot{x}_2 + \omega_0 g\,\dot{x}_2 + \omega_0 r\,\dot{x}_1 + \omega_0^2\left[1 + h\cos(\gamma t + \varphi)\right]x_2 = 0. \tag{13}$$

In Eq. (13), $\omega_0$ denotes the proper frequency of the oscillators and $g$ is the intrinsic loss term, which is taken to be equal for both oscillators for the sake of simplicity. The $h$ and $\gamma$ terms are the intensity and frequency of the parametric stiffness terms, respectively. Throughout the remainder of the paper, frequencies such as $\gamma$ are given in units of $\omega_0$.

We focus on $h \in [0,1]$ and $\gamma \in [-3,3]$, as this is the regime which encapsulates most physically relevant phenomena. The coupling strength $r$ describes an energy-preserving coupling between the oscillators, which corresponds to rotations in the $(x_1, x_2)$ plane and preserves the system’s total energy. We note that in the limit as $r \to 0$, the system in Eq. (13) becomes equivalent to two decoupled parametric oscillators described by two damped Mathieu equations. Finally, $\varphi$ denotes the phase difference between the oscillators. Just as in Ref. 20, we will focus on the cases $\varphi = 0$, $\frac{\pi}{2}$, and $\pi$, and we remark that the system in Eq. (13) for the aforementioned $\varphi$ values exhibits three potential resonance frequencies at $\gamma = 2\Omega_r$ and $\gamma = 2\Omega_r \pm \omega_0 r$, where $\Omega_r = \omega_0\sqrt{1 + (r^2 - g^2)/4}$.

For the purposes of the auxiliary function method for long-time averages, the system in Eq. (13) can be written as a first order, coupled system with exclusively polynomial terms,

where $\varphi = 0$. The system in Eq. (14) can be written similarly for additional values of $\varphi$ via elementary trigonometric identities. The constraint $x_3^2 + x_4^2 = 1$ is enforced via the S-procedure and forces the variables $x_3$ and $x_4$ to be uniquely determined. However, we remark that the S-procedure is not enforced when $\gamma = \varphi = 0$, as Eq. (14) becomes independent of trigonometric terms. We employ the auxiliary function method for long-time averages on the quantity of interest

$$\Phi = x_1^2 + x_2^2. \tag{15}$$

Hence, computing $\bar{\Phi}^*$ gives the maximal long-time average of the summed, mean squared amplitudes, which allows us to distinguish between small and large amplitude, parameter dependent solutions. The intuition is that small or large amplitude solutions should generically correlate with asymptotically stable or unstable solutions, respectively. Since the maximal long-time averages will be either zero or infinite in the regions of stability and instability, respectively, the auxiliary function method will allow us to trace out the threshold between stability and instability. The fineness of the threshold, moreover, only depends on one’s mesh size for the system parameters.^{57} With $\Phi$ as defined in Eq. (15), the sum of squares program to solve becomes

where $f(x_1, x_2, y_1, y_2, x_3, x_4)$ is the polynomial vector field as defined in Eq. (14), $V$ is an auxiliary function polynomial whose degree may vary, and we have enforced the S-procedure at all $\gamma$ values other than zero. The convex optimization problems in this paper were solved using Mosek^{58} paired with Yalmip,^{59} and for the interested reader, one can find the implementation of Eq. (16) as well as code relating to the Duffing equation on the lead author’s webpage.

We remark that similar computations using the auxiliary function method have already appeared in Ref. 56, where the model of interest was the Duffing oscillator. In particular, it was shown that the auxiliary function method accurately reproduces the Duffing equation’s frequency response curve and parameter dependent hysteresis phenomena as derived by harmonic balance down to numerical precision. This further validates the auxiliary function method as a tool to study oscillator dynamics.

To the authors’ knowledge, this is the first time that the auxiliary function method has been applied to studying parametric, coupled oscillators or has been used to explicitly study dynamical stability. Moreover, as we will show, the dynamics of Eq. (13) are substantially more complicated than what can be described by asymptotic methods. The reason is that in contrast to the Duffing equation, our model has two *coupled*, parametric oscillators, and the non-autonomous forcing appears parametrically instead of externally. Hence, the synchronization of the two oscillators plays a crucial role in their stability, and that synchronization crucially depends on one’s choice of system parameters.
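To make the role of the coupling concrete, the sketch below integrates a coupled pair of the kind described above. The explicit normalization is our assumption (intrinsic loss $\omega_0 g$ and a gyroscopic, energy-preserving coupling $\omega_0 r$ entering the two equations with opposite signs), chosen to be consistent with the description of Eq. (13); it is an illustration rather than a verbatim reproduction of the model.

```python
import numpy as np

# Assumed explicit form (an illustration consistent with the description of
# Eq. (13), not necessarily the paper's exact normalization):
#   x1'' + w0*g*x1' - w0*r*x2' + w0^2 * (1 + h*cos(gam*t)) * x1 = 0
#   x2'' + w0*g*x2' + w0*r*x1' + w0^2 * (1 + h*cos(gam*t + phi)) * x2 = 0
def coupled_rhs(t, u, w0, g, r, h, gam, phi):
    x1, v1, x2, v2 = u
    a1 = -w0 * g * v1 + w0 * r * v2 - w0**2 * (1.0 + h * np.cos(gam * t)) * x1
    a2 = -w0 * g * v2 - w0 * r * v1 - w0**2 * (1.0 + h * np.cos(gam * t + phi)) * x2
    return np.array([v1, a1, v2, a2])

def integrate(u0, t_final, steps, **p):
    # Fixed-step fourth-order Runge-Kutta; returns the state at t = t_final.
    u, t = np.asarray(u0, dtype=float), 0.0
    dt = t_final / steps
    for _ in range(steps):
        k1 = coupled_rhs(t, u, **p)
        k2 = coupled_rhs(t + dt / 2, u + dt / 2 * k1, **p)
        k3 = coupled_rhs(t + dt / 2, u + dt / 2 * k2, **p)
        k4 = coupled_rhs(t + dt, u + dt * k3, **p)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return u

params = dict(w0=1.0, g=0.01, r=0.2, h=0.0, gam=1.0, phi=0.0)
u0 = np.array([1.0, 0.0, 0.5, 0.0])
uT = integrate(u0, t_final=150.0, steps=30000, **params)
# with the parametric drive off (h = 0), the gyroscopic coupling conserves
# energy while the loss g > 0 damps every trajectory toward the origin
```

Raising $h$ into an instability tongue instead produces unbounded growth; sweeping $(\gamma, h)$ over a fine grid with such an integrator, for many initial conditions, is the expensive, initial-condition-dependent alternative that the auxiliary function method replaces.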

## IV. ASYMPTOTIC ANALYSIS

We will corroborate the findings of the auxiliary function method for long-time averages via comparison with the approximate solutions established by Floquet theory. This is a standard method that is discussed by many sources on asymptotic analysis; the specific reference used here is Ref. 60.

Consider $x(t) \in \mathbb{R}$ satisfying

$$\ddot{x}(t) + F(t)\,x(t) = 0, \tag{17}$$

where $F(t)$ is a periodic function with period $T$. Equation (17) is a second order differential equation and, thus, has two linearly independent solutions $x_1(t)$ and $x_2(t)$.

Since $x_1(t)$ and $x_2(t)$ are linearly independent, they span the solution space of Eq. (17). In particular, there must exist $M \in \mathbb{R}^{2\times 2}$ such that

$$\begin{pmatrix} x_1(t+T) \\ x_2(t+T) \end{pmatrix} = M \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}, \tag{18}$$

where $M$ is called the *monodromy matrix* of $x_1(t)$ and $x_2(t)$. A monodromy matrix can be constructed for any pair of linearly independent solutions of Eq. (17). One can show that the determinant of the monodromy matrix is $1$ and that the eigenvalues of the monodromy matrix are independent of the pair of solutions that one chooses to start with.

Suppose $\lambda_1$, $\lambda_2$ are eigenvalues of a monodromy matrix $M$ corresponding to solutions $x_1(t)$ and $x_2(t)$. For each $\lambda_j$, there exists a linear combination $x_{\lambda_j}(t)$ of $x_1(t)$ and $x_2(t)$ such that

$$x_{\lambda_j}(t+T) = \lambda_j\, x_{\lambda_j}(t). \tag{19}$$

*Floquet’s theorem* states that we may write

$$x_{\lambda_j}(t) = e^{-i\mu_j t}\, u_{\lambda_j}(t), \tag{20}$$

where $u_{\lambda_j}(t)$ is periodic with period $T$ and $\mu_j \in \mathbb{C}$ is such that $\lambda_j = e^{-i\mu_j T}$. Since $u_{\lambda_j}(t)$ is periodic, one may expand $u_{\lambda_j}(t)$ as a Fourier series and recover the analytic solution $x_{\lambda_j}(t)$ to within arbitrary accuracy by recursively solving for the Fourier amplitudes.
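The Floquet construction above can also be carried out numerically. As a minimal sketch (our own illustration, using the undamped Mathieu equation $\ddot{x} + (\delta + \varepsilon\cos t)x = 0$ as a stand-in for Eq. (17)), one builds the monodromy matrix column by column from the solutions with initial data $(x, \dot{x})(0) = (1, 0)$ and $(0, 1)$, then reads off stability from the moduli of its eigenvalues:

```python
import numpy as np

def monodromy(delta, eps, steps=4000):
    # Monodromy matrix of x'' + (delta + eps*cos t) x = 0 over one period
    # T = 2*pi, integrated with fixed-step fourth-order Runge-Kutta.
    T = 2.0 * np.pi
    dt = T / steps
    def f(t, u):
        return np.array([u[1], -(delta + eps * np.cos(t)) * u[0]])
    M = np.zeros((2, 2))
    for j, u0 in enumerate([(1.0, 0.0), (0.0, 1.0)]):
        u, t = np.array(u0), 0.0
        for _ in range(steps):
            k1 = f(t, u)
            k2 = f(t + dt / 2, u + dt / 2 * k1)
            k3 = f(t + dt / 2, u + dt / 2 * k2)
            k4 = f(t + dt, u + dt * k3)
            u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        M[:, j] = u  # column j: state at t = T started from the j-th basis vector
    return M

M_unstable = monodromy(0.25, 0.2)  # near the center of the first resonance tongue
M_stable = monodromy(5.0, 0.2)     # far from any resonance tongue
mu_u = np.abs(np.linalg.eigvals(M_unstable)).max()
mu_s = np.abs(np.linalg.eigvals(M_stable)).max()
# det(M) = 1 in both cases (no damping); a largest multiplier with modulus
# above one flags parametric instability
```

For a damped system the determinant is no longer one, but the same eigenvalue criterion applies; the Fourier-amplitude recursion discussed below is the analytic counterpart of this numerical construction.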

For Eq. (13) with $\varphi = 0$, we generally follow the method outlined in Ref. 20, where the study focused on frequencies at or near parametric resonance due to an interest in phase-locking, but our novel contribution is that we do not limit our attention to resonant frequencies.

The equations in Eq. (13) can be decoupled by performing a change of basis and defining

$$x_{\pm}(t) = x_1(t) \pm i\,x_2(t). \tag{21}$$

Adding the first line of Eq. (13) to $\pm i$ times the second line yields

$$\ddot{x}_{\pm} + \omega_0\left(g \pm ir\right)\dot{x}_{\pm} + \omega_0^2\left[1 + h\cos(\gamma t)\right]x_{\pm} = 0, \tag{22}$$

where the solutions in the new basis $x_{\pm}(t)$ exhibit real and imaginary loss terms. We then let

for $y_{\pm}(t)$ to be determined. Upon substituting Eq. (23) into the decoupled system of Eq. (22), we arrive at

Since the coefficient of $y_{\pm}$ in Eq. (24) is periodic with period $T = 2\pi/\gamma$, then according to Floquet theory, we may write

$$y_{\pm}(t) = e^{-i\mu t}\, f_{\pm}(t), \tag{25}$$

where $f_{\pm}(t)$ are some periodic functions with period $T = 2\pi/\gamma$. Wherever $\mathrm{Im}(\mu) > 0$, $y_{\pm}(t)$ will be unstable as $t \to \infty$. Since $f_{\pm}(t)$ are periodic, we may express them as Fourier series

$$f_{\pm}(t) = \sum_{n=-\infty}^{\infty} A_n^{\pm}\, e^{in\gamma t}, \tag{26}$$

where $A_n^{\pm}$ denotes the amplitude of the $n$th Fourier coefficient for $f_{\pm}(t)$, respectively. If we substitute Eq. (26) into Eq. (24) and collect the coefficients of $e^{in\gamma t}$, we get the recursion relation for the Fourier coefficients $A_n^{\pm}$ as derived by Ref. 20,

$$D_n(\mu)\,A_n^{\pm} + \frac{h\omega_0^2}{2}\left(A_{n+1}^{\pm} + A_{n-1}^{\pm}\right) = 0, \tag{27}$$

where $D_n(\mu) = \omega_0^2 - (n\gamma - \mu)^2 + i\omega_0(g + ir)(n\gamma - \mu)$. We note that if the driving intensity $h$ is less than one, $A_n^{\pm}$ is coupled most strongly to $A_{n\pm 1}^{\pm}$, with coupling proportional to $h$. This is because the Fourier amplitudes $A_n^{\pm}$ are coupled to $A_{n\pm m}^{\pm}$, with $m > 0$, with an intensity on the order of $h^m$. Without loss of generality, we will consider $n = 0$. Therefore, one recovers a matrix equation of the form

where one defines

Note that enforcing the determinant in Eq. (28) to vanish ensures that there are non-trivial solutions to Eq. (24). In contrast, it was also shown in Ref. 20 that the simplifying assumptions of $r = g = 0$ and ignoring the coupling effects of $D_{+1}$ yield the matrix equation

The matrix equation in Eq. (30) is a consequence of assuming the coupling effects as well as the higher order effects are negligible in the asymptotic analysis. Hence, we hereafter refer to the solutions of Eq. (30), which is the same as presented in Ref. 20, as the simplified, uncoupled solution.

Similar to Ref. 20, we focus on the solutions of Eq. (28) that display parametric resonance at $\gamma = 2\Omega_r, 2\Omega_r \pm \omega_0 r$ and the solutions of Eq. (30) that display parametric resonance at $\gamma = 2\Omega_r$; we present the solutions in Sec. V. We show that the simplifying assumptions have highly non-trivial effects on the system’s regions of stability at and away from parametric resonance frequencies.

## V. RESULTS

We first compare the stability region of the results established via Eq. (28) with the stability region of the simplified, uncoupled solution of Eq. (30). We then compare the stability regions as predicted by asymptotic analysis with the stability region as predicted by the auxiliary function method for long-time averages.

### A. Influence of higher order effects

We employ MATLAB’s Solve function to find the roots in $\mu$ of Eq. (28); the solutions are sufficiently complicated and long that we do not include them in the text.^{61} This yields six solutions. In Fig. 1,^{62} we plot the contours of the imaginary part of $\mu$, combining the boundaries obtained from the six solutions of our more generalized result, in comparison to the simplified, uncoupled solution to Eq. (30) for a wide range of parameter values.

In Fig. 1, we see that the more general, higher order asymptotic analysis solution is a dramatic and surprising improvement on the simplified, uncoupled solution, as the region of instability for the simplified, uncoupled asymptotic result is a subset of the region of instability for the more general asymptotic result. While there is good agreement between the two solutions for $|\gamma| > 1$, there is a large region within the parameter space, $|\gamma| \le 1$, for which the simplified, uncoupled asymptotic analysis solution predicts stability, while the more general, higher order solution reveals instability.

The fact that the simplified, uncoupled asymptotic expansion stability solution fails so drastically for $|\gamma| \le 1$ is quite a surprising finding. This means the simplified solutions are dangerously un-conservative in predicting regions of stability. However, it then becomes natural to ask if the more general, higher order solution also fails to fully describe the stability region of the solutions to Eq. (13). That is, we may ask if including successively higher order terms in the asymptotic expansion would lead to such drastic changes in the stability region as seen between the two solutions in Fig. 1. In order to address this question, we compare the stability regions as predicted by the general asymptotic analysis with the stability regions predicted by the auxiliary function method for long-time averages.

### B. Asymptotic analysis vs the auxiliary function method

Upon computing the stability region with the auxiliary function method for long-time averages using the expression in Eq. (15), we arrive at the solid black line seen in Fig. 2.

In Fig. 2, we see that the more general asymptotic analysis result agrees quite well with the results of the auxiliary function method. In particular and most noticeably, both methods capture a pair of narrow, protruding tongues that occur at $\gamma \in \{-1, 1\}$. However, there is still quite a large range of $\gamma$ values for which the auxiliary function method predicts potential instability, but the asymptotic analysis solution to Eq. (28) predicts stability.

In order to validate the extended region of instability as predicted by the auxiliary function method for long-time averages, we choose four points within the parameter space at which to perform direct numerical simulation using ode45 in MATLAB. The results show that the auxiliary function method for long-time averages is corroborated via direct numerical simulation.

That is, for points marked by red crosses inside the region of instability, there is a blow up of the solutions, with the time histories shown on the right-hand side of Fig. 3. However, points marked by open blue circles just outside the instability region experience decay, with the time histories shown on the left-hand side of Fig. 3. In the Appendix, we include additional points across the parameter phase space for which solutions are stable or unstable as predicted by direct numerical simulation in order to further validate our results.

Additionally, Fig. 3 displays three trajectories for three randomly chosen initial conditions at each of the four chosen points in parameter space. We also note that Fig. 3 only plots the trajectory of $x_1(t)$ for our model.

This is perhaps a case study example of how the auxiliary function method can lend itself to finding regions of instability where perturbative methods fail. Strikingly, the perturbative method for this system seems to fail quite drastically, with a large portion of the stability diagram not being captured by the perturbative method but instead by the auxiliary function method. Therefore, the auxiliary function method has the advantage of being able to recover the true stability region both at and away from parametric resonance, while doing so in a computationally efficient way. We remark that the auxiliary function method is quite computationally efficient despite the polynomial representation, seen in Eq. (14), containing five degrees of freedom. In particular, the CPU times corresponding to the computation of $\bar{\Phi}^*$ at the points $(\gamma, h) = (-1.13, 0.650)$, $(-1.00, 0.650)$, $(-0.660, 0.390)$, and $(-0.660, 0.490)$ are 3.4844, 2.6744, 2.8574, and 2.5737 s for a degree 8 auxiliary function and a degree 6 S-procedure enforcement on nothing more than a standard laptop using a single-core, 2.2 GHz processor.^{63}

In light of the above results, it then becomes natural to analyze how varying the system parameters changes the above regions of stability. In particular, we study the effects of varying the coupling strength in Sec. V C.

### C. Effects of damping and coupling parameters

In order to study the effects of varying the coupling strength, we consider Eq. (21) for $r=0.4$ and compare the stability region with the prior results of both the auxiliary function method and the asymptotic analysis for $r=0.2$ and $g=0.01$. We do not consider negative values of $r$ because the coupling term appears negatively in the first equation and positively in the second equation of Eq. (21); hence, a sign change of $r$ simply swaps the roles of $x_1(t)$ and $x_2(t)$. Also, we focus on $\gamma \in [-1.5, 1.5]$, as this is where the discrepancies between the auxiliary function method and the asymptotic analysis were established in Sec. V B.

Performing the same procedure with the auxiliary function method as in Sec. V B, and incorporating the predictions of the asymptotic analysis, we find in Fig. 4 that the region of stability for $r=0.2$ is a subset of the region of stability for $r=0.4$. Intuitively, one would expect this because, as the $r$-value increases, the rate of energy transfer from one oscillator to the other increases and, hence, initial transients are far less likely to become unstable.

Figure 4 shows that the discrepancies between the auxiliary function method and the asymptotic analysis solution persist for $r=0.4$. Moreover, the discrepancies seem to grow as the value of $r$ is increased. Therefore, as the coupling strength between the oscillators increases, the asymptotic analysis solution to Eq. (28) increasingly fails to capture the true stability region of the system, because the coupling terms amplify the relative importance of the higher order terms, particularly away from parametric resonance, where the sine and cosine terms do not cancel. We also note that Fig. 4 is for $g=0.01$; if one is interested in the effects of $g$, one can similarly compute regions of stability for fixed $r$ and different $g$ values. The expected result would simply be an upward shift of the region of stability appearing in Fig. 1, since $g$ appears in the damping term and increasing the damping coefficient should increase the size of the region of stability.
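Because the system is linear with time-periodic coefficients, a stability claim at a single parameter point can also be cross-checked by Floquet theory: integrate the fundamental matrix over one drive period and test whether the spectral radius of the resulting monodromy matrix exceeds one. The sketch below is a hedged illustration of that check; it uses an assumed coupled-Mathieu form standing in for Eq. (21), which is not reproduced here, and all parameter names are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coefficient_matrix(t, gamma, h, g, r, w0, phi):
    # State matrix A(t) for y = (x1, x1', x2, x2') of an assumed coupled Mathieu pair
    s1 = w0**2 * (1.0 + h * np.sin(gamma * t))
    s2 = w0**2 * (1.0 + h * np.sin(gamma * t + phi))
    return np.array([[0.0, 1.0, 0.0, 0.0],
                     [-s1, -g,  r,  0.0],
                     [0.0, 0.0, 0.0, 1.0],
                     [-r,  0.0, -s2, -g]])

def monodromy_spectral_radius(gamma, h, g, r, w0=1.0, phi=0.0):
    # Propagate each basis vector over one drive period 2*pi/gamma to build
    # the monodromy matrix column by column
    T = 2.0 * np.pi / abs(gamma)
    cols = []
    for e in np.eye(4):
        sol = solve_ivp(lambda t, y: coefficient_matrix(t, gamma, h, g, r, w0, phi) @ y,
                        (0.0, T), e, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.column_stack(cols)
    # Spectral radius > 1 signals a growing Floquet solution, i.e., instability
    return float(np.max(np.abs(np.linalg.eigvals(M))))
```

Sweeping such a check over $r$ at fixed $(\gamma, h)$ gives an independent, pointwise means of confirming that the stable set for one coupling strength is contained in that for a larger one.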

### D. Effects of the parametric phase

Finally, we study the effects of varying the phase value $\varphi$. In a fashion similar to Ref. 20, we focus on $\varphi = 0, \pi/2, \pi$. However, it is important to remark that Strinati *et al.*^{20} were unable to determine a closed form, asymptotic solution for varying $\varphi$. We also could not find a closed form solution to the analog of Eq. (28) with $\varphi \neq 0$. The ability to find the stability region for general system parameters proves to be another benefit of the auxiliary function method. Indeed, by performing the analogous computations via the auxiliary function method and choosing the same $\Phi$ that appears in Eq. (13), we arrive at Fig. 5. Figure 5 shows that the auxiliary function method reproduces the protruding tongues around $\gamma = \pm 2\Omega_r$ and $\gamma = \pm 2\Omega_r \pm \omega_{0r}$, with $\varphi = 0$, $\varphi = \pi$, and $\varphi = \pi/2$ corresponding to one, two, and three tongues, respectively. Moreover, note that varying $\varphi$ has non-trivial effects throughout the entirety of parameter space, and not only about $\gamma = \pm 2\Omega_r, \pm 2\Omega_r \pm \omega_{0r}$, as varying the phase of the parametric oscillator term can lead to more localized tongue formation.

Since the phase of a parametric oscillator can often be random, it is important to identify the instability region when varying phases are taken into consideration, especially when there is non-trivial disagreement between the true stability region and the region as predicted by asymptotic analysis.

## VI. CONCLUSION AND DISCUSSION

In summary, we have investigated the higher order effects caused by coupling parameters on the stability region of a parametrically driven, coupled oscillator system across a broad range of modulation frequencies. We have shown that the stability region predicted by simplified, second order asymptotic methods that ignore the coupling terms between the oscillators can differ quite substantially from the true stability region at modulation frequencies away from the parametric resonance frequencies. The simplified asymptotic solution is un-conservative compared to a full, second order asymptotic solution that retains the coupling terms, which can, in turn, still be non-trivially un-conservative in comparison to the true stability region at modulation frequencies away from the parametric resonance frequencies. The differences are caused both by neglecting the coupling terms and by higher order effects.

This exposes a drawback of critical importance for asymptotic methods: the validity of the asymptotic results depends crucially on one's choice of approximations. However, there is currently no way to know a priori which order will be sufficient to capture the true instability region, as this depends on the strength of the coupling terms, $g$ and $r$, as well as on the phase $\varphi$ of the parametric oscillator term. The solution is expected to be even more sensitive to these system parameters if the resonance frequencies of the individual oscillators differ or if nonlinear terms are present.

Hence, we have shown that the auxiliary function method for long-time averages is an efficient and robust means of computing the true long-time averages and true regions of stability across all possible initial conditions without the need for ad hoc approximations. Moreover, this auxiliary function method has the advantage of being able to compute regions of stability both at and away from parametric resonance.

The differences between the true stability region and the approximate stability region are immaterial if one is operating within a region of parameter space where both boundaries agree. However, if one is operating within a region of parameter space for which they disagree, this may be quite problematic for both experimental and real-world implementation, especially without the knowledge that one is operating within such a region of disagreement. This point is exacerbated by the fact that, at least in studying Eq. (13), we discovered two very narrow, protruding tongues in parameter space for which the system was potentially unstable. Moreover, these tongues occur at naturally occurring, non-pathological values of the driving frequency $\gamma$.

In the context of applications and future research directions, our results have significant implications for the reliance on asymptotic methods across a variety of applied sciences. They also illuminate future research directions with regard to the applicability of the auxiliary function method for long-time averages.

For machine learning and environmental modeling applications, one is often interested in establishing a dynamical system via data-driven methods,^{64–67} and many loss functions for the training of these machine learning models can be expressed in terms of long-time averages. This poses a potential connection and future direction of research between the training of machine learning models and this auxiliary function method. Moreover, if data sampling is performed on a relatively sparse spatial or temporal grid, there may be narrow and protruding regions of parameter space, such as in Fig. 2, for which the dynamics are unstable, and utilizing asymptotic methods may fail to accurately predict the underlying stability. Hence, we recommend this auxiliary function method as a viable method for checking system stability.
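To make the long-time average concrete: the quantity of interest is $\overline{\Phi} = \lim_{T\to\infty} \frac{1}{T}\int_0^T \Phi(x(t))\,dt$ for an observable $\Phi$ along a trajectory, and the auxiliary function method bounds such averages without integrating trajectories at all. As a minimal, hypothetical Python illustration (not the paper's observable), the sketch below estimates $\overline{\Phi}$ for $\Phi = x^2$ along a simple harmonic oscillator, where the exact average is $1/2$ for unit amplitude.

```python
import numpy as np
from scipy.integrate import solve_ivp

def long_time_average(T=200.0, n=20001):
    # Estimate the long-time average of Phi = x^2 along x'' + x = 0,
    # with x(0) = 1, x'(0) = 0, so x(t) = cos(t)
    t = np.linspace(0.0, T, n)
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, T), [1.0, 0.0],
                    t_eval=t, rtol=1e-10, atol=1e-12)
    phi = sol.y[0]**2           # observable Phi sampled along the trajectory
    return float(phi.mean())    # uniform sampling approximates (1/T) * integral
```

For large `T` this returns approximately 0.5, matching the exact average of $\cos^2 t$; the finite-`T` error decays like $1/T$, which is precisely the slow convergence that makes rigorous auxiliary-function bounds attractive.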

For neuroscience and biological applications, studying synchrony behavior or global phase locking of neuronal firing or circadian rhythms via asymptotic methods may fail to capture sensitivity to a system’s parameters. Moreover, with measures of synchrony being realized as time averages of the underlying oscillator’s correlations,^{12,13} this auxiliary function method may prove to be an indispensable tool in both capturing dynamic sensitivity to a system’s parameters and concretely computing synchrony measures.

For ecological applications, many ecologists have been trying to determine mechanisms that can stabilize ecosystems and support the biodiversity observed empirically as well as investigate the reactivity and subsequent stability of an ecological system.^{68,69} With previous work done on the applicability of this auxiliary function method to control problems^{45} and with our results establishing this method’s applicability to determining regions of stability, it may be that this method helps to illuminate these ecosystem stabilizing mechanisms as well as serve as an effective computational tool for determining parameter dependent stability.

Hence, one can see the broad and diverse applicability of this auxiliary function method in various scientific disciplines. Moreover, as we showed, one can also see the potential dangers of relying on asymptotic methods in determining stability. Therefore, it is recommended that the instability region be determined and considered when planning new experiments or full-scale implementation, as disregarding the stability region or only relying on asymptotic methods may have costly consequences.

## DEDICATION

We are greatly indebted to Dr. Charles Doering (1956–2021)—a great mentor, friend, and colleague. His insights, commentary, and suggestions immensely impacted this work, and the entire scientific community will continue to miss him.

## ACKNOWLEDGMENTS

Support for this research is provided by the U.S. Office of Naval Research, Contract No. N00014-18-C-1025 managed by Ms. Deborah Nalchajian.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

## DATA AVAILABILITY

The MATLAB code that supports the findings of this study is openly available as supplementary code at https://sites.google.com/umich.edu/andrewmcmillan/research, Ref. 3.

### APPENDIX: FURTHER NUMERICAL VALIDATION OF STABILITY REGIONS

In order to further corroborate our stability region findings obtained via the auxiliary function method for long-time averages, we include an additional figure, analogous to Fig. 2, with greater data sampling across the parameter phase space. We make vertical cuts across the parameter phase space with 0.05 spacing and horizontal cuts with 0.25 spacing; this results in Fig. 6. The blue circles and red crosses in Fig. 6 were determined in an identical fashion to those displayed in Fig. 2. The DNS is performed using ode45 in MATLAB for randomly chosen initial conditions of Eq. (13), and we see that there is strong agreement between the stability as predicted by the auxiliary function method for long-time averages and DNS.

## REFERENCES

*Proceedings of the 2002 IEEE International Frequency Control Symposium and PDA Exhibition (Cat. No. 02CH37234)* (IEEE, 2002), pp. 580–583.

*7–14 March 2020*, (

*[1992] Proceedings of the 31st IEEE Conference on Decision and Control* (IEEE, 1992), Vol. 4, pp. 3025–3030.

*Partial Differential Equations & Boundary Value Problems with Maple*, 2nd ed., edited by G. A. Articolo (Academic Press, Boston, 2009), pp. 409–476.

*Higher Order Asymptotics*, NSF-CBMS Regional Conference Series in Probability and Statistics Vol. 4 (Institute of Mathematical Statistics, 1994), pp. i–111.

*High Performance Optimization*, edited by H. Frenk, K. Roos, T. Terlaky, and S. Zhang (Springer US, Boston, MA, 2000), pp. 197–232.

*IEEE International Conference on Robotics and Automation* (IEEE, 2004).

*Applied Asymptotic Analysis*, Graduate Studies in Mathematics (American Mathematical Society, 2006).

*2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)* (IEEE, 2018), pp. 462–469.