Deep brain stimulation (DBS) is a commonly used treatment for medication-resistant Parkinson’s disease and is an emerging treatment for other neurological disorders. More recently, phase-specific adaptive DBS (aDBS), whereby the application of stimulation is locked to a particular phase of tremor, has been proposed as a strategy to improve therapeutic efficacy and decrease side effects. In this work, in the context of these phase-specific aDBS strategies, we investigate the dynamical behavior of large populations of coupled neurons in response to near-periodic stimulation, namely, stimulation that is periodic except for a slowly changing amplitude and phase offset that can be used to coordinate the timing of applied input with a specified phase of model oscillations. Using an adaptive phase-amplitude reduction strategy, we illustrate that for a large population of oscillatory neurons, the temporal evolution of the associated phase distribution in response to near-periodic forcing can be captured using a reduced order model with four state variables. Subsequently, we devise and validate a closed-loop control strategy to disrupt synchronization caused by coupling. Additionally, we identify strategies for implementing the proposed control strategy in situations where underlying model equations are unavailable by estimating the necessary terms of the reduced order equations in real-time from observables.

High frequency deep brain stimulation (DBS) is a well-established treatment for Parkinson’s disease. In most clinical settings, DBS is implemented by injecting 130–180 Hz pulse trains of electrical current into an appropriate brain region. Recent evidence suggests that phase-specific adaptive deep brain stimulation (aDBS), whereby the application of stimulation is locked to a particular phase of tremor, may improve therapeutic efficacy and reduce side effects. Motivated by these preliminary aDBS control strategies, this work focuses on understanding the dynamical behavior of a large population of coupled neurons in response to a nearly periodic input. Here, we apply phase-amplitude-based coordinate reduction strategies to identify a low-dimensional model and subsequently devise and validate a closed-loop control strategy to manipulate the aggregate behavior of the neural population. This work can be viewed as a potential theoretical underpinning for the phase-specific adaptive DBS strategies that have gained interest in recent years.

Deep brain stimulation is a well-established treatment for medication-resistant Parkinson’s disease.4,37 DBS has also shown promise as a treatment for other neurological disorders, such as epilepsy,25 depression,31 tinnitus,56 and Tourette syndrome.53 While the fundamental mechanisms of DBS are not fully understood, one prevalent hypothesis postulates that DBS disrupts pathological synchronization among neurons in the basal ganglia-cortical loop associated with the motor symptoms of Parkinson’s disease.5,17,20,24 This hypothesis has led to the development of a variety of electrical stimulation techniques designed to desynchronize populations of neurons. Coordinated reset28,57 is one such strategy, whereby high frequency stimulation is coordinated at different sites to desynchronize a population of oscillators that are distributed in space. The coordinated reset strategy has been shown to produce long lasting therapeutic effects in both animal and human trials.1,60 Other methods for achieving neural desynchronization have focused on driving neurons to a phaseless set so that they desynchronize during the relaxation to their nominal limit cycles,36 identifying energy optimal stimuli that achieve positive Lyapunov exponents and produce chaotic desynchronization,64,71 controlling neurons to a prespecified phase distribution,34 and separating neurons into distinct clusters.12,29,72

In a clinical setting, DBS is usually applied by injecting high frequency (130–180 Hz) pulse trains of electrical current into an appropriate brain region.21,59 While DBS is administered in an open-loop fashion in most cases, there has been a growing interest in developing demand-controlled and adaptive methods for applying DBS both to increase therapeutic efficacy and reduce the emergence of side effects. For instance, measured local field potentials (LFPs) have been used as a control variable with the goal of keeping the LFP power spectrum close to that of the tremor-free condition.32,44,46 While most adaptive DBS (aDBS) strategies are fairly simple from a control theoretic perspective, generally turning off or on depending on the value of the chosen biomarker relative to some threshold, other experimental7,18,49 and computational studies3,10,19 have shown that timing the application of DBS input relative to the phase of a patient’s tremor rhythm can also be an effective closed-loop stimulation strategy. Indeed, phase-amplitude reduction strategies can be used to predict how phase-triggered stimulation will influence the amplitude of pathological oscillations63 that are thought to contribute to the symptoms of Parkinson’s disease; this information can subsequently be used to develop stimulation protocols for suppressing oscillation amplitude.9,62 Additionally, slow modulation of the magnitude of the DBS input has been shown to be an effective aDBS strategy. For instance, increasing or decreasing the strength of the DBS pulses according to the beta band LFP power46 or the severity of tremor27 has been successfully demonstrated in human trials. Computational studies have also suggested that the magnitude of the DBS input could be controlled using a variety of delayed-feedback approaches;41–43 related delayed-feedback approaches consider non-pulsatile inputs.39,47

Motivated by these preliminary aDBS control strategies, this work focuses on understanding the dynamical behavior of a large population of coupled neurons in response to a nearly periodic input. By nearly periodic, we mean an input that is nominally periodic (for example, as is the case for conventional DBS) with the addition of a slowly varying amplitude and phase offset that can be used to coordinate the timing of an applied input with a specified phase of the population oscillation. As illustrated in the results to follow, these additional degrees of freedom allow for the development of a closed-loop control strategy that can be used to manipulate the phase cohesion (i.e., synchronization) of a large population of neural oscillators as measured by the Kuramoto order parameter. Additionally, we identify and validate a strategy for estimating the necessary terms of the proposed closed-loop control framework in real-time so that it can be implemented without knowledge of an underlying dynamical model.

The organization of this paper is as follows: Sec. II provides necessary background on the phase-amplitude reduction frameworks used for the theoretical analysis in this work. Section III derives a reduced order model to characterize the behavior of a large population of noisy, coupled, conductance-based neural oscillators. Here, we employ the recently developed adaptive phase-amplitude reduction framework67,68 in conjunction with formal averaging techniques16,51 to arrive at a low order, nonlinear model describing population-level behaviors of the neural oscillators. Section IV describes a closed-loop control strategy that can be used to manipulate the phase cohesion and disrupt synchronization in population-level oscillations. Additionally, we propose a data-based strategy to estimate the terms of the associated reduced order model in real-time. Theoretical results are validated by numerical simulations with results presented in Sec. V for a Fokker–Planck representation of the dynamics of a coupled oscillator population and for a full model of conductance-based neurons. Section VI provides concluding remarks.

Our primary focus in this work is on the analysis of coupled, noisy, conductance-based neural oscillators in response to near-periodic forcing. The general model is described below:

$$\begin{aligned}
C\dot{V}_i &= I_{\rm ion}(V_i,a_i) - \frac{g_0}{N}\sum_{j=1}^{N}\left[s_j(t-\tau)\,\big(V_i-E_{\rm syn}\big)\right] + I_{\rm stim}(t) + \sqrt{2D}\,\eta_i(t),\\
\dot{s}_i &= \frac{c_1(1-s_i)}{1+\exp\!\big(-(V_i-V_T)/\sigma_T\big)} - c_2 s_i,\\
\dot{a}_i &= f_a(V_i,a_i), \qquad i = 1,\dots,N.
\end{aligned}$$
(1)

Above, N sets the number of neurons in the population, C is a constant membrane capacitance, Vi is the transmembrane voltage of neuron i (in mV), ai ∈ R^Na is a collection of auxiliary variables (gating variables, ionic concentrations, etc.) with dynamics governed by fa, Iion is a collection of ionic currents, si is a synaptic variable whose dynamics are determined by the constants c1, VT, σT, and c2, Esyn = −100 mV is the reversal potential of the neurotransmitter so that the coupling is inhibitory, τ is a constant time delay, g0 > 0 is a constant conductance that sets the synaptic coupling strength, Istim(t) is an externally applied current that is applied identically to each neuron, and ηi(t) is a zero-mean white noise process associated with neuron i for which D = 1 μA²/cm⁴ sets the intensity. Here, the collection η1(t),…,ηN(t) is independent and identically distributed. The specific forms of Iion and fa are model dependent. All neurons in Eq. (1) are assumed to be identical. Because DBS is often applied to the subthalamic nucleus,17 for concreteness, we will use a model of thalamic neurons from Ref. 50 to specify the ionic dynamics. These model equations are given in the Appendix. Terms governing the synaptic variables are taken to be c1 = 3, VT = −20 mV, σT = 0.8 mV, and c2 = 1. Inhibitory synaptic coupling is chosen because it synchronizes the population in the absence of input. We emphasize that due to the lack of heterogeneity between neurons, the simple all-to-all coupling structure, and the lack of coupling between upstream and downstream neural populations, Eq. (1) is not meant to represent a specific target brain region for DBS. Rather, our goal is to address challenges associated with desynchronizing a population of pathologically synchronized neurons from both a dynamical systems and a control theoretic perspective.
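To make the structure of Eq. (1) concrete, the following minimal sketch performs one Euler–Maruyama update of the voltage and synaptic variables. The linear leak used for Iion is a hypothetical placeholder rather than the thalamic model from the Appendix, Istim is omitted, and the delayed synaptic values s_j(t − τ) are assumed to be supplied by the caller from a stored history buffer.

```python
import math
import random

# One Euler-Maruyama step for the population model (1).  I_ion below is a
# hypothetical linear leak placeholder (NOT the thalamic model from the
# Appendix), and I_stim is omitted; the point is only to illustrate how the
# mean-field synaptic coupling, synaptic gating, and noise terms enter.
def step_population(V, s, s_delayed, dt, rng,
                    C=1.0, g0=0.35, Esyn=-100.0, D=1.0,
                    c1=3.0, VT=-20.0, sigT=0.8, c2=1.0):
    N = len(V)
    s_bar = sum(s_delayed) / N                  # mean of s_j(t - tau)
    V_new, s_new = [], []
    for i in range(N):
        I_ion = -0.1 * (V[i] + 65.0)            # placeholder leak current
        I_syn = -g0 * s_bar * (V[i] - Esyn)     # all-to-all inhibitory coupling
        noise = math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        V_new.append(V[i] + dt * (I_ion + I_syn) / C + noise / C)
        act = 1.0 / (1.0 + math.exp(-(V[i] - VT) / sigT))
        s_new.append(s[i] + dt * (c1 * (1.0 - s[i]) * act - c2 * s[i]))
    return V_new, s_new
```

In a full simulation, s_delayed would be read from a circular history buffer to implement the fixed delay τ.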

In this work, we will adopt the hypothesis that pathological synchronization among neurons in the basal ganglia-cortical loop of the brain contributes to the motor symptoms of Parkinson’s disease,5,20,24 and that therapeutic DBS disrupts this synchronization. Consequently, we will consider the identification of control strategies using Istim(t) to desynchronize a nominally synchronized population of neurons from Eq. (1). Our focus in this work is on the study of the overall population response of (1) to nearly periodic inputs Istim(t); the equations governing Istim will take the general form

$$ I_{\rm stim}(t) = \mu f_s\!\left(t + \frac{\zeta T_s}{2\pi}\right), \qquad \dot{\zeta} = \epsilon f_\zeta(t), \qquad \dot{\mu} = \epsilon f_\mu(t), $$
(2)

where fs(t) is a Ts-periodic input with amplitude μ and phase offset ζ. Both ζ and μ are allowed to change slowly according to the relations fζ and fμ, respectively, so that Istim(t) is nearly Ts-periodic with a slowly varying magnitude and phase shift. Further structure for the terms fζ and fμ will be considered in the analysis to follow.
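The input structure of Eq. (2) can be sketched as follows; fs, fζ, and fμ are user-supplied placeholders, and the forward-Euler updates for the slow variables are illustrative only.

```python
import math

# Sketch of the near-periodic input (2): a nominally Ts-periodic waveform
# f_s whose amplitude mu and phase offset zeta drift on the slow time scale
# set by eps.  f_s, f_zeta, and f_mu are user-supplied placeholder functions.
def simulate_istim(f_s, f_zeta, f_mu, Ts, t_end, dt=0.01,
                   eps=0.0, mu0=1.0, zeta0=0.0):
    t, mu, zeta = 0.0, mu0, zeta0
    samples = []
    for _ in range(int(round(t_end / dt))):
        samples.append(mu * f_s(t + zeta * Ts / (2.0 * math.pi)))
        zeta += dt * eps * f_zeta(t)   # slow phase-offset update
        mu += dt * eps * f_mu(t)       # slow amplitude update
        t += dt
    return samples
```

With eps = 0 the construction reduces to a conventional, exactly Ts-periodic stimulus.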

Figure 1 shows the behavior of a population of N = 1000 synaptically coupled neurons from Eq. (1), taking the synaptic coupling time delay to be τ = 4 ms. Initially, the coupling strength g0 is set to zero, allowing noise to fully desynchronize the population. At t = 100 ms, the coupling strength is set to g0 = 0.35 mS/cm², yielding synchronized oscillations with a period of approximately 10.9 ms. During this simulation, panel (a) shows the transmembrane voltage of a representative sample of neurons (colored traces) along with the average voltage (black line). Panel (b) shows the average value of the time-delayed synaptic variable,

$$ \bar{s}(t) = \frac{1}{N}\sum_{j=1}^{N} s_j(t-\tau). $$
(3)

The average synaptic variable will be used as a model observable in the development of the control strategies in Secs. III–V. In Fig. 1, the emergence of synchronization can clearly be seen from the individual neural traces in panel (a). In order to quantify the level of synchronization, we will consider the Kuramoto order parameter

$$ r(t) = \left|\frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j(t)}\right|, $$
(4)

where θj is the asymptotic phase of neuron j [as defined by the isochrons according to Eq. (6)]. Intuitively, r takes values between 0 and 1 and gives a sense of the overall level of phase coherence. Values of r near 1 correspond to states that are more synchronized, i.e., with a probability distribution that is well approximated by a delta function. A fully desynchronized splay state, having a uniform distribution, returns r = 0. While the Kuramoto order parameter is not a perfect measure of synchrony (for example, rotating block solutions can also have an order parameter of 0), this metric is sufficient for the purposes of this work to quantify the level of synchronization of a distribution of oscillators. The order parameter associated with the simulations shown in Fig. 1 is shown in panel (c). Indeed, before coupling is applied, r(t) starts near zero and grows to r(t) ≈ 0.9 after the coupling is turned on, indicating a high degree of phase cohesiveness.
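The order parameter (4) is straightforward to evaluate numerically; the following minimal sketch computes r from a list of oscillator phases.

```python
import cmath
import math

# Direct evaluation of the Kuramoto order parameter (4) from a list of
# oscillator phases theta_j (in radians).
def order_parameter(phases):
    N = len(phases)
    return abs(sum(cmath.exp(1j * th) for th in phases)) / N
```

A perfectly synchronized population returns r = 1, while a splay state of equally spaced phases returns r = 0.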

FIG. 1.

For a collection of N neurons from Eq. (1), g0 is set to 0mS/cm2 for t<100. Starting at t=100, coupling is turned on taking g0=0.35mS/cm2. Colored traces in panel (a) show the transmembrane voltages of a representative sample of neurons and the black line shows the average voltage. A transition from incoherence to synchronization is clearly observed after the coupling is turned on. Panels (b) and (c) show the associated values of the time-delayed average value of the synaptic variable and the order parameter computed according to Eqs. (3) and (4), respectively.


Due to the complexity and high-dimensionality of Eq. (1), phase-amplitude-based model order reduction techniques will be used for further analysis. To this end, consider a general dynamical system of the form

$$ \dot{x} = F(x, p_0, u(t)), $$
(5)

where x ∈ R^N is the system’s state, F sets the dynamics, p0 ∈ R^M is a collection of nominal parameters, and u(t) ∈ R is an external input. Suppose that Eq. (5) admits a stable, T-periodic limit cycle xγ when u(t) = 0.

The timing of oscillations occurring in Eq. (5) can be analyzed in a reduced order setting using the well-established method of phase reduction.11,38,74 When using this strategy, the phase θ ∈ [0, 2π) is defined on the limit cycle and scaled so that dθ/dt = 2π/T = ω when u(t) = 0. The phase can be extended to include the basin of attraction of the limit cycle using the notion of isochrons,15,74 which are defined as follows: for a given p0 and taking u(t) = 0, for any initial condition a(0) ∈ xγ, the isochron corresponding to a(0) is defined to be the set of all b(0) such that

$$ \lim_{t\to\infty} \|a(t) - b(t)\| = 0, $$
(6)

where b(t) gives the solution of (5) with initial condition b(0) and ‖·‖ is some vector norm. Intuitively, initial conditions on the same isochron have the same asymptotic behavior. Defining the phase θ(x) using isochrons, one can arrive at a reduced order model by restricting attention to a close neighborhood of the periodic orbit,

$$ \dot{\theta} = \omega + z(\theta)\, u(t), $$
(7)

where z(θ) ≡ ⟨∂θ/∂x, ∂F/∂u⟩ is known as the phase response curve, with each partial derivative evaluated at xγ(θ) and u = 0. Note that in the transformation to (7), a Taylor expansion is used to obtain ẋ = F(x, p0, 0) + (∂F/∂u) u(t) + O(u²) as an intermediate step. Phase reduction is a tremendously powerful tool that can be used to analyze oscillations of the N-dimensional system (5) using a one-dimensional reduction. However, it is only valid in the weak perturbation limit, i.e., requiring the underlying assumption that u(t) = O(ϵ), where 0 < ϵ ≪ 1.
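The phase model (7) can be integrated with a simple explicit scheme; the sketch below uses forward Euler, and any PRC z and input u supplied by the caller are placeholders (a sinusoidal PRC is used in the test, not the PRC of the thalamic model).

```python
import math

# Forward-Euler integration of the phase reduction (7),
# theta_dot = omega + z(theta) * u(t).  The PRC z and input u are
# user-supplied; nothing here is specific to the neural model.
def integrate_phase(omega, z, u, theta0=0.0, t_end=50.0, dt=1e-3):
    n = int(round(t_end / dt))
    theta = theta0
    for k in range(n):
        t = k * dt
        theta += dt * (omega + z(theta) * u(t))
        theta %= 2.0 * math.pi     # keep the phase on [0, 2*pi)
    return theta
```

With u = 0 the phase simply advances at its natural frequency, completing one cycle every T = 2π/ω time units.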

Phase-only models of the form (7) typically fail when large magnitude inputs are required. Such inputs drive the state of the system far from the vicinity of the periodic orbit to states for which the local linearizations of the gradient of the phase are no longer accurate. To combat this issue, phase-amplitude based coordinate systems can be employed to capture the dynamics in directions transverse to the periodic orbit. A variety of such coordinate systems have been proposed and investigated in recent years.8,14,23,48,55,61,65 In this work, the notion of isostable coordinates will be used, which represent level sets of the slowest decaying eigenfunctions of the Koopman operator.30 As described in Ref. 69 (cf. Ref. 73), one can define the slowest decaying isostable coordinates in the basin of attraction of the limit cycle by exploiting Floquet theory. By first defining Δx = x − xγ(θ), one can write Δẋ = J(θ)Δx + O(‖Δx‖²), where J(θ) denotes the Jacobian of the vector field evaluated at xγ(θ). Letting Υ be the fundamental matrix associated with this linear equation [i.e., that solves Δx(T) = ΥΔx(0)], define the left eigenvectors, right eigenvectors, and eigenvalues of Υ as wj, vj, and λj, respectively. Also, define κj = log(λj)/T as the Floquet exponent associated with λj. These Floquet exponents will be sorted so that κN = 0 (corresponding to the λN = 1 eigenvalue of the fundamental matrix) and so that |Re(κj)| ≤ |Re(κj+1)| for all others. Provided λ1 is not a defective eigenvalue,66 the ψ1 isostable coordinate can be defined in the basin of attraction of the periodic orbit as follows:

$$ \psi_1(x) = \lim_{k\to\infty}\left[ w_1^T \left( \chi(t_\Gamma^k, x) - x_0^\Gamma \right) \exp\!\left(-\kappa_1 t_\Gamma^k\right) \right], $$
(8)

where χ(t, x) represents the flow of (5) when u(t) = 0, x0Γ is the intersection of the periodic orbit and the θ = 0 isochron, and tΓk gives the time of the kth transversal of the θ = 0 isochron. Intuitively, isostable coordinates give a sense of the temporal distance from the periodic orbit, with larger isostable coordinates encoding for states that will take longer to relax to the periodic orbit. Isostable coordinates associated with more rapidly decaying Floquet exponents can be defined implicitly as level sets of Koopman eigenmodes. The interested reader is directed to Refs. 35, 70, and 73 for a more detailed discussion of the isostable coordinate paradigm and related reduced order modeling strategies.
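As a small numerical illustration of the Floquet machinery above, the sketch below extracts exponents κj = log(λj)/T from the eigenvalues of a 2×2 monodromy matrix; the matrix in the test is a made-up example, not computed from the neural model.

```python
import cmath

# Floquet exponents kappa_j = log(lambda_j)/T from the eigenvalues of a
# 2x2 monodromy (fundamental) matrix M, computed in closed form from the
# trace and determinant.  M here is an illustrative stand-in.
def floquet_exponents_2x2(M, T):
    (a, b), (c, d) = M
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    lams = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    # sort so the lambda = 1 (kappa = 0) direction comes last, mirroring
    # the convention kappa_N = 0 used in the text
    lams.sort(key=lambda lam: abs(cmath.log(lam).real), reverse=True)
    return [cmath.log(lam) / T for lam in lams]
```

For a limit cycle, one multiplier is always 1 (motion along the orbit), and the remaining exponents measure the decay rate transverse to it.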

This work will employ the adaptive phase-amplitude reduction approach to consider the aggregate dynamical behavior of coupled neural oscillators using an analytically tractable reduced order coordinate basis. Following the adaptive reduction strategy illustrated in Refs. 67 and 68, Eq. (5) can be considered in terms of the behavior of a shadow system

$$ \dot{x} = F(x, p, 0) + U_e(x, p, p_0, u), $$
(9)

where

$$ U_e(x, p, p_0, u) = F(x, p_0, u) - F(x, p, 0). $$
(10)

One can verify that Eq. (9) is mathematically equivalent to Eq. (5). Here, F(x, p, 0) is taken to be the nominal dynamical behavior and Ue is an effective additive input. It will be assumed that for some nominal range of system parameters p, periodic solutions xpγ exist and are continuously differentiable with respect to both θ and p. In reference to each periodic orbit, a generalized phase θ(x, p) and a set of slowly decaying generalized isostable coordinates ψ1(x, p),…,ψβ(x, p) can be considered. Assuming that θ(x, p) and each ψj(x, p) are continuously differentiable in a neighborhood of the limit cycle for all x and p, one can transform to phase and isostable coordinates via the chain rule

$$ \frac{d\theta}{dt} = \frac{\partial\theta}{\partial x}\frac{dx}{dt} + \frac{\partial\theta}{\partial p}\frac{dp}{dt}, \qquad \frac{d\psi_j}{dt} = \frac{\partial\psi_j}{\partial x}\frac{dx}{dt} + \frac{\partial\psi_j}{\partial p}\frac{dp}{dt} $$
(11)

for j = 1,…,β. In the above transformation, the nominal parameter set is allowed to change as a function of time. A detailed description of strategies for the computation of the partial derivatives from (11) is presented in Ref. 68. Evaluation of these terms yields the adaptive reduction

$$\begin{aligned}
\dot{\theta} &= \omega(p) + Z(\theta, p)\, U_e(x, p, p_0, u) + D(\theta, p)\,\dot{p},\\
\dot{\psi}_j &= \kappa_j(p)\,\psi_j + I_j(\theta, p)\, U_e(x, p, p_0, u) + Q_j(\theta, p)\,\dot{p}, \qquad j = 1,\dots,\beta,\\
\dot{p} &= G_p(p, \theta, \psi_1, \dots, \psi_\beta).
\end{aligned}$$
(12)

Above, ω(p) is the natural frequency of the unperturbed periodic orbit xpγ, Z(θ, p) is the gradient of the phase coordinate evaluated on xpγ, D(θ, p) ≡ ∂θ/∂p captures how changes to the adaptive parameter set influence the generalized phase, κj(p) is the Floquet exponent associated with the jth isostable coordinate, Ij(θ, p) is the gradient of the isostable coordinate ψj with respect to the state evaluated on xpγ, Qj(θ, p) ≡ ∂ψj/∂p captures how changes to the adaptive parameter set influence the isostable coordinates, and Gp is a parameter update rule that will be discussed in further detail momentarily. In the transformation from (9) to (12), the most rapidly decaying isostable coordinates (i.e., those with large magnitude, negative Floquet exponents) can be assumed to be zero and neglected from the model dynamics, thereby providing a reduction in dimensionality.

As mentioned earlier, standard phase-based reduction strategies implicitly require that the state remains close to the underlying periodic orbit. The critical advantage of the adaptive reduction is its ability to continuously update the adaptive parameter set p in order to keep the isostable (amplitude) coordinates small, thereby keeping the state close to the periodic orbit xpγ. To accomplish this, Gp must be chosen appropriately. Some general design heuristics for Gp are proposed in Ref. 68. For instance, when only a single non-truncated isostable coordinate is considered and p ∈ R, it often works well in practice to choose

$$ G_p(p, \theta, \psi_1) = -\nu\, \psi_1\, Q_1(\theta, p), $$
(13)

taking ν>0. For higher dimensional reductions, Gp must be designed on an ad hoc basis. Provided an appropriate Gp can be found so that each ψj remains an O(ϵ) term at all times, one can write the state as

$$ x = x_p^\gamma(\theta) + O(\epsilon). $$
(14)

Substituting Eq. (14) into (12) yields a closed reduced order equation that no longer requires direct knowledge of the state x.
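The mechanism behind the update rule (13) can be seen in a toy scalar setting: with ṗ = −ν ψ1 Q1, the isostable dynamics become ψ̇1 = κψ1 + Q1 ṗ = (κ − νQ1²)ψ1, so the amplitude coordinate decays faster than under the nominal Floquet rate alone. All constants below are illustrative, not taken from the paper.

```python
# Toy scalar illustration of the adaptive update rule (13): choosing
# p_dot = -nu * psi_1 * Q_1 strengthens the decay of the isostable
# coordinate from rate kappa to rate (kappa - nu * Q_1**2).
def simulate_adaptive(kappa=-0.1, Q1=1.0, nu=0.5, psi0=1.0, dt=1e-3, steps=5000):
    psi = psi0
    for _ in range(steps):
        p_dot = -nu * psi * Q1            # parameter update rule (13)
        psi += dt * (kappa * psi + Q1 * p_dot)
    return psi

def simulate_nominal(kappa=-0.1, psi0=1.0, dt=1e-3, steps=5000):
    # same Floquet decay without the adaptive parameter update
    psi = psi0
    for _ in range(steps):
        psi += dt * kappa * psi
    return psi
```

Over the same time horizon, the adaptive run decays at an effective rate κ − νQ1² = −0.6 rather than κ = −0.1.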

Here, using the adaptive phase-amplitude reduction framework in conjunction with formal averaging techniques, we illustrate that the general model of form (1) with an externally applied current of the form (2) can be accurately represented according to a low-dimensional set of autonomous ordinary differential equations. Ultimately, this reduced order model will be used to investigate the influence of fζ and fμ from (2) on the efficacy of the control law.

In the absence of coupling, input, and noise, we assume that each neuron from (1) has a strongly attracting periodic orbit with unperturbed period Tn so that each neuron can be represented as a limit cycle oscillator. Using the phase reduction from (7), the equations describing (1) become

$$ \dot{\theta}_i^n = \omega_n + z(\theta_i^n)\left[ I_{\rm stim}(t) + \sqrt{2D}\,\eta_i(t) - \frac{g_0}{N}\sum_{j=1}^{N}\left[ s\big(\theta_j^n - 2\pi\tau/T_n\big)\,\big(V(\theta_i^n) - E_{\rm syn}\big) \right] \right] $$
(15)

for i = 1,…,N. Here, θjn is the phase of the jth neuron. Because the periodic orbit is strongly attracting, the functions V(θin) and s(θin) are used to represent the transmembrane voltage and synaptic variable associated with the ith neuron as a function of the phase.22 Additionally, ωn = 2π/Tn. As part of the transformation to phase coordinates, it is assumed that D is small enough so that terms proportional to the square of the noise intensity can be truncated26 (cf. Ref. 2). In the formulation above, g0 and Istim are assumed to be small enough so that si(t − τ) is well approximated by s(θin − 2πτ/Tn) for all i. Note that in this work, we use the deterministic definition of phase from (6), which is defined for each neural oscillator with respect to the stable limit cycle in the absence of noise and other inputs. We emphasize that alternative formulations of phase are possible (see, for instance, Refs. 54 and 58) that directly incorporate the effects of noise.
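A simulation in the spirit of the phase model (15) is sketched below. A Kuramoto-type sinusoidal coupling stands in for the thalamic PRC and synaptic waveform, and all constants are illustrative; the point is only that an all-to-all coupled, weakly noisy population of identical phase oscillators transitions from incoherence to synchrony, mirroring Fig. 1.

```python
import cmath
import math
import random

# Mean-field caricature of the coupled phase model (15): N identical phase
# oscillators with Kuramoto-type all-to-all coupling and weak noise.  The
# sinusoidal coupling is an illustrative stand-in for the thalamic-model terms.
def simulate_phases(N=200, omega=1.0, K=0.5, D=0.01, dt=0.01,
                    steps=2000, seed=3):
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    for _ in range(steps):
        Z = sum(cmath.exp(1j * th) for th in theta) / N   # mean field
        R, Psi = abs(Z), cmath.phase(Z)
        theta = [
            (th + dt * (omega + K * R * math.sin(Psi - th))
             + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)) % (2.0 * math.pi)
            for th in theta
        ]
    return theta

def coherence(theta):
    # Kuramoto order parameter of the final phase configuration
    return abs(sum(cmath.exp(1j * th) for th in theta)) / len(theta)
```

With coupling well above the noise level, an initially incoherent population locks and the order parameter grows toward 1.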

Provided that the population of neurons is large enough, Eq. (15) can be described by a probability density ρ(θn, t) that evolves according to the Fokker–Planck equation in the Itô formulation,13

$$\begin{aligned}
\frac{\partial\rho}{\partial t} &= -\frac{\partial}{\partial\theta_n}\Big[ \Big( \omega_n + z(\theta_n)\big[ I_{\rm stim}(t) - g_0 \bar{s}\,\big(V(\theta_n) - E_{\rm syn}\big) \big] \Big)\, \rho(\theta_n, t) \Big] + \frac{\partial^2}{\partial\theta_n^2}\Big[ D z^2(\theta_n)\, \rho(\theta_n, t) \Big]\\
&= \underbrace{ -\omega_n \rho_\theta + \frac{\partial^2}{\partial\theta_n^2}\big[ D z^2(\theta_n)\, \rho \big] + \frac{\partial}{\partial\theta_n}\big[ z(\theta_n)\, g_0 \bar{s}\, \big(V(\theta_n) - E_{\rm syn}\big)\, \rho \big] }_{\rm Internal\ Dynamics} \;-\; \underbrace{ \big[ z(\theta_n)\,\rho_\theta + z_\theta\, \rho \big]\, I_{\rm stim}(t) }_{\rm External\ Input},
\end{aligned}$$
(16)

where

$$ \bar{s} = \int_0^{2\pi} s\big(\theta_n - 2\pi\tau/T_n\big)\, \rho(\theta_n, t)\, d\theta_n, $$
(17)

and ρθ and zθ denote the partial derivatives of ρ and z taken with respect to θn. Because θn ∈ S¹, the boundary conditions of this partial differential equation (PDE) are periodic. Equation (17) is analogous to Eq. (3) and is valid when considering probability distributions instead of a finite population of neurons. For all simulations of (16), we will take τ = 1.68 ms. This value is chosen so that the coupling yields phase cohesive solutions in the absence of input. Toward a formulation using the adaptive reduction strategy described in Sec. II C, we distinguish between the internal dynamics and the external input, separating terms that depend on Istim(t). The PDE model (16) will be further analyzed using a finite difference approximation with 100 total grid points for the θn axis. Letting xθn ∈ R^100 represent the discretized solution of the phase distribution, after discretization the dynamics of Eq. (16) can be approximated according to

$$ \dot{x}_{\theta_n} = F_{\theta_n}(x_{\theta_n}, g_0) - I_{\rm stim}(t)\left[ {\rm diag}(z_d)\, A + {\rm diag}(A z_d) \right] x_{\theta_n}, $$
(18)

where zd ∈ R^100 is a discretized version of z(θn), Fθn represents the resulting dynamics that are solely a function of the state and coupling strength, A is a matrix that implements the finite difference approximation of the derivative of ρ(θn, t) with respect to θn, and diag(·) takes a vector as its argument and returns a diagonal matrix of compatible size.
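The action of the input operator diag(zd)A + diag(Azd) from Eq. (18) can be sketched as follows; A is built as a periodic central-difference approximation of d/dθn, and the grid size is left as an argument (the text uses 100 points).

```python
import math

# Sketch of the discretized input operator from Eq. (18): the action of
# B = diag(z_d) A + diag(A z_d) on a state vector, where A is a periodic
# central-difference approximation of d/dtheta_n on a uniform grid.
def central_diff_matrix(n):
    dtheta = 2.0 * math.pi / n
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][(i + 1) % n] = 1.0 / (2.0 * dtheta)    # periodic boundary
        A[i][(i - 1) % n] = -1.0 / (2.0 * dtheta)
    return A

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def apply_input_operator(zd, x):
    A = central_diff_matrix(len(x))
    Ax, Az = matvec(A, x), matvec(A, zd)
    return [zd[i] * Ax[i] + Az[i] * x[i] for i in range(len(x))]
```

For constant zd the second term vanishes and the operator reduces to zd · d/dθn, so applying it to x = sin θ should return approximately cos θ.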

Equation (18) is in the same general form as Eq. (5) [with Istim(t) taking the place of u(t)], allowing for the implementation of the adaptive phase-amplitude reduction strategy described in Sec. II C. Here, we will take the synaptic coupling strength to be the adaptive parameter, allowing Eq. (18) to be analyzed in terms of the shadow system

$$\begin{aligned}
\dot{x}_{\theta_n} &= F_{\theta_n}(x_{\theta_n}, g) + U_e\big(x_{\theta_n}, g, g_0, I_{\rm stim}(t)\big),\\
U_e\big(x_{\theta_n}, g, g_0, I_{\rm stim}(t)\big) &= F_{\theta_n}(x_{\theta_n}, g_0) - I_{\rm stim}(t)\left[ {\rm diag}(z_d)\, A + {\rm diag}(A z_d) \right] x_{\theta_n} - F_{\theta_n}(x_{\theta_n}, g),
\end{aligned}$$
(19)

where the coupling strength g ∈ [0.225, 0.455] mS/cm² is taken to be the adaptive parameter. For each value of g considered, the model has a stable periodic orbit xgγ with a period ranging from 9.47 to 13.57 ms. Here, the variable θ will be used to denote the population phase that describes the timing of the population-level oscillations. Note that θ, the phase of the population oscillation, is distinct from θn, the phase of an individual oscillator with respect to its unperturbed limit cycle. Each periodic orbit, xgγ(θ), will be aligned so that θ = 0 corresponds to the moment that s̄ reaches its maximum value on the limit cycle. Implementing the adaptive reduction strategy discussed in Sec. II C and considering the equations governing the applied stimulus from (2), the dynamics of Eq. (19) can be represented according to

$$\begin{aligned}
\dot{\theta} &= \omega(g) + Z(\theta, g)\, U_e + D(\theta, g)\,\dot{g},\\
\dot{\psi}_1 &= \kappa(g)\,\psi_1 + I_1(\theta, g)\, U_e + Q_1(\theta, g)\,\dot{g},\\
\dot{g} &= -\nu\, Q_1(\theta, g)\,\psi_1,\\
\dot{\zeta} &= \epsilon f_\zeta(\theta, g, \mu),\\
\dot{\mu} &= \epsilon f_\mu(\theta, g, \mu),
\end{aligned}$$
(20)

with Istim=μfs(t+ζTs/2π) and ν=0.1. Above, the structure of fζ and fμ has been rewritten to depend on the reduced order model parameters θ, g, and μ rather than on time itself. Additionally, it is assumed that all but one of the isostable coordinates decay rapidly and can be ignored. Per Eq. (14), the associated probability distribution can be approximated in terms of the reduced order model parameters according to

$$ \rho(\theta_n, g) = x_g^\gamma(\theta). $$
(21)

Note that the parameter update rule for g˙ from Eq. (20) matches the general strategy from (13) that serves to drive ψ1 to smaller magnitude values. Figure 2 shows various characteristics of the reduced order model from (20). Panel (a) shows ρ(θn, g) plotted at θ = 0 for various values of g on the appropriate limit cycles. As g decreases, the coherence between the oscillators decreases, resulting in a less synchronous population oscillation. Panel (b) shows the magnitudes of the principal Floquet multiplier (blue line) as well as the secondary and tertiary complex-conjugate Floquet multipliers. The relatively fast decay of the complex-conjugate eigenmodes allows model (19) to be represented using a single isostable coordinate. Taking g0 = 0.35 mS/cm², panel (c) shows the evolution of the adaptive parameter over time in response to the periodic input Istim(t) = 1.2 sin(2πt/5). Panel (d) compares two sample probability distributions associated with the reduced order Eq. (20) and the full order Eq. (18) with solid and dashed lines, respectively, at two separate times. Times associated with larger (respectively, smaller) values of g are associated with more (respectively, less) phase cohesive oscillations. Panel (e) shows that the isostable coordinate ψ1 does indeed remain small over time, as required by the adaptive reduction framework, and panel (f) shows the magnitude of the Kuramoto order parameter,

$$ r(t) = \left| \int_0^{2\pi} e^{i\theta_n}\, \rho(\theta_n, t)\, d\theta_n \right|, $$
(22)

computed both from the output of the reduced order model (black line) and the full model simulations (dashed line). Compared to Eq. (4), Eq. (22) is valid in the limit that N tends toward infinity. Overall, we observe good agreement between the full order and reduced order simulations.
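For a phase density discretized on a uniform grid, as in the finite-difference representation of Eq. (18), the integral in (22) can be approximated by a simple Riemann sum:

```python
import cmath
import math

# Riemann-sum approximation of the order parameter (22) for a phase
# density rho sampled on a uniform grid over [0, 2*pi).
def order_parameter_density(rho):
    n = len(rho)
    dtheta = 2.0 * math.pi / n
    total = sum(r * cmath.exp(1j * k * dtheta) for k, r in enumerate(rho))
    return abs(total * dtheta)
```

A uniform (splay) density returns r = 0, while a unit-mass spike concentrated in a single bin returns r = 1.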

FIG. 2.

Illustration of relevant terms from the reduced order model (20) and a comparison with full order simulations of (18). Panel (a) shows ρ(θn,g) associated with xgγ evaluated at θ=0 for various values of g. Plots of the magnitudes of the Floquet multipliers associated with the slowest decaying Floquet eigenmodes are shown in panel (b). Panels (c)–(f) show simulation results when a periodic input of Istim(t)=1.2sin(2πt/5) is applied to both the full and reduced order models. Panel (c) shows the evolution of the adaptive parameter over time with the dashed line highlighting the nominal value of g0. For these simulations, panel (d) shows two snapshots of the phase distribution for the output from the reduced order model (solid lines) and the full order model (dashed lines). Panel (f) shows the time course of the Kuramoto order parameter for the full and reduced order models. As shown in panel (e), the value of the isostable coordinate remains small during the course of the simulation, as required by the adaptive reduction strategy.


In Sec. III, we illustrated that the evolution of the probability distribution of phases within a large population of coupled neural oscillators can be represented in terms of a low order set of equations of form (20) by using the adaptive phase-amplitude reduction paradigm. By recognizing that Ue from Eq. (20) is periodic in t, we illustrate that a further reduction in dimensionality can be realized using formal averaging techniques.16,51

To begin, recalling from Eq. (2) that Ts is the nominal period of the input fs, we will work in a rotating reference frame defining

$$ \phi \equiv \theta - \Omega t - \zeta, $$
(23)

where Ω ≡ 2π/Ts. Note that we have incorporated ζ in the definition of our rotating reference frame, which explicitly accounts for the slowly varying phase offset of the applied input.

Working in this rotating reference frame, Eq. (20) becomes

$$\begin{aligned}
\dot{\phi} &= \omega(g) - \Omega + Z(\phi+\Omega t+\zeta, g)\, U_e + D(\phi+\Omega t+\zeta, g)\,\dot{g} - \epsilon f_\zeta(\phi+\Omega t+\zeta, g, \mu),\\
\dot{\psi}_1 &= \kappa(g)\,\psi_1 + I_1(\phi+\Omega t+\zeta, g)\, U_e + Q_1(\phi+\Omega t+\zeta, g)\,\dot{g},\\
\dot{g} &= -\nu\, Q_1(\phi+\Omega t+\zeta, g)\,\psi_1,\\
\dot{\zeta} &= \epsilon f_\zeta(\phi+\Omega t+\zeta, g, \mu),\\
\dot{\mu} &= \epsilon f_\mu(\phi+\Omega t+\zeta, g, \mu).
\end{aligned}$$
(24)

Recall from Eq. (19) that Ue(xgγ(θ), g, g0, Istim(t)) is periodic in time with period Ts. We will assume that Ue, ψ1, and ω(g) − Ω are O(ϵ) terms so that Eq. (24) can be written in the general form x˙ = ϵFR(x,t), where x = [ϕ, ψ1, g, ζ, μ]T and FR gives the dynamics and is periodic in t. For such a system, the method of averaging16,51 can be used to approximate the behavior of (24) according to

ϕ˙ = ω(g) − Ω + ϵA1(ϕ,g,μ) − νψ1A2(ϕ,g) − ϵA3(ϕ,g,μ),
ψ˙1 = (κ(g) − νA5(ϕ,g))ψ1 + ϵA4(ϕ,g,μ),
g˙ = −νψ1A6(ϕ,g),
ζ˙ = ϵA3(ϕ,g,μ),
μ˙ = ϵA7(ϕ,g,μ),
(25)

where

A1(ϕ,g,μ) = (1/(ϵTs)) ∫0Ts [Z(ϕ+Ωt+ζ, g)Ue(xgγ(ϕ+Ωt+ζ), g, g0, μfs(t+ζ/Ω))] dt,
A2(ϕ,g) = (1/Ts) ∫0Ts [D(ϕ+Ωt+ζ, g)Q1(ϕ+Ωt+ζ, g)] dt,
A3(ϕ,g,μ) = (1/Ts) ∫0Ts [fζ(ϕ+Ωt+ζ, g, μ)] dt,
A4(ϕ,g,μ) = (1/(ϵTs)) ∫0Ts [I1(ϕ+Ωt+ζ, g)Ue(xgγ(ϕ+Ωt+ζ), g, g0, μfs(t+ζ/Ω))] dt,
A5(ϕ,g) = (1/Ts) ∫0Ts [Q12(ϕ+Ωt+ζ, g)] dt,
A6(ϕ,g) = (1/Ts) ∫0Ts [Q1(ϕ+Ωt+ζ, g)] dt,
A7(ϕ,g,μ) = (1/Ts) ∫0Ts [fμ(ϕ+Ωt+ζ, g, μ)] dt.
(26)

Using the averaging framework, Eq. (25) provides a close approximation to the unaveraged system of equations from (24). Note that the terms A1 through A7 do not explicitly depend on ζ, which shifts the periodic terms of each integrand but does not influence the result when integrating over the entire period. Also, A1 through A7 are assumed to be O(1) terms; even though A1 and A4 are multiplied by 1/ϵ, the term Ue in the integrand is an O(ϵ) term.
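As a concrete illustration of how an averaged coefficient of the form (26) can be evaluated numerically, the sketch below averages a Ts-periodic integrand over one forcing period on a uniform grid. The functions Z and Ue here are toy stand-ins chosen for illustration, not terms of the model considered in this paper.

```python
import numpy as np

def averaged_coefficient(integrand, Ts, n=2000):
    """Average a Ts-periodic integrand over one forcing period, as in Eq. (26).
    A uniform-grid mean is spectrally accurate for smooth periodic integrands."""
    t = np.linspace(0.0, Ts, n, endpoint=False)
    return np.mean(integrand(t))

# Toy stand-ins (hypothetical, not the paper's model terms):
Ts = 10.0                 # forcing period
Omega = 2*np.pi/Ts
phi, zeta, eps = 1.0, 0.0, 0.1

Z  = lambda th: 1.0 + 0.5*np.cos(th)   # hypothetical phase response curve
Ue = lambda t: 0.1*np.sin(Omega*t)     # hypothetical O(eps) effective input

# An A1-type term: (1/(eps*Ts)) * integral of Z(phi + Omega*t + zeta) * Ue(t) dt
A1 = averaged_coefficient(lambda t: Z(phi + Omega*t + zeta)*Ue(t), Ts) / eps
```

For these toy choices the average can be done in closed form (A1 = −0.25 sin ϕ), which provides a quick correctness check of the quadrature.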

Further simplification of Eq. (25) is possible provided ν is chosen large enough so that ψ1 rapidly converges to its steady state value. To this end, define

ψss(ϕ,g,μ) ≡ −ϵA4(ϕ,g,μ)/(κ(g) − νA5(ϕ,g))
(27)

as the steady state value of ψ1 that results when ϕ, g, ζ, and μ are held fixed. Recognizing that the right-hand sides of Eq. (25) are composed of O(ϵ) terms, ϕ˙, g˙, μ˙, and ζ˙ are also O(ϵ) terms. Taking the time derivative of ψss, one can show that

dψss/dt = O(ϵ2).
(28)

Next, defining Δψ1 ≡ ψ1 − ψss and taking the time derivative, one finds

Δψ˙1 = (κ(g) − νA5(ϕ,g))ψ1 + ϵA4(ϕ,g,μ) − O(ϵ2)
= (κ(g) − νA5(ϕ,g))(ψ1 − ψss + ψss) + ϵA4(ϕ,g,μ) − O(ϵ2)
= (κ(g) − νA5(ϕ,g))Δψ1 + O(ϵ2).
(29)

Because all periodic orbits used in the adaptive reduction are stable, κ(g)<0. Likewise, A5(ϕ,g)>0 since the integrand from (26) is non-negative, and ν>0 by definition. Considering that κ(g) − νA5(ϕ,g) = O(1) and strictly negative, provided that Δψ1 = O(ϵ2) at t=0, Δψ1 remains O(ϵ2) uniformly bounded in time and, consequently, ψ1 = ψss can be used as an approximation that is valid to O(ϵ2). This allows for the elimination of the ψ1 dynamics from Eq. (25) to yield

ϕ˙ = ω(g) − Ω + ϵA1(ϕ,g,μ) − νψssA2(ϕ,g) − ϵA3(ϕ,g,μ) + O(ϵ2),
g˙ = −νψssA6(ϕ,g) + O(ϵ2),
ζ˙ = ϵA3(ϕ,g,μ),
μ˙ = ϵA7(ϕ,g,μ).
(30)

Finally, provided |νA5(ϕ,g)| ≫ |κ(g)|, one can write

ψss = ϵ(A4(ϕ,g,μ)/(νA5(ϕ,g)) + O(κ(g)/ν2)).
(31)

Substituting (31) into Eq. (30) finally yields

ϕ˙ = H1(ϕ,g,μ) − ϵA3(ϕ,g,μ) + O(ϵ2) + O(ϵκ(g)/ν),
g˙ = H2(ϕ,g,μ) + O(ϵ2) + O(ϵκ(g)/ν),
ζ˙ = ϵA3(ϕ,g,μ),
μ˙ = ϵA7(ϕ,g,μ),
(32)

where

H1(ϕ,g,μ) = ω(g) − Ω + ϵA1(ϕ,g,μ) − ϵA4(ϕ,g,μ)A2(ϕ,g)/A5(ϕ,g),
H2(ϕ,g,μ) = −ϵA4(ϕ,g,μ)A6(ϕ,g)/A5(ϕ,g).
(33)

Equation (32) will be used to investigate the response of the neural oscillator population to the near-periodic input described by Eq. (2). In the averaged setting, the phase offset and the magnitude of the input can be modulated slowly through the functions A3 and A7, respectively. Strategies for designing these inputs will be considered momentarily. Note that in situations where ζ˙=μ˙=0, i.e., when purely periodic inputs are applied, only two state variables are required to represent the perturbed dynamics. We emphasize that in the reduction from Eq. (25) to Eq. (32), the term ν disappears provided that |νA5(ϕ,g)| is sufficiently larger than |κ(g)| for all choices of ϕ and g.

In Fig. 3, the temporal evolution of ϕ is compared for simulations of the averaged model equations from (32) and the unaveraged model equations from (20) when using two different periodic sinusoidal inputs. In this example, we take ζ=ζ˙=μ˙=0 so that the input is fully periodic. Note here that for the input Istim(t)=1.2sin(2πt/5), when computing the averaged equations, Ω is taken to be 2π/10 in order to keep the magnitude of ω(g) − Ω small. After computing the averaged functions A1–A7 using (26), ϕ˙ (respectively, g˙) computed according to Eq. (32) is shown in panels (a) and (d) [respectively, (b) and (e)] for the two different stimuli. Overall, the averaged equations from (32) accurately represent the unaveraged dynamics from Eq. (20), with results of individual simulations shown in panels (c) and (f). Nullclines are plotted with red and green dashed lines to highlight the salient dynamical behavior.

FIG. 3.

For the input Istim(t)=1.2sin(2πt/5), panels (a) and (b) show ϕ˙ and g˙, respectively, computed according to the averaged equations from (32). Panel (c) shows traces of the averaged equation (blue line) and the unaveraged equations (black line). A nullcline of g˙ associated with the averaged equations is plotted in red. The magnitude of g, and, hence, the level of coherence between oscillators, grows and shrinks depending on whether the state is above or below the g˙=0 nullcline. There are no nullclines associated with ϕ˙ for the example shown in panel (c). Panels (d)–(f) show similar information when using Istim(t)=0.3sin(2πt/11). For this input, the system stably entrains to Istim(t); this entrained solution corresponds to a stable fixed point of the averaged equations from (32).



Results in Figs. 2 and 3 illustrate that both the reduced order equations obtained using the adaptive reduction strategy (20) and the averaged equations from (32) provide a good approximation to the dynamics of (16). In Sec. IV, we will use these reduced order equations to develop control strategies for manipulating the population-level oscillations with the goal of disrupting synchronization.

The results from Fig. 3 correspond to a situation where fζ and fμ from Eq. (2) are zero, that is, where the applied input is purely periodic. As we will illustrate here, by slowly changing the magnitude of the input and the relative offset of the rotating reference frame, μ and ζ, respectively, the cohesion between oscillators from (1) can be effectively manipulated.

Our goal is to identify functions fζ and fμ from Eq. (20) that can shift the state of the adaptive parameter in the reduced order equations from (20) from g=g0 to some g=gtarg<g0. By shifting g to smaller values, the resulting phase distribution will be less synchronized. We emphasize that in the implementation of the control algorithm, the actual strength of the synaptic coupling does not change. Rather, the goal is to apply input using Istim(t) that drives the system to less synchronized states that are near the periodic orbit associated with the parameter gtarg.

Intuitively, toward the identification of an appropriate control strategy, first consider the g˙ = 0 nullcline from panel (c) of Fig. 3 obtained for the averaged equations (32). Referring to this nullcline as g = fn(ϕ), for any gtarg in the range of fn(ϕ), the control objective could be satisfied provided that ϕ could be set to some value ϕtarg for which gtarg = fn(ϕtarg). For instance, considering the g˙ = 0 nullcline from panel (c) of Fig. 3, if a control strategy can be found that holds ϕ at a constant value of 5 rad, g would tend toward 0.28. Likewise, considering the nullclines from panel (f) of Fig. 3, by actively controlling ϕ to values between 4.7 and 5.6 rad, g can be driven to any desired value less than g0. In this case, however, the magnitude of the input, μ, would need to be decreased as g is driven near its lower limit. With this in mind, we will consider a control strategy that seeks to drive g and ϕ to gtarg and ϕtarg, respectively. This control strategy will be implemented by mandating the following update rules for ζ and μ from Eq. (20):

ϵfζ(θ) = K1(θ − Ωt − ζ − ϕtarg),
ϵfμ(g,μ) = K2(g − gtarg) + K3∫0t(g − gtarg)dτ − K4μ,
(34)

where K1–K4 are appropriately chosen positive constants. Here, in the averaged reference frame, i.e., using (32), this choice of fζ and fμ yields the relationships

ϕ˙ = H1(ϕ,g,μ) − K1(ϕ − ϕtarg),
g˙ = H2(ϕ,g,μ),
ζ˙ = K1(ϕ − ϕtarg),
μ˙ = K2(g − gtarg) + aI − K4μ,
a˙I = K3(g − gtarg),
(35)

where aI is an extra variable used to represent the necessary terms from the integral control. Intuitively, in the above equations, the term K1(ϕ − ϕtarg) acts as a proportional controller that serves to update the offset of the rotating reference frame, ζ, in order to manipulate ϕ. The term K2(g − gtarg) + aI serves as a proportional–integral controller that increases or decreases the magnitude of the applied input depending on whether g is above or below its target value. The additional term, −K4μ, is included to influence the stability of the controller. Of course, the efficacy of the controller (34) depends on an adequate choice for ϕtarg and gtarg. These considerations will be discussed in the results to follow.
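A minimal sketch of how the closed-loop averaged equations (35) might be advanced in time with a forward Euler step. The functions H1 and H2 are placeholders to be supplied by a model or estimated from data; the toy closed loop at the bottom (H1 = 0, H2 proportional to −μ) is purely illustrative and not the paper's model.

```python
import numpy as np

def controller_step(state, H1, H2, params, dt):
    """One forward-Euler step of the closed-loop averaged equations (35).
    H1 and H2 are callables (phi, g, mu) -> float supplied by the user."""
    phi, zeta, mu, aI, g = state
    K1, K2, K3, K4, phi_targ, g_targ = params
    dphi  = H1(phi, g, mu) - K1*(phi - phi_targ)   # proportional control of phi
    dzeta = K1*(phi - phi_targ)                    # offset of the rotating frame
    dmu   = K2*(g - g_targ) + aI - K4*mu           # PI control of input magnitude
    daI   = K3*(g - g_targ)                        # integral state
    dg    = H2(phi, g, mu)
    return (phi + dt*dphi, zeta + dt*dzeta, mu + dt*dmu, aI + dt*daI, g + dt*dg)

# Toy closed loop: stimulation magnitude mu reduces the synchrony proxy g.
H1 = lambda phi, g, mu: 0.0            # hypothetical
H2 = lambda phi, g, mu: -0.05*mu       # hypothetical
params = (4.0, 35.0, 1.0, 0.2, 5.0, 0.30)   # K1, K2, K3, K4, phi_targ, g_targ
state = (0.0, 0.0, 0.0, 0.0, 0.35)          # phi, zeta, mu, aI, g
for _ in range(2000):
    state = controller_step(state, H1, H2, params, dt=0.01)
```

With these toy dynamics, ϕ converges to ϕtarg and the PI terms drive g toward gtarg, mirroring the intended roles of the terms described above.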

Implementation of the control strategy proposed in Eq. (35) requires knowledge of the functions H1(ϕ,g,μ) and H2(ϕ,g,μ). If an underlying model of form (20) is known, these equations can be identified by first directly evaluating the integrals from Eq. (26) and then computing the necessary terms from Eq. (33). In the absence of an underlying dynamical model, the required functions H1(ϕ,g,μ) and H2(ϕ,g,μ) can still be estimated in real-time from observable data using a strategy described below.

Consider a general system of form (5) with a set of reduced order, averaged equations that take the form (32). As was the case for the reduction of the noisy oscillator population from (15), assume that θ=0 can be clearly identified from measured data [for instance, θ=0 for a given periodic orbit from Eq. (16) corresponds to the moment that s¯ reaches a local maximum]. Consider two successive traversals of the θ=0 level set, occurring at t1 and t2. The associated coordinates in the rotating reference frame defined by Eq. (23) are

ϕ(t1) = mod(−Ωt1 − ζ(t1), 2π),
ϕ(t2) = mod(−Ωt2 − ζ(t2), 2π).
(36)

Toward identifying a strategy to estimate the unknown functions H1 and H2 from observable data, one can rewrite Eq. (32) as

ϕ˙ = H1(ϕ(t2), g(t2), μ(t2)) − ϵA3(ϕ,g,μ) + O(ϵ2) + O(ϵκ(g)/ν),
g˙ = H2(ϕ(t2), g(t2), μ(t2)) + O(ϵ2) + O(ϵκ(g)/ν),
ζ˙ = ϵA3(ϕ,g,μ),
μ˙ = ϵA7(ϕ,g,μ).
(37)

Above, the arguments of H1 and H2 are evaluated at the fixed time t2. Because ϕ˙, g˙, μ˙, H1, and H2 are O(ϵ) terms, fixing these arguments in the evaluation of H1 and H2 only contributes an additional O(ϵ2) error when t ∈ [t1, t2]. Neglecting the O(ϵ2) and O(ϵκ(g)/ν) terms, direct integration of Eq. (37) yields

H1(ϕ(t2), g(t2), μ(t2)) ≈ (ϕ(t2) − ϕ(t1) + ζ(t2) − ζ(t1))/(t2 − t1),
H2(ϕ(t2), g(t2), μ(t2)) ≈ (g(t2) − g(t1))/(t2 − t1).
(38)

Equation (38) can be used to obtain successive, real-time estimates of H1 and H2. For instance, when implementing the control strategy suggested by Eq. (35), ζ(t) is known exactly at all times because it is part of the controller; ϕ(t1) and ϕ(t2) can be computed according to (23) at successive crossings of the θ=0 level set; and g(t1) and g(t2) can be inferred from output data. Using this strategy, a reduced order model does not need to be computed before implementing the control strategy described by Eq. (35). Rather, the necessary terms of the reduced order equations can be computed in real-time as they are needed.

As a matter of practical implementation, H1 and H2 can be updated once per cycle and are assumed to be constant between updates. In order to mitigate small errors that would otherwise accumulate over time, the reduced order variables ϕ and g can also be updated once per cycle to reflect the measurements from data. The approximations from Eq. (38) require the right hand side of Eq. (32) to be small. As such, the accuracy of these approximations will degrade as larger magnitude inputs are used.
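The per-cycle estimates in Eq. (38) amount to finite differences between successive θ=0 crossings. A minimal sketch follows; the phase-unwrapping step is an implementation detail assumed here rather than spelled out in the text.

```python
import numpy as np

def estimate_H(phi1, phi2, zeta1, zeta2, g1, g2, t1, t2):
    """Finite-difference estimates of H1 and H2 (Eq. (38)) between two
    successive crossings of the theta = 0 level set at times t1 < t2."""
    dphi = phi2 - phi1 + zeta2 - zeta1
    dphi = (dphi + np.pi) % (2*np.pi) - np.pi   # unwrap to [-pi, pi)
    return dphi/(t2 - t1), (g2 - g1)/(t2 - t1)
```

Each pair of estimates would then be held fixed until the next crossing, consistent with the once-per-cycle update described above.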

The model independent control strategy suggested in Sec. IV B relies on the estimation of the terms from Eq. (38) from observables and subsequent implementation of the controller (35). A step-by-step procedure for accomplishing this task is provided below.

  • (Step 1)

    Identify an observable y(t) ∈ ℝ to be used for the implementation of the control strategy. This observable should have clearly identifiable oscillations. Define θ=0 to correspond to a distinct feature of the oscillations that occurs once per cycle, for instance, the crossing of a Poincaré section or the attainment of a local maximum.

  • (Step 2)

    Defining t1 and t2 to be the times of successive crossings of the θ=0 level set, define a map from the observation of y(t) on the interval [t1, t2] to a variable g(t2) that encodes the oscillation amplitude. For instance, one could take g(t2) = max(y(t)) − min(y(t)) on the interval t ∈ [t1, t2].

  • (Step 3)

    Choose a periodic stimulation Istim(t) to be applied to the system. The period of Istim(t) must be close to the period of oscillation of the observable chosen in step 1.

  • (Step 4)

    Choose a set of constants K1, K2, K3, K4, and ϕtarg for the controller from Eq. (35). In a model independent setting, this will generally be a trial and error process. One can start with ϕtarg and K1 to identify choices of ϕtarg that effectively decrease the magnitude of oscillations. The terms K2, K3, and K4 can subsequently be tuned to appropriately modulate the stimulus magnitude.

  • (Step 5)

    Choose a profile for gtarg(t), which sets the target for the oscillation amplitude.

  • (Step 6)

    Implement the controller from Eq. (35). On a cycle-by-cycle basis, estimates of H1 and H2 can be updated according to Eq. (38), ϕ(t1) and ϕ(t2) can be computed according to (23) at successive crossings of the θ=0 level set, and g(t1) and g(t2) can be inferred from the mapping defined in Step 2.

  • (Step 7)

    Adjust the constants K1, K2, K3, K4, and ϕtarg as necessary. Performance is generally less sensitive to the choice of these constants when gtarg is close to the nominal value of g that results in the absence of Istim. As gtarg is decreased (i.e., as the target oscillation amplitude decreases), parameter adjustments may be necessary to yield a viable controller.
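The steps above can be sketched as a single cycle-by-cycle loop. The "plant" below is a toy one-variable amplitude oscillator whose response to stimulation is an assumption made purely for illustration; the function name, gains, and plant dynamics are all hypothetical, not the paper's model.

```python
import numpy as np

def run_controller(n_cycles=200, Ts=10.0, dt=0.01,
                   K1=4.0, K2=1.0, K3=0.05, K4=0.2,
                   phi_targ=5.0, g_targ=0.2):
    Omega = 2*np.pi/Ts
    th, a = 0.0, 1.0             # toy plant: oscillation phase and amplitude
    zeta, mu, aI = 0.0, 0.0, 0.0
    t, g_hist = 0.0, []
    for _ in range(n_cycles):
        t_prev = t
        # advance the plant one cycle; stimulation damps the amplitude while
        # coupling regrows it (a stand-in for the true population response)
        while True:
            th += dt*Omega
            a  += dt*(0.05*(1.0 - a)*a - 0.02*mu*a)
            t  += dt
            if th >= 2*np.pi:    # theta = 0 crossing, once per cycle (Step 1)
                th -= 2*np.pi
                break
        g = a                                   # Step 2: amplitude observable
        phi = (-Omega*t - zeta) % (2*np.pi)     # rotating-frame phase, Eq. (36)
        dT = t - t_prev
        # Step 6: cycle-by-cycle controller updates in the spirit of Eq. (35)
        zeta += dT*K1*(phi - phi_targ)
        mu    = max(0.0, mu + dT*(K2*(g - g_targ) + aI - K4*mu))
        aI   += dT*K3*(g - g_targ)
        g_hist.append(g)
    return g_hist

g_hist = run_controller()
```

The clamp mu = max(0, ...) is an added practical safeguard (stimulation magnitude cannot be negative); it is not part of Eq. (35).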

Here, we present results for the implementation of the control strategy from Sec. IV applied both to the Fokker–Planck model for the temporal evolution of the probability distribution of a large population of neurons from Eq. (16) and to a full model of N=1000 coupled neurons from Eq. (1). As described in Sec. III B, the Fokker–Planck model can be represented in a reduced order form using the adaptive reduction from Eq. (20). Consequently, the terms H1(ϕ,g,μ) and H2(ϕ,g,μ) can be computed directly from the terms of the reduced order equations using the averaging framework. For the full model of N=1000 coupled neurons, direct computation of H1 and H2 is not possible, and the real-time estimation strategy from Sec. IV B will be used.

Here, we consider results of the control strategy from Eq. (35) applied to modify the phase cohesion of the Fokker–Planck model from Eq. (16). Starting with Eq. (16), the terms of the adaptive reduction from (20) are first computed taking g0 = 0.35 mS/cm2. Recall that for the allowable range of coupling strengths g ∈ [0.225, 0.455], the associated periodic orbits have periods T ∈ [9.47, 13.57] ms. For this application, we consider the nominal input to be a periodic series of pulses,

Istim(t + ζTs/(2π)) = μ, if mod(t + ζTs/(2π), Ts) < 6 and mod(t + ζTs/(2π), 1) < 0.5;
Istim(t + ζTs/(2π)) = 0, otherwise.
(39)

Here, Ts = 10 ms, and the input is a series of six pulses, shown taking μ=1 in panel (b) of Fig. 4. Using this input, the terms of the averaged, reduced order equation of the form (32) are determined by directly evaluating the integrals from Eq. (26) and using them to compute the relationships from Eq. (33). Panel (a) of Fig. 4 shows level sets of H2(ϕ,g,μ)=0 at different values of μ. Relative to the baseline at g0 = 0.35 mS/cm2, g˙ tends to decrease when ϕ ∈ [4.2, 6.2] and tends to increase for ϕ ∈ [0.4, 4]. Recalling that our goal is to decrease the phase cohesion by driving the adaptive parameter, g, to lower values, we choose ϕtarg = 5 rad to implement the control strategy from Eq. (35). Three different parameter sets C1, C2, and C3 are chosen to implement the control strategy taking {K1,K2,K3,K4} = {4,35,0,0}, {4,35,1,0}, and {4,35,1,0.2}, respectively. Parameter set C1 uses only two proportional controllers to determine the update rules for ζ and μ. Parameter set C2 adds an integral controller to the update rule for μ, and parameter set C3 adds an additional term that is proportional to μ. The results to follow are qualitatively similar when modifying the non-zero values in these parameter sets.

FIG. 4.

Illustration of the control strategy applied to the reduced order, averaged representation of the Fokker–Planck equation from (35) for the periodic series of pulses shown in panel (b). Panel (a) shows level sets of H2(ϕ,g,μ)=0 for various values of μ. For ϕ ≈ 2 (respectively, ϕ ≈ 5), the value of g tends to increase (respectively, decrease), resulting in an increase (respectively, decrease) in phase cohesiveness. Taking ϕtarg = 5 rad, fixed points of Eq. (40) are obtained for different choices of gtarg and three different parameter sets. The largest real eigenvalue, λc, associated with the linearization about these fixed points is plotted against gtarg and shown in panel (c). Fixed points for which Real(λc)>0 are unstable, indicating that the controller is not viable. Panel (d) shows the performance of each controller taking gtarg(t) = 0.24 + 0.11exp(−0.01t). The parameter set C3, which uses nonzero values for all control parameters, is able to decrease the phase cohesiveness of the population, with associated probability distributions shown at t=0 and t=600 computed according to Eq. (21) at θ=π. The associated order parameter computed according to Eq. (22) is also indicated.



We investigate the efficacy of these three parameter sets by considering fixed points of

ϕ˙ = H1(ϕ,g,μ) − K1(ϕ − ϕtarg),
g˙ = H2(ϕ,g,μ),
μ˙ = K2(g − gtarg) + aI − K4μ,
a˙I = K3(g − gtarg),
(40)

i.e., with the same dynamics as Eq. (35) but neglecting the ζ variable. Here, ζ can be ignored because it does not appear in the right-hand sides of the remaining equations; this also accounts for the fact that ϕ might not fully reach its target value at a fixed point. Fixed points of Eq. (40) are obtained with a Newton iteration for various choices of gtarg, and the eigenvalue of the associated linearization with the largest real part, λc, is computed at each fixed point. For parameter sets C1 and C2, stability is lost at lower values of gtarg as Real(λc) crosses from negative to positive values. Panel (d) illustrates the control strategy using parameter sets C1, C2, and C3 for simulations of the reduced order equations from (35) taking gtarg(t) = 0.24 + 0.11exp(−0.01t). The parameter set C3 is able to follow this target accurately, while C1 and C2 display unstable behavior for small values of gtarg. The emergence of this instability is predicted by the eigenvalue plots from panel (c).
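The fixed-point computation behind this stability analysis can be sketched as follows: Newton iteration on the right-hand side of Eq. (40) with a finite-difference Jacobian, whose eigenvalue of largest real part gives λc. The surrogate H1 and H2 below are hypothetical placeholders; in the paper they come from Eq. (33).

```python
import numpy as np

K1, K2, K3, K4 = 4.0, 35.0, 1.0, 0.2
phi_targ, g_targ = 5.0, 0.30

H1 = lambda phi, g, mu: 0.3*np.sin(phi) - 0.2*(g - 0.35)           # hypothetical
H2 = lambda phi, g, mu: -0.1*mu*np.sin(phi - 4.0) - 0.5*(g - 0.35) # hypothetical

def F(x):
    """Right-hand side of Eq. (40) with state x = (phi, g, mu, aI)."""
    phi, g, mu, aI = x
    return np.array([H1(phi, g, mu) - K1*(phi - phi_targ),
                     H2(phi, g, mu),
                     K2*(g - g_targ) + aI - K4*mu,
                     K3*(g - g_targ)])

def jacobian(x, h=1e-6):
    """Central finite-difference Jacobian of F."""
    J = np.zeros((4, 4))
    for j in range(4):
        e = np.zeros(4); e[j] = h
        J[:, j] = (F(x + e) - F(x - e)) / (2*h)
    return J

x = np.array([5.0, 0.30, 0.3, 0.05])        # initial guess near the target
for _ in range(50):                          # Newton iteration
    x = x - np.linalg.solve(jacobian(x), F(x))
lam_c = np.max(np.linalg.eigvals(jacobian(x)).real)   # largest real part
```

A negative λc indicates a viable (locally stable) controller for the chosen gtarg, mirroring the criterion used for panel (c).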

Figure 4 considers results for the proposed control strategy applied directly to the reduced order equations from (35). Next, we consider the behavior of the proposed control strategy applied to the Fokker–Planck equation from (16). Once again, we use the pulsatile input specified by Eq. (39) with Ts = 10 ms and take ϕtarg = 5 rad. As in the previous example, H1 and H2 from Eq. (35) are obtained by directly evaluating the terms of (33) associated with the averaged, reduced order model dynamics from Eq. (25). Recall that these reduced order model dynamics are approximations to the full order behavior of the Fokker–Planck equation. In order to mitigate the accumulation of small errors over time, at every crossing of θ=0 [corresponding to the moment that s¯ from Eq. (17) achieves a local maximum], both ϕ and μ are inferred and updated based on measurements of s¯(t). For each of these updates, ϕ is inferred from Eq. (23), and μ is inferred by comparing the value of s¯ to the maximum values of s¯ attained for each periodic orbit xgγ. Results are shown in Fig. 5. Parameter sets C1, C2, and C3 are identical to those from Fig. 4 with the exception of taking K2 = 20 instead of 35. Parameters ϕ, ζ, and μ are shown over the course of each simulation in panels (a)–(c), and the adaptive parameter g is shown in panel (d). In this example, gtarg(t) = 0.35 for t<100 and gtarg(t) = 0.232 + 0.118exp(−0.01(t−100)) for t ≥ 100. Results are qualitatively similar to those from Fig. 4, with parameter sets C1 and C2 yielding controllers with unstable behavior.

FIG. 5.

Illustration of the control strategy from Eq. (35) applied to the Fokker–Planck equation from (16). Panels (a), (b), and (c) show the time course of ϕ, ζ, and μ using three different parameter sets for the constants associated with the controller. For each simulation, s¯ as defined by Eq. (17) is used to identify the moment that θ=0, which occurs once per cycle when s¯ achieves a local maximum. This information is used to infer and update both ϕ and μ once per cycle. On balance, these updates are small in magnitude indicating that the averaged, reduced order equations provide a good approximation for the full order model. Panel (d) shows the performance of the controller with gtarg(t) decaying slowly toward 0.232 over the course of the simulation. For the results that use parameter set C3, the solutions of the Fokker–Planck equation are shown at t=100 and t=600. The associated order parameter computed according to Eq. (22) is also indicated. Results are qualitatively similar to those from Fig. 4, with C1 and C2 yielding controllers with unstable behavior.



The implementation of the proposed control strategy from the examples in Figs. 4 and 5 is predicated on the existence of a collection of periodic orbits that change continuously with respect to the adaptive parameter. In these examples, when the coupling strength g is taken to be the adaptive parameter, periodic orbits do not exist when taking g < 0.225. This represents a challenge when the overall goal is desynchronization, since the Kuramoto order parameter associated with the periodic orbit that emerges for g = 0.225 is still relatively large. In order to address this limitation, in the example to follow, we will assume that a less phase cohesive set of periodic orbits, xpγ, exists for some adaptive parameter p ∈ ℝ. Once again, these orbits will be aligned so that θ=0 corresponds to the moment that s¯ from Eq. (17) reaches a local maximum. We will assume the existence of some model observable y(t) such that y(t) = p(t). Ultimately, we will employ the real-time estimation strategy described in Sec. IV B to identify the reduced order terms used in the control strategy from Eq. (35). Note that in this example, the adaptive parameter set is defined implicitly: because we use the model observable to infer the value of the adaptive parameter, an explicit definition is not needed.

To proceed, we will consider simulations of the Fokker–Planck equation with the observable,

sD = maxθ(s¯(xpγ(θ))) − minθ(s¯(xpγ(θ))).
(41)

Here, this observable is defined to be the difference between the maximum and minimum values of s¯ on the periodic orbit xpγ(θ). This definition allows sD to be inferred from measurements once per cycle. Note that s¯ still needs to be measured continuously in order to evaluate Eq. (41). Using this strategy, the Fokker–Planck equation from (16) is simulated, and the controller from Eq. (35) is implemented by estimating H1 and H2 on a cycle-by-cycle basis. In this example, sD takes the place of the adaptive parameter g from (35). For the control parameters, we use ϕtarg = 5 and take sD,targ(t) = 0.31 for t<100 and sD,targ(t) = 0.01 + 0.30exp(−0.005(t−100)) for t ≥ 100. These values of sD,targ are chosen to be close to the nominal value that results in the absence of control for t<100 and to gradually decrease toward lower values (which correspond to less synchronous solutions) over time. Controller parameters K1, K2, K3, and K4 are taken to be 4, 30, 1, and 1, respectively. Qualitatively similar results can be obtained using other values of K1 through K4. Results are shown in Fig. 6. Panels (a)–(c) show ϕ, ζ, and μ over the course of the simulation. These traces are qualitatively similar to those obtained using the C3 parameter set from Fig. 5. The average value of the synaptic variable and the order parameter from Eqs. (17) and (22), respectively, are shown in panels (d) and (e). Both s¯ and r oscillate rapidly relative to the timescale shown, giving the appearance of a thick line. Over the course of the simulation, the order parameter is driven below the minimum values observed in the results from Fig. 5. Panel (f) shows the inferred value of sD(t) during the simulation, which gradually decays toward smaller values, as mandated by the chosen sD,targ(t). We emphasize that we are able to drive the probability distribution to less synchronous states without performing any a priori computations to obtain a reduced order model.
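A minimal sketch of the amplitude proxy in Eq. (41) as it would be computed from data: the peak-to-peak value of the continuously measured signal on each cycle, with cycle boundaries given by detected θ=0 crossings. The synthetic decaying oscillation is illustrative only.

```python
import numpy as np

def cycle_peak_to_peak(s, crossings):
    """Per-cycle amplitude proxy (cf. Eq. (41)): max minus min of the measured
    signal s on each cycle, with cycle boundaries at the sample indices in
    `crossings` (successive theta = 0 events)."""
    return [float(s[i:j].max() - s[i:j].min())
            for i, j in zip(crossings[:-1], crossings[1:])]

# Synthetic decaying oscillation, sample spacing dt = 0.01 (period 10,
# i.e., 1000 samples per cycle)
t = np.linspace(0.0, 50.0, 5001)
s = np.exp(-0.02*t)*np.sin(2*np.pi*t/10.0)
sD = cycle_peak_to_peak(s, crossings=[0, 1000, 2000, 3000, 4000, 5000])
```

For this decaying test signal, the per-cycle values of sD shrink monotonically, as expected for a desynchronizing (amplitude-decaying) population observable.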

FIG. 6.

Illustration of the control strategy from Eq. (35) applied to the Fokker–Planck equation from (16). The results from this figure use an implicitly defined adaptive parameter that can be inferred from the observable defined in Eq. (41). By simply tracking s¯, i.e., the average value of the synaptic variable as defined in (17), the controller from Eq. (35) is implemented where the terms H1 and H2 are approximated in real-time. Panels (a), (b), and (c) show ϕ, ζ, and μ over the course of the simulation. Panels (d) and (e) show the corresponding values of s¯ and the order parameter. Starting from about r=0.8 at t=0, the order parameter drops to approximately 0.1 after 1000 ms, indicating substantial desynchronization. As the distribution approaches a less phase coherent state, the input magnitude drops substantially, taking values of around μ=0.06 at t=2000. The inferred value of sD is shown in panel (f) during the simulation. Probability distributions are shown at t=100 and t=1500 giving a visual representation of the resulting desynchronization. The controller implemented here is able to drive the distribution to less synchronized solutions than the controller implemented for the results shown in Fig. 5.



Next, we consider the results of the proposed control strategy applied directly to the population of N=1000 noisy, conductance-based neurons characterized by Eq. (1), using the same values of coupling and synaptic time delay used in the results from Fig. 1. In this example, the input Istim is chosen to be identical to Eq. (39) with the exception of taking Ts = 10.6 ms, which is closer to the nominal period of 10.9 ms for the coupled oscillations.

As in the example presented in Fig. 6, we assume that a set of periodic orbits xpγ exists for some adaptive parameter p ∈ ℝ. We will use the observable sD as defined in Eq. (41) as a proxy for the adaptive parameter and employ the real-time estimation strategy described in Sec. IV B to identify the reduced order terms H1 and H2 used in the control strategy from Eq. (35). We define θ=0 for each of these orbits to correspond to the moment that s¯ crosses 0.10 with a positive slope. Once per cycle, when this θ=0 crossing is detected, ϕ can be updated according to Eq. (23). Likewise, sD can be updated by computing the difference between the maximum and minimum values of s¯(t) on the previous cycle. Between cycles, the reduced order state variables can be updated according to the reduced order equations from (35) (with sD taking the place of g).

As a first step in the implementation, the controller from Eq. (35) is applied taking {K1,K2,K3,K4} = {4,0,0,0}. This choice of parameters allows for direct control of ϕ (through manipulation of ζ) but does not allow the control input μ to change. As such, by setting ϕtarg to a particular value, sD will approach an associated region in phase space for which s˙D = 0. Consequently, an approximation for the s˙D = 0 nullclines can be obtained by fixing a particular value of μ and slowly sweeping through different values of ϕ ∈ [0, 2π). These results are shown in Fig. 7. Taking μ=2, panels (a) and (b) show the value of the order parameter computed according to Eq. (4) and the inferred value of sD, respectively; qualitatively similar shapes are seen in both traces. Panel (c) shows the nullclines that emerge for various choices of μ. For μ large enough, setting ϕ ≈ 5 completely disrupts the synchronous oscillations, and no nullclines can be identified. Note that simply setting ϕ = 5 and taking μ large enough is not a viable long-term desynchronization strategy: when oscillations are lost, no information can be obtained for the reduced order model, allowing oscillations to re-emerge in the system due to the nominal synaptic coupling between neurons.
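The slow-sweep procedure used to trace out the s˙D = 0 level sets can be sketched as follows. The sD dynamics here are a toy surrogate with a ϕ- and μ-dependent drift, chosen purely for illustration; in practice the plant would be the neural population itself.

```python
import numpy as np

def sweep_nullcline(mu, n_steps=200000, dt=0.01):
    """Hold mu fixed, ramp phi slowly through [0, 2*pi), and record the
    quasi-steady value of s_D; the recorded (phi, s_D) pairs trace out the
    s_D-nullcline when the ramp is slow compared to the s_D relaxation."""
    ramp = 2*np.pi/(n_steps*dt)        # one full sweep over the whole run
    phi, sD, out = 0.0, 0.3, []
    for k in range(n_steps):
        phi = (phi + dt*ramp) % (2*np.pi)
        # hypothetical s_D dynamics (relaxation rate 1, phi-dependent drift)
        sD += dt*(0.35 - sD - 0.1*mu*np.cos(phi))
        if k % 1000 == 0:
            out.append((phi, sD))
    return out

pairs = sweep_nullcline(mu=2.0)
```

Because the ramp is slow relative to the relaxation of sD, the recorded trace closely follows the nullcline; running the sweep at several values of μ would build up a family of level sets as in panel (c).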

FIG. 7.

When simulating a large population of conductance-based neurons from Eq. (1), the value of ϕ is controlled with μ held constant. By slowly sweeping from ϕ=0 to 2π, level sets of s˙D=0 can be obtained. One such level set is shown in panel (b) using μ=2. The corresponding order parameter is shown in panel (a). Panel (c) shows level sets obtained for various values of μ. Setting ϕ ≈ 5 when μ is large enough temporarily disrupts synchronization so that no level sets can be obtained.


Notice from panel (a) of Fig. 7 that the most prominent desynchronization (judging by the order parameter) occurs near ϕ = 5. As such, in a full implementation of the desynchronizing controller from Eq. (35), we choose ϕtarg = 5 rad with constants {K1,K2,K3,K4} = {4, 6, 0.18, 0.30}. Results are shown in Fig. 8 for a simulation taking sD,targ(t) = 0.34 for t < 100 and sD,targ(t) = 0.08 + 0.26 exp(−0.01(t−100)) for t ≥ 100. These values of sD,targ are chosen to be close to the nominal value observed in the absence of control when t < 100 and to gradually decrease toward lower values (which correspond to less synchronous solutions) over time. For t < 100, μ is set to zero and the controller is turned off. The controller is turned on for t ≥ 100. The black line in panel (a) of Fig. 8 shows the value of ϕ(t) inferred from simulations, with the red line corresponding to the target value. Instantaneous updates to ϕ are made once per cycle when the threshold corresponding to θ = 0 is crossed, which explains the appearance of short vertical lines in the black trace. Panel (b) shows the actual value of sD(t) superimposed over sD,targ(t). Panels (c) and (d) show the value of the phase offset and the control amplitude, respectively. When the controller is initially turned on, a relatively large input magnitude, μ, is required to initiate the desynchronization; as sD is driven to smaller values, comparatively less control input is required. Panel (e) shows a representative sample of the voltage traces from the neurons in the oscillator population (colored lines) with the average value of transmembrane voltage shown in black. Disruption of the nominal synchronization is clearly observed. The average value of the synaptic current and the order parameter are shown in panels (f) and (g), respectively. In the absence of input, r ≈ 0.9, indicating a highly synchronized population.
Soon after the input is turned on and transient behavior dies out, the order parameter drops to near 0.2, where it remains indefinitely. Qualitatively similar results to those shown in Fig. 8 can be obtained by choosing different values of K1, K2, K3, and K4.
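The qualitative closed-loop behavior can be mimicked with the same kind of toy model: below, ϕ is steered toward ϕtarg at rate K1 and the input magnitude μ integrates the amplitude error with gain K3, mirroring the roles (though not the exact form) of the gains in Eq. (35). The amplitude equation, its nominal level s_nom = 0.34, and the efficacy function g(ϕ) are illustrative assumptions, not the reduced order terms from the text.

```python
import numpy as np

# Hypothetical stand-in dynamics: without input, s_D relaxes to s_nom; the
# stimulus desynchronizes most strongly when phi sits at phi = 5 rad.
def closed_loop(sD_targ, phi_targ=5.0, K1=4.0, K3=0.18,
                s_nom=0.34, b=0.5, T=800.0, dt=0.01):
    phi, sD, mu = 0.0, s_nom, 0.0
    for _ in range(int(T / dt)):
        phi += dt * (-K1 * (phi - phi_targ))           # steer the phase offset
        mu = max(0.0, mu + dt * K3 * (sD - sD_targ))   # integrate amplitude error
        g = 1.0 + np.cos(phi - 5.0)                    # desynchronizing efficacy
        sD += dt * (b * (s_nom - sD) - mu * g * sD)
    return phi, sD, mu

phi, sD, mu = closed_loop(sD_targ=0.08)
```

As in Fig. 8, the integral action on μ lets the loop settle with sD at its target while a nonzero drive holds off resynchronization.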

FIG. 8.

The controller given in Eq. (35) with input given by (39) is implemented to desynchronize the population of N=1000 synaptically coupled neurons described by Eq. (1). Terms H1 and H2 of the controller are estimated using the strategy described in Sec. IV B. Black lines in panels (a) and (b) show values of ϕ(t) and sD(t) over the course of the simulation, respectively. Red lines show the corresponding values of ϕtarg(t) and sD,targ(t), respectively. The phase offset and the magnitude of the input are shown in panels (c) and (d), respectively. Disruption of synchrony can be observed in panel (e), where colored lines show representative voltage traces of individual neurons and the black line shows the average transmembrane voltage of the population. Panel (f) shows the average value of the synaptic variable, taken to be the observable for this system when implementing the proposed strategy. Panel (g) shows the order parameter defined according to Eq. (4), which falls from around 0.9 to 0.2 after the controller is turned on.


Robustness of the control algorithm is also considered by varying model parameters. For the same model used to obtain the results from Fig. 8, representative simulations for different choices of the parameter K1 are shown in panels (a)–(f) of Fig. 9. When changing one parameter at a time and keeping the rest identical to the parameter set used in Fig. 8, the controller is able to track the specified value of sD,targ for K1 ∈ [3.3, 9.0], K2 ∈ [3.7, 19.5], K3 ∈ [0.01, 0.29], K4 ∈ [0.19, 0.39], and ϕtarg ∈ [4.6, 5.4]. These results provide preliminary evidence that the proposed controller is robust to changes in the parameters; however, a more rigorous study of controller robustness would be warranted in future work. Alternative implementations of the proposed control framework are also possible. Consider, for instance, taking {K1,K2,K3,K4} = {4, 6, 0, 0} and letting the input magnitude saturate with limits μ ∈ [0.1, 5]. Implementing the controller with these parameters does drive the order parameter to lower values, although occasional short-lived resynchronization events occur (results are not shown for this alternative parameter set).

FIG. 9.

Simulations shown in Fig. 8 are repeated for different choices of K1. Representative results are shown here in panels (a)–(f) for the indicated values. Black and red lines show the values of sD(t) and sD,targ(t), respectively, for each simulation. A viable controller results for choices of K1 ∈ [3.3, 9.0]. Robustness in response to changes in other controller parameters is reported in the text.


Finally, this model-independent control strategy is implemented using a modified stimulus,

$$
I_{\rm stim}\!\left(t+\frac{\zeta T_s}{2\pi}\right)=
\begin{cases}
\mu, & \text{if } \operatorname{mod}\!\left(t+\frac{\zeta T_s}{2\pi},\,T_s\right)<4 \ \text{and}\ \operatorname{mod}\!\left(t+\frac{\zeta T_s}{2\pi},\,1\right)<0.5,\\[4pt]
-\mu, & \text{if } 4\le \operatorname{mod}\!\left(t+\frac{\zeta T_s}{2\pi},\,T_s\right)<8 \ \text{and}\ \operatorname{mod}\!\left(t+\frac{\zeta T_s}{2\pi},\,1\right)<0.5,\\[4pt]
0, & \text{otherwise}.
\end{cases}
\tag{42}
$$

Instead of a series of only positive pulses, Eq. (42) mandates an alternating set of positive and negative pulses. For a constant value of μ, this stimulation protocol would yield a charge-balanced stimulus, which is a practical requirement in clinical settings to mitigate Faradaic reactions that damage both the surrounding brain tissue and the DBS probe.21,33 For a nonstatic choice of μ, the input will be nearly charge balanced. Note that Eq. (42) does not describe a standard biphasic pulse,6 but nonetheless will have a substantially smaller accumulation of charge than the monophasic pulses from Eq. (39). Results are shown in Fig. 10 using the pulses mandated by Eq. (42) and taking ϕtarg=0.5 rad with all other parameters taken identical to those from Fig. 8. Once again, the controller is turned on at t=100 ms. Panels (a)–(c) show ϕ, sD, and Istim(t) over the course of a representative simulation. Panels (d)–(f) show voltage traces, s¯, and the order parameter, respectively, over the first 400 ms of simulation. Despite the use of a different input, the results from Fig. 10 are qualitatively similar to those from Fig. 8.
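The waveform of Eq. (42) can be sketched directly: for constant μ, the four positive 0.5-ms pulses in the first 4 ms of each period are balanced by four negative pulses in the next 4 ms, so the injected charge over a period sums to (nearly) zero. The period Ts = 8.4 ms below matches the neuron's natural period quoted in the Appendix; the sampling step is an arbitrary choice.

```python
import numpy as np

def i_stim(t, mu, zeta, Ts=8.4):
    """Nearly charge-balanced pulse train of Eq. (42): 0.5-ms pulses at 1 ms
    spacing, positive during the first 4 ms of each stimulus period and
    negative during the next 4 ms. `t` is an array of sample times."""
    tau = np.mod(t + zeta * Ts / (2 * np.pi), Ts)
    pulse_on = np.mod(t + zeta * Ts / (2 * np.pi), 1.0) < 0.5
    out = np.zeros_like(np.asarray(t, dtype=float))
    out[(tau < 4.0) & pulse_on] = mu
    out[(tau >= 4.0) & (tau < 8.0) & pulse_on] = -mu
    return out
```

Summing the sampled waveform over one period confirms that the positive and negative lobes cancel to within discretization error.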

FIG. 10.

The controller from Eq. (35) using the nearly charge-balanced stimulation mandated by Eq. (42) is implemented to desynchronize the population of N=1000 synaptically coupled neurons described by Eq. (1). The controller is turned on at t=100 ms. Black lines in panels (a) and (b) show values of ϕ(t) and sD(t) over the course of the simulation, respectively. Red lines show the corresponding values of ϕtarg(t) and sD,targ(t), respectively. Panel (c) shows the applied stimulation immediately after the controller is turned on. Disruption of synchrony can be observed in panel (d), where colored lines show representative voltage traces of individual neurons and the black line shows the average transmembrane voltage of the population. Panel (e) shows the average value of the synaptic variable, taken to be the observable for this system when implementing the proposed strategy. Panel (f) shows the order parameter, defined according to Eq. (4), during the simulation.


In this work, we develop a closed-loop control framework that can be used to manipulate the behavior of large populations of coupled neurons using near-periodic inputs, i.e., inputs with the general form given in Eq. (2) that are nominally periodic but with the incorporation of a slowly varying phase offset and a slowly changing amplitude. This work can be viewed as a potential theoretical underpinning for the phase-specific aDBS strategies that have gained interest in recent years;7,18,49 here, the slowly varying phase offset can be used to coordinate the timing of the DBS stimulus relative to some measured observable. Our primary control objective here is to engender desynchronization; however, other control objectives could conceivably be pursued using this strategy. Additionally, this general framework could potentially be used in other applications, for instance, in conjunction with non-invasive stimulation therapies45 by using accelerometer data from tremor patients as an observable to define the phase and amplitude of the oscillation.

We employ a series of model transformations and reductions in the analysis performed in this work. Starting with a large population of conductance-based neurons from Eq. (1), we represent the behavior in the large N limit using the Fokker–Planck Eq. (16). The adaptive phase-amplitude reduction paradigm67,68 is used to analyze the temporal evolution of the Fokker–Planck equation in terms of a low-order reduction, and formal averaging techniques16,51 are used to ultimately arrive at an analytically tractable reduced order model of the form (32). A relatively simple controller of the form (35) is subsequently proposed to manipulate the state variables of the reduced order model. As illustrated in the results, by simply manipulating the phase offset and the magnitude of the periodic input, the variables of the reduced order model can be controlled successfully.

In comparison to other closed-loop DBS strategies, the proposed method is most similar to phase-triggered methods.3,9,19,62,63 Such methods seek to apply a control input at an appropriate phase of oscillation in order to reduce the degree of synchronization in the overall population. In contrast to these other phase-triggered methods, here we use a rotating reference frame [i.e., defined in Eq. (23)] that considers the offset, ϕ, between the population phase θ and the phase of the periodic input. By appropriately manipulating ϕ, the population oscillation can be driven to less synchronous states. Also, we note the differences between the proposed method and delayed-feedback approaches detailed in Refs. 40, 47, and 52, which can generally be implemented by applying a feedback signal Istim(t) = K(y(t−τ) − y(t)), where τ and K are a time delay and a proportional gain, respectively, and y(t) is some observable. With an appropriate observable that provides a good representation of the population level behavior, it is often possible to find some combination of τ and K that can yield desynchronization. Extensions that use nonlinear delayed-feedback39 or charge-balanced pulses41 are also possible. In contrast to the method presented in this work, delayed-feedback approaches do not require an a priori specification of the applied DBS waveform.
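For comparison, the delayed-feedback law mentioned above, Istim(t) = K(y(t−τ) − y(t)), can be realized with a ring buffer holding the last τ/Δt samples of the observable. The sketch below is a generic illustration of that feedback law, not an implementation from Refs. 40, 47, or 52.

```python
from collections import deque
import numpy as np

class DelayedFeedback:
    """I_stim(t) = K * (y(t - tau) - y(t)), with the delay tau realized as
    `n_delay` samples of the observable held in a ring buffer."""
    def __init__(self, K, n_delay):
        self.K = K
        self.buf = deque([0.0] * n_delay, maxlen=n_delay)  # oldest sample first

    def step(self, y):
        y_delayed = self.buf[0]   # y(t - tau), once the buffer has filled
        self.buf.append(y)
        return self.K * (y_delayed - y)
```

For a sinusoidal observable with τ equal to half the oscillation period, y(t−τ) = −y(t), so the feedback reduces to −2K y(t) once the buffer is full.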

While the terms of the averaged, reduced order model derived in this work can be computed numerically according to the integrals from Eq. (26), they result from a long series of transformations and reductions, and the errors associated with these transformations compound at each step. Additionally, the underlying model equations may in general be unavailable or too complicated for numerical computation of the terms of the reduction. For this reason, the real-time strategy for estimating the terms of the reduced order model described in Sec. IV B would be essential for practical implementation of the proposed control strategy. Indeed, using this strategy, coupled oscillations in the full order model from Eq. (1) can be disrupted without knowledge of the underlying model equations. In the examples considered in this work, we use the average value of the time-delayed synaptic variable as defined in Eq. (3) as a system observable. In experimental applications, the proposed control strategy could potentially be implemented using accelerometer measurements from hand tremors or LFP measurements.

The proposed control strategy has several limitations that are not addressed here. For the purposes of this work, parameter sets for the proposed controller are identified by first determining an adequate choice of ϕtarg for which the application of the near-periodic input has a desynchronizing influence. The remaining parameters were obtained by trial and error. In future work, it would be of interest to provide a more careful analysis of this control strategy to investigate questions related to controllability and optimality. An additional limitation of the proposed control strategy is that it requires measurable oscillations to be present at all times. As a consequence of this assumption, synchronization cannot be completely abolished. For instance, when sD,targ is set to values below 0.08 in the results shown in Fig. 8, oscillations can occasionally become difficult to identify, subsequently degrading the real-time estimates of ϕ, sD, H1, and H2. When this happens, the neural population can transiently resynchronize until accurate estimates for these terms are reobtained and the controller regains its efficacy. Future work will investigate strategies to mitigate this issue, for instance, by stabilizing the fully desynchronized state. In the examples presented in this work, it is assumed that all but one of the isostable (amplitude) coordinates in Eq. (20) decay rapidly so that they can be ignored. As illustrated in panel (b) of Fig. 2, this assumption is explicitly validated for the discretized version of the Fokker–Planck equation from Eq. (19) for the model used in this study. Numerical illustrations of the proposed control strategy are performed using a single population of identical neurons (1) that does not contain a realistic description of the overall brain circuit relevant to clinical DBS.
Models with more complicated coupling structures and models with multiple interacting subpopulations will likely require the inclusion of additional amplitude coordinates, thereby complicating the analysis. Investigation of the proposed control strategy in a more realistic model would be warranted. Additionally, numerical illustrations in this work use the average value of the synaptic variable as the observable that defines the phase and amplitude of oscillations. In practical applications, such information is not readily available, necessitating the use of other biomarkers such as beta band LFP power. Additional issues created by both measurement noise and delays between the application of stimulation and its effect on system observables would likely need to be considered in practical applications.

This material is based upon work supported by the National Science Foundation (NSF) (Grant No. CMMI-2140527).

The authors have no conflict of interest to disclose.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

To model the ionic currents Iion for the population of neurons described by Eq. (1), we use a three-dimensional model that replicates the behavior of tonically firing thalamic neurons from Ref. 50. The model equations in the absence of coupling, noise, and external stimulus are

$$
C\dot{V} = -I_{\rm ion}(V,h,r),\qquad
\dot{h} = \frac{h_\infty - h}{\tau_h},\qquad
\dot{r} = \frac{r_\infty - r}{\tau_r},
\tag{A1}
$$

where Iion(V,h,r) comprises five ionic currents,

$$
I_{\rm ion} = I_L(V) + I_{\rm Na}(V,h) + I_K(V,h) + I_T(V,r) - I_b,
\tag{A2}
$$

each given by

$$
\begin{aligned}
I_L(V) &= g_L(V - E_L),\\
I_{\rm Na}(V,h) &= g_{\rm Na}\, m_\infty^3\, h\,(V - E_{\rm Na}),\\
I_K(V,h) &= g_K\,\big(0.75(1-h)\big)^4\,(V - E_K),\\
I_T(V,r) &= g_T\, p_\infty^2\, r\,(V - E_T),\\
I_b &= 5\ \mu\mathrm{A/cm^2}.
\end{aligned}
\tag{A3}
$$

Auxiliary functions are given below:

$$
\begin{aligned}
h_\infty &= 1/\big(1+\exp((V+41)/4)\big), & r_\infty &= 1/\big(1+\exp((V+84)/4)\big),\\
\alpha_h &= 0.128\exp(-(V+46)/18), & \beta_h &= 4/\big(1+\exp(-(V+23)/5)\big),\\
\tau_h &= 1/(\alpha_h+\beta_h), & \tau_r &= 28+\exp(-(V+25)/10.5),\\
m_\infty &= 1/\big(1+\exp(-(V+37)/7)\big), & p_\infty &= 1/\big(1+\exp(-(V+60)/6.2)\big).
\end{aligned}
\tag{A4}
$$

Additional constants are C = 1 μF/cm², gL = 0.05 mS/cm², EL = −70 mV, gNa = 3 mS/cm², ENa = 50 mV, gK = 5 mS/cm², EK = −90 mV, gT = 5 mS/cm², and ET = 0 mV. With these specific parameters, the neuron settles to a tonically firing limit cycle in steady state with a period of 8.40 ms.
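As a check on the stated period, the model of Eqs. (A1)–(A4) can be simulated directly. The sketch below integrates the three-state system with forward Euler (the step size and the −30 mV spike-detection threshold are arbitrary numerical choices, not from the text) and measures the interspike interval, which should come out near the quoted 8.40 ms.

```python
import numpy as np

# Tonically firing thalamic neuron of Eqs. (A1)-(A4), integrated with
# forward Euler to check the stated ~8.40 ms firing period.
C, gL, EL = 1.0, 0.05, -70.0
gNa, ENa = 3.0, 50.0
gK, EK = 5.0, -90.0
gT, ET, Ib = 5.0, 0.0, 5.0

hinf = lambda V: 1.0 / (1.0 + np.exp((V + 41.0) / 4.0))
rinf = lambda V: 1.0 / (1.0 + np.exp((V + 84.0) / 4.0))
minf = lambda V: 1.0 / (1.0 + np.exp(-(V + 37.0) / 7.0))
pinf = lambda V: 1.0 / (1.0 + np.exp(-(V + 60.0) / 6.2))
tauh = lambda V: 1.0 / (0.128 * np.exp(-(V + 46.0) / 18.0)
                        + 4.0 / (1.0 + np.exp(-(V + 23.0) / 5.0)))
taur = lambda V: 28.0 + np.exp(-(V + 25.0) / 10.5)

def rhs(V, h, r):
    IL = gL * (V - EL)
    INa = gNa * minf(V) ** 3 * h * (V - ENa)
    IK = gK * (0.75 * (1.0 - h)) ** 4 * (V - EK)
    IT = gT * pinf(V) ** 2 * r * (V - ET)
    return ((-(IL + INa + IK + IT) + Ib) / C,
            (hinf(V) - h) / tauh(V),
            (rinf(V) - r) / taur(V))

dt, T = 0.005, 300.0                       # ms
V, h, r = -65.0, hinf(-65.0), rinf(-65.0)
spike_times = []
for k in range(int(T / dt)):
    dV, dh, dr = rhs(V, h, r)
    Vnew = V + dt * dV
    if V < -30.0 <= Vnew:                  # upward threshold crossing = spike
        spike_times.append(k * dt)
    V, h, r = Vnew, h + dt * dh, r + dt * dr
isis = np.diff([s for s in spike_times if s > 50.0])  # drop initial transient
```

The mean interspike interval measured this way should sit close to the 8.40 ms period quoted above, up to integration error.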

1. I. Adamchic, C. Hauptmann, U. B. Barnikol, N. Pawelczyk, O. Popovych, T. T. Barnikol, A. Silchenko, J. Volkmann, G. Deuschl, W. G. Meissner, M. Maarouf, V. Sturm, H. J. Freund, and P. A. Tass, "Coordinated reset neuromodulation for Parkinson's disease: Proof-of-concept study," Mov. Disord. 29(13), 1679–1684 (2014).
2. A. Aminzare, P. Holmes, and V. Srivastava, "On phase reduction and time period of noisy oscillators," in 2019 IEEE 58th Conference on Decision and Control (IEEE, 2019), pp. 4717–4722.
3. R. Azodi-Avval and A. Gharabaghi, "Phase-dependent modulation as a novel approach for therapeutic brain stimulation," Front. Comput. Neurosci. 9, 26 (2015).
4. A. L. Benabid, S. Chabardes, J. Mitrofanis, and P. Pollak, "Deep brain stimulation of the subthalamic nucleus for the treatment of Parkinson's disease," Lancet Neurol. 8(1), 67–81 (2009).
5. P. Brown, "Abnormal oscillatory synchronisation in the motor system leads to impaired movement," Curr. Opin. Neurobiol. 17(6), 656–664 (2007).
6. C. R. Butson and C. C. McIntyre, "Differences among implanted pulse generator waveforms cause variations in the neural response to deep brain stimulation," Clin. Neurophysiol. 118(8), 1889–1894 (2007).
7. H. Cagnan, D. Pedrosa, S. Little, A. Pogosyan, B. Cheeran, T. Aziz, A. Green, J. Fitzgerald, T. Foltynie, P. Limousin, L. Zrinzo, M. Hariz, K. J. Friston, T. Denison, and P. Brown, "Stimulating at the right time: Phase-specific deep brain stimulation," Brain 140(1), 132–145 (2017).
8. O. Castejón, A. Guillamon, and G. Huguet, "Phase-amplitude response functions for transient-state stimuli," J. Math. Neurosci. 3, 13 (2013).
9. B. Duchet, G. Weerasinghe, C. Bick, and R. Bogacz, "Optimizing deep brain stimulation based on isostable amplitude in essential tremor patient models," J. Neural Eng. 18(4), 046023 (2021).
10. B. Duchet, G. Weerasinghe, H. Cagnan, P. Brown, C. Bick, and R. Bogacz, "Phase-dependence of response curves to deep brain stimulation and their relationship: From essential tremor patient data to a Wilson–Cowan model," J. Math. Neurosci. 10(1), 1–39 (2020).
11. G. B. Ermentrout and D. H. Terman, Mathematical Foundations of Neuroscience (Springer, New York, 2010), Vol. 35.
12. S. Faramarzi and T. I. Netoff, "Closed-loop neuromodulation for clustering neuronal populations," J. Neurophysiol. 125(1), 248–255 (2021).
13. C. W. Gardiner, Handbook of Stochastic Methods: For Physics, Chemistry and the Natural Sciences (Springer, Berlin, 2004).
14. E. Gengel, E. Teichmann, M. Rosenblum, and A. Pikovsky, "High-order phase reduction for coupled oscillators," J. Phys.: Complexity 2(1), 015005 (2020).
15. J. Guckenheimer, "Isochrons and phaseless sets," J. Math. Biol. 1(3), 259–273 (1975).
16. J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (Springer-Verlag, New York, 1983), Vol. 42.
17. C. Hammond, H. Bergman, and P. Brown, "Pathological synchronization in Parkinson's disease: Networks, models and treatments," Trends Neurosci. 30, 357–364 (2007).
18. A. B. Holt, E. Kormann, A. Gulberti, M. Pötter-Nerger, C. G. McNamara, H. Cagnan, M. K. Baaske, S. Little, J. A. Köppen, C. Buhmann, M. Westphal, C. Gerloff, A. K. Engel, P. Brown, W. Hamel, C. K. E. Moll, and A. Sharott, "Phase-dependent suppression of beta oscillations in Parkinson's disease patients," J. Neurosci. 39(6), 1119–1134 (2019).
19. A. B. Holt, D. Wilson, M. Shinn, J. Moehlis, and T. I. Netoff, "Phasic burst stimulation: A closed-loop approach to tuning deep brain stimulation parameters for Parkinson's disease," PLoS Comput. Biol. 12(7), e1005011 (2016).
20. A. A. Kühn, A. Tsui, T. Aziz, N. Ray, C. Brücke, A. Kupsch, G. H. Schneider, and P. Brown, "Pathological synchronisation in the subthalamic nucleus of patients with Parkinson's disease relates to both bradykinesia and rigidity," Exp. Neurol. 215(2), 380–387 (2009).
21. A. M. Kuncel and W. M. Grill, "Selection of stimulus parameters for deep brain stimulation," Clin. Neurophysiol. 115(11), 2431–2441 (2004).
22. Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence (Springer-Verlag, Berlin, 1984).
23. B. Letson and J. E. Rubin, "LOR for analysis of periodic dynamics: A one-stop shop approach," SIAM J. Appl. Dynam. Syst. 19(1), 58–84 (2020).
24. R. Levy, W. D. Hutchison, A. M. Lozano, and J. O. Dostrovsky, "Synchronized neuronal discharge in the basal ganglia of parkinsonian patients is limited to oscillatory activity," J. Neurosci. 22(7), 2855–2861 (2002).
25. M. C. H. Li and M. J. Cook, "Deep brain stimulation for drug-resistant epilepsy," Epilepsia 59(2), 273–290 (2018).
26. C. Ly and G. B. Ermentrout, "Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli," J. Comput. Neurosci. 26(3), 425–443 (2009).
27. M. Malekmohammadi, J. Herron, A. Velisar, Z. Blumenfeld, M. H. Trager, H. J. Chizeck, and H. Brontë-Stewart, "Kinematic adaptive deep brain stimulation for resting tremor in Parkinson's disease," Mov. Disord. 31(3), 426–428 (2016).
28. T. Manos, M. Zeitler, and P. A. Tass, "How stimulation frequency and intensity impact on the long-lasting effects of coordinated reset stimulation," PLoS Comput. Biol. 14(5), e1006113 (2018).
29. T. D. Matchen and J. Moehlis, "Phase model-based neuron stabilization into arbitrary clusters," J. Comput. Neurosci. 44(3), 363–378 (2018).
30. A. Mauroy, I. Mezić, and J. Moehlis, "Isostables, isochrons, and Koopman spectrum for the action–angle representation of stable fixed point dynamics," Physica D 261, 19–30 (2013).
31. H. S. Mayberg, A. M. Lozano, V. Voon, H. E. McNeely, D. Seminowicz, C. Hamani, J. M. Schwalb, and S. H. Kennedy, "Deep brain stimulation for treatment-resistant depression," Neuron 45(5), 651–660 (2005).
32. A. C. Meidahl, G. Tinkhauser, D. M. Herz, H. Cagnan, J. Debarros, and P. Brown, "Adaptive deep brain stimulation for movement disorders: The long road to clinical therapy," Mov. Disord. 32(6), 810–819 (2017).
33. D. Merill, M. Bikson, and J. Jefferys, "Electrical stimulation of excitable tissue: Design of efficacious and safe protocols," J. Neurosci. Methods 141(2), 171–198 (2005).
34. B. Monga and J. Moehlis, "Phase distribution control of a population of oscillators," Physica D 398, 115–129 (2019).
35. B. Monga, D. Wilson, T. Matchen, and J. Moehlis, "Phase reduction and phase-based optimal control for biological systems: A tutorial," Biol. Cybernet. 113(1–2), 11–46 (2019).
36. A. Nabi, M. Mirzadeh, F. Gibou, and J. Moehlis, "Minimum energy desynchronizing control for coupled neurons," J. Comput. Neurosci. 34, 259–271 (2013).
37. J. S. Perlmutter and J. W. Mink, "Deep brain stimulation," Ann. Rev. Neurosci. 29, 229–257 (2006).
38. B. Pietras and A. Daffertshofer, "Network dynamics of coupled oscillators and phase reduction techniques," Phys. Rep. 819, 1–105 (2019).
39. O. Popovych, C. Hauptmann, and P. Tass, "Effective desynchronization by nonlinear delayed feedback," Phys. Rev. Lett. 94(16), 164102 (2005).
40. O. V. Popovych, C. Hauptmann, and P. A. Tass, "Control of neuronal synchrony by nonlinear delayed feedback," Biol. Cybernet. 95(1), 69 (2006).
41. O. V. Popovych, B. Lysyansky, M. Rosenblum, A. Pikovsky, and P. A. Tass, "Pulsatile desynchronizing delayed feedback for closed-loop deep brain stimulation," PLoS One 12(3), e0173363 (2017).
42. O. V. Popovych, B. Lysyansky, and P. A. Tass, "Closed-loop deep brain stimulation by pulsatile delayed feedback with increased gap between pulse phases," Sci. Rep. 7(1), 1–14 (2017).
43. O. V. Popovych and P. A. Tass, "Adaptive delivery of continuous and delayed feedback deep brain stimulation—A computational study," Sci. Rep. 9(1), 1–17 (2019).
44. A. Priori, G. Foffani, L. Rossi, and S. Marceglia, "Adaptive deep brain stimulation (aDBS) controlled by local field potential oscillations," Exp. Neurol. 245, 77–86 (2013).
45. C. Reis, B. S. Arruda, A. Pogosyan, P. Brown, and H. Cagnan, "Essential tremor amplitude modulation by median nerve stimulation," Sci. Rep. 11(1), 1–10 (2021).
46. M. Rosa, M. Arlotti, G. Ardolino, F. Cogiamanian, S. Marceglia, A. Di Fonzo, F. Cortese, P. M. Rampini, and A. Priori, "Adaptive deep brain stimulation in a freely moving Parkinsonian patient," Mov. Disord. 30(7), 1003–1005 (2015).
47. M. Rosenblum and A. Pikovsky, "Controlling synchronization in an ensemble of globally coupled oscillators," Phys. Rev. Lett. 92(11), 114102 (2004).
48. M. Rosenblum and A. Pikovsky, "Numerical phase reduction beyond the first order approximation," Chaos 29(1), 011105 (2019).
49. B. Rosin, M. Slovik, R. Mitelman, M. Rivlin-Etzion, S. N. Haber, Z. Israel, E. Vaadia, and H. Bergman, "Closed-loop deep brain stimulation is superior in ameliorating parkinsonism," Neuron 72(2), 370–384 (2011).
50. J. Rubin and D. Terman, "High frequency stimulation of the subthalamic nucleus eliminates pathological thalamic rhythmicity in a computational model," J. Comput. Neurosci. 16, 211–235 (2004).
51. J. A. Sanders, F. Verhulst, and J. Murdock, Averaging Methods in Nonlinear Dynamical Systems, 2nd ed. (Springer-Verlag, New York, 2007).
52. E. Schöll, G. Hiller, P. Hövel, and M. A. Dahlem, "Time-delayed feedback in neurosystems," Philos. Trans. R. Soc. A 367(1891), 1079–1096 (2009).
53. L. E. Schrock, J. W. Mink, D. W. Woods, M. Porta, D. Servello, V. Visser-Vandewalle, P. A. Silburn, T. Foltynie, H. C. Walker, J. Shahed-Jimenez, R. Savica, B. T. Klassen, A. G. Machado, K. D. Foote, J. G. Zhang, W. Hu, L. Ackermans, Y. Temel, Z. Mari, B. K. Changzi, A. Lozano, M. Auyeung, T. Kaido, Y. Agid, M. L. Welter, S. M. Khandhar, A. Y. Mogilner, M. H. Pourfar, B. L. Walter, J. L. Juncos, R. E. Gross, J. Kuhn, J. F. Leckma, J. A. Neimat, and M. S. Okun, "Tourette syndrome deep brain stimulation: A review and updated recommendations," Mov. Disord. 30(4), 448–471 (2015).
54. J. T. C. Schwabedal and A. Pikovsky, "Phase description of stochastic oscillations," Phys. Rev. Lett. 110(20), 204102 (2013).
55. S. Shirasaka, W. Kurebayashi, and H. Nakao, "Phase-amplitude reduction of transient dynamics far from attractors for limit-cycling systems," Chaos 27(2), 023119 (2017).
56. J. V. Smit, M. L. F. Janssen, H. Schulze, A. Jahanshahi, J. J. Van Overbeeke, Y. Temel, and R. J. Stokroos, "Deep brain stimulation in tinnitus: Current and future perspectives," Brain Res. 1608, 51–65 (2015).
57. P. A. Tass, "A model of desynchronizing deep brain stimulation with a demand-controlled coordinated reset of neural subpopulations," Biol. Cybernet. 89(2), 81–88 (2003).
58. P. J. Thomas and B. Lindner, "Asymptotic phase for stochastic oscillators," Phys. Rev. Lett. 113(25), 254101 (2014).
59. J. Volkmann, J. Herzog, F. Kopper, and G. Deuschl, "Introduction to the programming of deep brain stimulators," Mov. Disord. 17(S3), S181–S187 (2002).
60. J. Wang, S. Nebeck, A. Muralidharan, M. D. Johnson, J. L. Vitek, and K. B. Baker, "Coordinated reset deep brain stimulation of subthalamic nucleus produces long-lasting, dose-dependent motor improvements in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine non-human primate model of Parkinsonism," Brain Stimul. 9(4), 609–617 (2016).
61. K. C. A. Wedgwood, K. K. Lin, R. Thul, and S. Coombes, "Phase-amplitude descriptions of neural oscillator models," J. Math. Neurosci. 3(1), 2 (2013).
62. G. Weerasinghe, B. Duchet, C. Bick, and R. Bogacz, "Optimal closed-loop deep brain stimulation using multiple independently controlled contacts," PLoS Comput. Biol. 17(8), e1009281 (2021).
63. G. Weerasinghe, B. Duchet, H. Cagnan, P. Brown, C. Bick, and R. Bogacz, "Predicting the effects of deep brain stimulation using a reduced coupled oscillator model," PLoS Comput. Biol. 15(8), e1006575 (2019).
64. C. Wilson, B. Beverlin II, and T. Netoff, "Chaotic desynchronization as the therapeutic mechanism of deep brain stimulation," Front. Syst. Neurosci. 5, 50 (2011).
65. D. Wilson, "Phase-amplitude reduction far beyond the weakly perturbed paradigm," Phys. Rev. E 101(2), 022220 (2020).
66. D. Wilson, "Degenerate isostable reduction for fixed-point and limit-cycle attractors with defective linearizations," Phys. Rev. E 103(2), 022211 (2021).
67. D. Wilson, "Optimal control of oscillation timing and entrainment using large magnitude inputs: An adaptive phase-amplitude-coordinate-based approach," SIAM J. Appl. Dynam. Syst. 20(4), 1814–1843 (2021).
68. D. Wilson, "An adaptive phase-amplitude reduction framework without O(ϵ) constraints on inputs," SIAM J. Appl. Dynam. Syst. 21(1), 204–230 (2022).
69. D. Wilson and B. Ermentrout, "Greater accuracy and broadened applicability of phase reduction using isostable coordinates," J. Math. Biol. 76(1–2), 37–66 (2018).
70. D. Wilson and B. Ermentrout, "Augmented phase reduction of (not so) weakly perturbed coupled oscillators," SIAM Rev. 61(2), 277–315 (2019).
71. D. Wilson and J. Moehlis, "Optimal chaotic desynchronization for neural populations," SIAM J. Appl. Dynam. Syst. 13(1), 276–305 (2014).
72. D. Wilson and J. Moehlis, "Clustered desynchronization from high-frequency deep brain stimulation," PLoS Comput. Biol. 11(12), e1004673 (2015).
73. D. Wilson and J. Moehlis, "Isostable reduction of periodic orbits," Phys. Rev. E 94(5), 052213 (2016).
74. A. Winfree, The Geometry of Biological Time, 2nd ed. (Springer-Verlag, New York, 2001).