Rhythmic activities that alternate between coherent and incoherent phases are ubiquitous in chemical, ecological, climate, and neural systems. Despite their importance, the general mechanisms behind their emergence remain poorly understood. To fill this gap, we present a framework for describing the emergence of recurrent synchronization in complex networks with adaptive interactions. This phenomenon manifests at the macroscopic level as temporal episodes of coherent and incoherent dynamics that alternate recurrently, while the dynamics of the individual nodes do not change qualitatively. We identify asymmetric adaptation rules and temporal separation between the adaptation and the dynamics of individual nodes as key features for the emergence of recurrent synchronization. Our results suggest that asymmetric adaptation might be a fundamental ingredient for recurrent synchronization phenomena as seen in pattern generators, e.g., in neuronal systems.

A multitude of real-world dynamical networks possess adaptive interactions.1–6 Structural changes in such networks depend on the state of individual nodes. The state of the nodes in turn depends on the connectivity. This results in a feedback loop between the dynamics (function) and the network structure. In neuronal networks, for example, spike-timing-dependent plasticity, a special type of network adaptation,7–13 contributes to learning14 or anti-kindling15 effects. Another adaptation mechanism, structural plasticity, is responsible for a homeostatic regulation of electrical activity in the brain.16 Beyond neural networks, adaptive networks are used in communication systems17 and for modeling complex behavior in social, climate, or ecological systems.5,18,19

In realistic systems, the adaptation rules are likely to be heterogeneous. For example, the adaptation rule in neuronal networks may depend on the distance between interacting neurons.20,21 So far, little is known about the dynamical effects induced by heterogeneous adaptation.22,23 The question arises to what extent heterogeneous adaptation produces new functionality and, thus, represents an important element of system behavior. In this work, we answer this question by showing that heterogeneous adaptation can be a determining ingredient for producing new collective network function. In particular, we describe how heterogeneous adaptation induces the phenomenon of recurrent synchronization.

Recurrent synchronization, which we study here, is a macroscopic phenomenon in dynamical networks involving the recurrent switching between synchronous behavior, represented, e.g., by phase-locking, and asynchronous behavior, represented, e.g., by frequency clustering. During recurrent synchronization, a macroscopic observable exhibits bursting behavior, whereas the individual nodes are not required to burst at the microscopic level. In our study, we consider the nodes to have simple oscillatory dynamics. Therefore, recurrent synchronization as a macroscopic effect contrasts bursting found in neuronal networks,24–27 where single neurons can have alternating periods of quiescence and fast spiking.

Other studies28–32 have reported the occurrence of a similar phenomenon, called collective bursting, in neuronal networks. An important difference between Refs. 28–32 and our work, however, is that in these studies the collective bursting is induced and observable on the microscopic level of individual neurons, whereas recurrent synchronization is not manifested by bursting on the microscopic level. Another adaptation-induced switching between phase-locking and periodic oscillation has been observed in Ref. 33 and is related to fold-homoclinic bursting.34 In a two-population excitatory-inhibitory network of quadratic integrate-and-fire neurons, a dynamical phenomenon was observed showing bursts of high-frequency and high-amplitude activity at a slow burst rate.35 In contrast to the recurrent synchronization considered here, the populations change their individual dynamics from oscillatory to steady state during the episodes of high activity and quiescence. This phenomenon of activity bursting has also been reported in a population of excitable units adaptively coupled to a pool of resources.36

In a broader sense, the mechanism for recurrent synchronization, which we report here, is based on a recurrent slow dynamics of hidden variables that we relate to adaptive coupling weights between dynamical populations in this work. When adaptation is heterogeneous (asymmetric), the hidden variables lead to re-emergence of episodes of synchronization and desynchronization. Effects such as converse symmetry breaking37,38 or asymmetry-induced synchronization39 have been described only very recently and are about to change our understanding of “good synchronization conditions” for dynamical systems. In this light, recurrent synchronization adds a new layer of dynamical complexity that can be induced by asymmetry and, hence, heterogeneity, in complex dynamical networks.

In our study, we use models with different levels of hierarchical organization. While two neurons with adaptive coupling constitute a sufficient minimal model, the other models show that recurrent synchronization can occur in dynamical systems of varying complexity. The most complex system consists of two populations of Hodgkin–Huxley neurons with different adaptation rules within and between the populations.

The adaptation rules are given by spike-timing-dependent plasticity with different adaptation functions. We also propose reduced phenomenological models of two coupled Hodgkin–Huxley neurons as well as of phase oscillators equipped with slowly evolving coupling weights. We show that the reduced models are capable of describing the main dynamical features arising in the intra- and inter-population dynamics. While the more complex model is studied numerically, the reduced phenomenological models are analyzed in more detail by methods from geometric singular perturbation, averaging, and bifurcation theories. In this work, we propose a general methodology to study recurrent synchronization in systems with arbitrarily complex dynamical nodes and continuous as well as noncontinuous adaptation rules.

Our study reveals the collective mechanisms behind the appearance of recurrent synchronization. In particular, we explain the importance of the following ingredients for the emergence of recurrent synchronization: slow adaptation, i.e., a timescale separation between the adaptation and the individual neuronal dynamics; asymmetry of the adaptation rules; and recurrent (periodic/spiking) dynamics of the individual neurons. We hypothesize that asymmetric adaptivity might play a fundamental role in the emergence and impairment of neuronal pattern generators.

The dynamics for the network of synaptically coupled Hodgkin–Huxley neurons are given by the following equations:40,41
C \dot{V}_i = I_i - g_{Na} m_i^3 h_i (V_i - E_{Na}) - g_K n_i^4 (V_i - E_K) - g_L (V_i - E_L) - \frac{V_i - E_r}{N} \sum_{j=1}^{N} \kappa_{ij} s_j,
\dot{m}_i = \alpha_m(V_i)(1 - m_i) - \beta_m(V_i)\, m_i,
\dot{h}_i = \alpha_h(V_i)(1 - h_i) - \beta_h(V_i)\, h_i,
\dot{n}_i = \alpha_n(V_i)(1 - n_i) - \beta_n(V_i)\, n_i,
\dot{s}_i = \frac{5(1 - s_i)}{1 + e^{-(V_i + 3)/8}} - s_i.
(1)
Here, V_i is the membrane potential of the ith neuron, and E_Na = 50 mV, E_K = -77 mV, and E_L = -54.4 mV are the reversal potentials of the different ion channels. The dynamic variables m_i, h_i, and n_i are gating variables describing the probability of open ion channels for sodium activation, sodium inactivation, and potassium activation, respectively. The membrane capacitance is given by C = 1 μF/cm². We set the reversal potential of the coupling to E_r = 20 mV, which corresponds to excitatory neurons. The rate functions α_x and β_x, x ∈ {m, h, n}, are given as
\alpha_m(V) = \frac{0.1 V + 4}{1 - e^{-(0.1 V + 4)}}, \quad \beta_m(V) = 4\, e^{-(V + 65)/18},
\alpha_h(V) = 0.07\, e^{-(V + 65)/20}, \quad \beta_h(V) = \frac{1}{1 + e^{-(0.1 V + 3.5)}},
\alpha_n(V) = \frac{0.01 V + 0.55}{1 - e^{-(0.1 V + 5.5)}}, \quad \beta_n(V) = 0.125\, e^{-(V + 65)/80}.
The corresponding conductance parameters are set to g_Na = 120 mS/cm², g_K = 36 mS/cm², and g_L = 0.3 mS/cm². The constant input currents are randomly chosen from the intervals [4.99 μA, 5.01 μA] and [12.99 μA, 13.01 μA] for populations 1 and 2, respectively. The different input currents I_i account for the different spiking rates of the two populations and of the individual neurons.
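For concreteness, a minimal Python sketch of the single-neuron right-hand side of Eq. (1) is given below. It is not the authors' code; the function name hh_rhs is ours, the synaptic drive (1/N) Σ_j κ_ij s_j is passed in as a single precomputed argument, and the rate functions follow the standard Hodgkin–Huxley form reconstructed above.

import numpy as np

# Conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2); values from Sec. II A.
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4
E_R, C = 20.0, 1.0

def alpha_m(V): return (0.1 * V + 4.0) / (1.0 - np.exp(-(0.1 * V + 4.0)))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(0.1 * V + 3.5)))
def alpha_n(V): return (0.01 * V + 0.55) / (1.0 - np.exp(-(0.1 * V + 5.5)))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def hh_rhs(state, I_ext, syn_drive):
    """Right-hand side of Eq. (1) for one neuron.
    state = (V, m, h, n, s); syn_drive = (1/N) * sum_j kappa_ij * s_j (precomputed)."""
    V, m, h, n, s = state
    dV = (I_ext
          - G_NA * m**3 * h * (V - E_NA)
          - G_K * n**4 * (V - E_K)
          - G_L * (V - E_L)
          - (V - E_R) * syn_drive) / C
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    ds = 5.0 * (1.0 - s) / (1.0 + np.exp(-(V + 3.0) / 8.0)) - s
    return np.array([dV, dm, dh, dn, ds])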
The coupling terms include the synaptic activity s_j of the presynaptic neuron, multiplied by the coupling weight κ_ij, which changes discontinuously due to synaptic plasticity every time the ith or jth neuron exhibits a spike. Whenever a neuron spikes, the coupling weights between the neurons receive an update according to κ_ij → κ_ij + δ W(ΔT_ij) (δ = 0.005), which depends on the timing difference ΔT_ij = t_i − t_j between the last spiking events t_i and t_j of the ith and jth neuron, respectively. We identify a spiking event by checking whether the voltage of the neuron in question exceeds 0 in the given time step. For the two populations, we use different update functions W. For population 1 (coupling weights of all connections with postsynaptic neurons in population 1), the update function is anti-Hebbian42 and of the following form:
W_1^{\mathrm{STDP}}(\Delta T_{ij}) =
\begin{cases}
-A_1 \exp\!\left(-\dfrac{\Delta T_{ij}}{\tau_1}\right) \left(\dfrac{\Delta T_{ij}\, e}{10\, \tau_1}\right)^{10} & \text{if } \Delta T_{ij} > 0, \\[1ex]
A_2 \exp\!\left(\dfrac{\Delta T_{ij}}{\tau_2}\right) \left(\dfrac{-\Delta T_{ij}\, e}{10\, \tau_2}\right)^{10} & \text{if } \Delta T_{ij} < 0, \\[1ex]
0 & \text{if } \Delta T_{ij} = 0.
\end{cases}
(2)
The parameters used for Figs. 2 and 6 are A_1 = 1.17, A_2 = 0.4, τ_1 = 0.25, τ_2 = 1.1, and e = exp(1). For population 2, the update function is symmetric43 and of the following form:
W_2^{\mathrm{STDP}}(\Delta T_{ij}) = \gamma \left( c_p\, e^{-|\Delta T_{ij}|/\tau_p} - c_d\, e^{-|\Delta T_{ij}|/\tau_d} + \frac{1}{30} \right),
(3)
with c_p = 1.5, c_d = 0.53, τ_p = 1.8, and τ_d = 5. To ensure bounded coupling weights, we restrict them to the interval [0, 1.5]. The two plasticity rules exemplify the diversity of adaptation rules known for nervous systems. We note, however, that the dynamical system introduced in this work can be conceived as a toy model possessing complex dynamical features rather than a model of a realistic system.
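A compact sketch of the two update rules and the clipped weight update reads as follows. The function names, the default value γ = 1.0, and the placement of the minus sign in the anti-Hebbian branch are our assumptions; only the parameter values and the clipping interval [0, 1.5] are taken from the text.

import numpy as np

E = np.e  # e = exp(1)

def W1_antihebbian(dT, A1=1.17, A2=0.4, tau1=0.25, tau2=1.1):
    """Anti-Hebbian rule, Eq. (2): depression for dT > 0, potentiation for dT < 0 (assumed sign convention)."""
    if dT > 0:
        return -A1 * np.exp(-dT / tau1) * (dT * E / (10.0 * tau1)) ** 10
    if dT < 0:
        return A2 * np.exp(dT / tau2) * (-dT * E / (10.0 * tau2)) ** 10
    return 0.0

def W2_symmetric(dT, gamma=1.0, cp=1.5, cd=0.53, taup=1.8, taud=5.0):
    """Symmetric rule, Eq. (3); depends only on |dT|. The default gamma = 1.0 is an assumption."""
    a = abs(dT)
    return gamma * (cp * np.exp(-a / taup) - cd * np.exp(-a / taud) + 1.0 / 30.0)

def apply_update(kappa, dT, W, delta=0.005, bounds=(0.0, 1.5)):
    """kappa -> kappa + delta * W(dT), clipped so that the weight stays bounded."""
    return float(np.clip(kappa + delta * W(dT), *bounds))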
FIG. 1.

Recurrent synchronization in a dynamical network. Panel a schematically represents a network of coupled nodes with time-dependent connections κ_ij(t) (the grayscale of the edges denotes the strength of the connection). Panel b displays the dynamical behavior of a single node, which is periodic and does not exhibit bursting. Panel c shows the bursting behavior of a macroscopic observable; it contains alternating episodes of steady and oscillatory dynamics. Panel d shows how recurrent synchronization can be represented as a periodic orbit traversing regions of synchronization and desynchronization with respect to hidden variables.

FIG. 2.

Recurrent synchronization in a network of Hodgkin–Huxley neurons. Panel a illustrates schematically the structure of the two-population network of Hodgkin–Huxley neurons with asymmetric spike-timing-dependent plasticity (shown in panel b). The colors of the network links refer to the adaptation rule implemented within and between the populations. Panel c shows the time series of the order parameter R (blue, left scale) displaying multiple cycles of bursting and phase-locked states. Panel d shows a zoom into a bursting episode. In panel e, the firing density is shown in blue for the whole network, in brown for population 1, and in green for population 2. Panels f and g display exemplary raster plots for the phase-locked and bursting episodes, respectively, with neurons of the first population (brown) and second population (green). For simplicity, we consider an all-to-all coupling structure on which the coupling weights evolve. The initial conditions for the coupling weights and the starting voltages were randomly chosen from the intervals [0, 0.5] and [−70 mV, 20 mV], respectively. The constant input currents are randomly chosen from the intervals [4.99 μA, 5.01 μA] and [12.99 μA, 13.01 μA] for populations 1 and 2. All other parameters are provided in Sec. II A.

As each population in the system described above is likely to synchronize, we also consider a simpler paradigmatic model for the dynamics of the synchronous population: the phase oscillator.44,45 Taking into account the adaptation, the model of two adaptively coupled phase oscillators reads46,47
\dot{\phi}_1 = \omega_1 - \kappa_1 \sin(\phi_1 - \phi_2 + \alpha),
(4)
\dot{\phi}_2 = \omega_2 - \kappa_2 \sin(\phi_2 - \phi_1 + \alpha),
(5)
\dot{\kappa}_1 = \epsilon \left( -\kappa_1 - a \sin(\phi_1 - \phi_2) \right),
(6)
\dot{\kappa}_2 = \epsilon \left( -\kappa_2 - b \sin(\phi_2 - \phi_1 + \beta) \right),
(7)
where ϕ_i ∈ [0, 2π) represents the phase of the oscillatory dynamics of each population. An increase of the phase from 0 to 2π corresponds to one oscillation period. The natural frequency of the ith oscillator is denoted by ω_i, and the adaptive coupling weights are given by κ_i. The phase lag parameter α is included to account for a small information transmission delay.48 The parameters a and b scale the influence of the phase dynamics on the coupling weights. An additional asymmetry in the adaptation rules for κ_1 and κ_2 is introduced by the parameter β.
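A minimal sketch of a direct simulation of Eqs. (4)–(7) with simple Euler stepping is given below. The parameter values follow Fig. 8 where available (ϵ, α, β, and ω = ω_1 − ω_2 = 0.1); the individual frequencies ω_1, ω_2, the initial weights, the step size, and the simulation length are our own illustrative choices.

import numpy as np

def simulate(omega1=0.6, omega2=0.5, a=0.5, b=0.07, alpha=0.25 * np.pi,
             beta=0.5 * np.pi, eps=1e-4, dt=0.01, T=2e5, k1=0.5, k2=0.5):
    """Euler integration of the two adaptively coupled phase oscillators, Eqs. (4)-(7)."""
    n = int(T / dt)
    phi1, phi2 = 0.0, 0.0
    out = np.empty((n, 3))  # columns: phase difference, kappa_1, kappa_2
    for i in range(n):
        dphi1 = omega1 - k1 * np.sin(phi1 - phi2 + alpha)
        dphi2 = omega2 - k2 * np.sin(phi2 - phi1 + alpha)
        dk1 = eps * (-k1 - a * np.sin(phi1 - phi2))
        dk2 = eps * (-k2 - b * np.sin(phi2 - phi1 + beta))
        phi1, phi2 = phi1 + dt * dphi1, phi2 + dt * dphi2
        k1, k2 = k1 + dt * dk1, k2 + dt * dk2
        out[i] = ((phi1 - phi2) % (2 * np.pi), k1, k2)
    return out

Because of the small time scale separation parameter ϵ, the simulation time has to cover many slow periods, which is precisely what motivates the reduced slow system introduced below.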
To measure the collective motion of the whole network, we introduce an observable that provides an average of the individual node dynamics. The behavior of each dynamical node x i ( t ) can be mapped to a motion on the unit circle represented by a phase variable ϕ i ( t ).44 Particularly, for Hodgkin–Huxley neurons considered in Sec. II A, each node possesses a sequence of spiking events { t i , 1 , t i , 2 , }. The mapping used in this study is given by
\phi_i(t) = 2\pi \, \frac{t - t_{i,k}}{t_{i,k+1} - t_{i,k}}
for t_{i,k} \le t < t_{i,k+1}. The collective observable is then defined by the Kuramoto order parameter
R(t) = \frac{1}{N} \left| \sum_{j=1}^{N} e^{\mathrm{i}\, \phi_j(t)} \right|.
(8)
If, for a certain time t, the phases are incoherently spread over the interval [0, 2π), the order parameter is close to zero, while it is unity if all phases are equal. Furthermore, by looking at the temporal behavior of R(t), we are able to distinguish between phase-locking, which corresponds to an order parameter that is approximately constant in time, i.e., R(t) ≈ const, and the loss of a phase relation, which corresponds to an order parameter oscillating between incoherence (small values of R) and coherence (R ≈ 1).

For the network of Hodgkin–Huxley neurons, we consider the firing density as a second macroscopic observable. The firing density is determined by dividing the observation period into time bins of width 3000 ms and counting the spikes in each time bin. Afterward, the count per time bin is divided by the number of neurons in the group.
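Both observables can be computed directly from recorded spike times. The sketch below uses our own helper names and assumes that spikes[i] holds the sorted spike times of neuron i; the bin width of 3000 ms is taken from the text.

import numpy as np

def phase_from_spikes(spikes_i, t):
    """Linear phase interpolation between consecutive spikes (definition preceding Eq. (8))."""
    k = np.searchsorted(spikes_i, t, side="right") - 1
    if k < 0 or k + 1 >= len(spikes_i):
        return np.nan  # t lies outside the recorded spiking interval of this neuron
    t_k, t_k1 = spikes_i[k], spikes_i[k + 1]
    return 2.0 * np.pi * (t - t_k) / (t_k1 - t_k)

def order_parameter(spikes, t):
    """Kuramoto order parameter R(t), Eq. (8)."""
    phases = np.array([phase_from_spikes(s, t) for s in spikes])
    return np.abs(np.mean(np.exp(1j * phases)))

def firing_density(spikes, t_max, bin_width=3000.0):
    """Spike count per time bin, divided by the number of neurons in the group."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    counts = sum(np.histogram(s, bins=edges)[0] for s in spikes)
    return counts / len(spikes)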

In this section, we introduce the phenomenon of recurrent synchronization, where certain macroscopic observables display bursting behavior while the individual nodes emit single periodic spikes. We show that such a behavior is caused by alternating episodes of synchronization and desynchronization in a dynamical network.

The reported phenomenon can be described in general terms, omitting nonessential details of specific model implementation. The corresponding Fig. 1 summarizes the main phenomenology. We distinguish between the microscopic (individual) and the macroscopic (collective) scales. While the individual nodes show periodic oscillatory behavior [Fig. 1(b)], the collective motion exhibits an alternation between time intervals of high activity, i.e., fast changes of the collective observable, and time intervals of low activity [Fig. 1(c)], i.e., comparatively slow changes of the collective observable.

The mechanism for the recurrent intervals of synchronization and desynchronization can be explained by additional, hidden, slow variables [Fig. 1(d)]. Such hidden variables are not necessarily observable, but they play the key role as the internal control variables that determine the synchronization level. As a result of a subtle interplay between the nodal dynamics and the hidden variables, alternating transitions between synchronized (phase-locked state) and desynchronized episodes (bursting state) can be induced [Fig. 1(d)]. As described below, the slow hidden variables naturally arise in networks with slow network adaptivity.

The recurrent synchronization phenomenon, which we report here, differs from those observed in Refs. 28–32,34,49,50, where single neurons possess alternating periods of spiking and rest. In our case, the nodes exhibit periodic behavior (tonic spiking) at all times, which makes the observed phenomenon even more surprising.

Models of interacting populations are well-known paradigms for studying dynamics in many real-world systems on the mesoscopic scale.51,52 These models have particular importance in neuroscience for the modeling of interacting brain regions or other functional units53 and are also used in social sciences with autonomous agents interacting with each other based on their population affiliation.54,55

To show recurrent synchronization in a complex dynamical network setup, we implement two populations of Hodgkin–Huxley neurons with spike-timing-dependent plasticity (STDP) (Fig. 2), where the two populations differ in the adaptation functions of the coupling weights and in the input current, as described in Sec. II A and shown in Fig. 2(a).

Figures 2(c) and 2(e) show the temporal behavior of two macroscopic variables: the order parameter R(t) that measures the level of synchronization and the firing density measuring the mean level of activity of the whole system and of the individual populations, respectively. The occurrence of several transitions from a nearly constant (phase-locking episode) to a strongly oscillating order parameter (bursting episode) is clearly visible. Figure 2(d) presents a zoom into a bursting episode, showing many oscillations within a small time interval of 1 s. Figure 2(e) also displays the firing densities of the two populations, showing strong differences during the episodes of a fast oscillating order parameter. Figures 2(f) and 2(g) depict raster plots for the two dynamical episodes. It is clear that the collective bursting phenomenon in the order parameter is not due to bursting of the individual neurons, which show consistent tonic spiking at all times. The phenomenon is caused by frequency desynchronization between the two populations, which can be seen in the different inter-spike intervals of the populations. These results are also robust to noisy inputs, which we model with random α-spikes, to higher heterogeneity in the input currents, and to heterogeneity in the update functions of the coupling weights; see Sec. III B for details.

Interestingly, a sharp drop in spiking activity coincides with both the onset and the termination of the bursting phase.

To investigate the robustness of recurrent synchronization in two populations of Hodgkin–Huxley neurons, we first simulate a network of N = 200 neurons where every neuron receives an independent input in the voltage dynamics. We choose an α-train input of the form56 
I_i^{\mathrm{input}}(t) = I\, \left( V_r - V_i(t) \right) \sum_{\tau_{i,k} < t} \alpha\, (t - \tau_{i,k})\, e^{-\alpha (t - \tau_{i,k})},
(9)
modeling random post-synaptic potentials arriving at the ith neuron, with the time intervals between two potentials distributed as N(14 ms, 4 ms). The time evolution of the order parameter for I = 0.01 can be seen in Fig. 3. Recurrent synchronization is still present, but the oscillations in the phase-locked periods have larger amplitudes than in the noiseless case shown in Fig. 2(c). Furthermore, the results shown in Fig. 2(c) persist for increased heterogeneity in the input current each neuron receives, which is shown in Fig. 4. Here, we show the order parameter of the network for the last 300 s of a 600 s long simulation for non-homogeneous input currents uniformly drawn from the intervals [4.5 μA, 5.5 μA] and [12.5 μA, 13.5 μA] for populations 1 and 2, respectively. Recurrent synchronization is still present in this more heterogeneous system and over the longer time interval.
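A sketch of how the α-train input (9) can be generated is given below. The arrival-time statistics N(14 ms, 4 ms) and the amplitude I follow the text; the values of V_r and of the rate constant α are not specified there and are placeholders for illustration, and the function names are ours.

import numpy as np

def alpha_train_current(t, V_i, arrival_times, I=0.01, V_r=20.0, alpha=1.0):
    """Noisy input of Eq. (9) at time t; V_r and alpha are assumed values for illustration."""
    past = arrival_times[arrival_times < t]
    kernel = alpha * (t - past) * np.exp(-alpha * (t - past))
    return I * (V_r - V_i) * np.sum(kernel)

def draw_arrival_times(t_max, mean=14.0, std=4.0, rng=None):
    """Random arrival times with inter-arrival intervals ~ N(14 ms, 4 ms), rejecting non-positive draws."""
    rng = np.random.default_rng() if rng is None else rng
    times, t = [], 0.0
    while t < t_max:
        dt = rng.normal(mean, std)
        if dt > 0:
            t += dt
            times.append(t)
    return np.array(times)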
FIG. 3.

Recurrent synchronization in a network of Hodgkin–Huxley neurons with an independent random input. Time evolution of the order parameter for 200 Hodgkin–Huxley neurons with noisy input from an α-train defined in Eq. (9). Initial conditions for the coupling weights and voltages are randomly chosen from the intervals [0, 0.5] and [−70 mV, 20 mV], respectively. The input currents are 5 and 13 μA for populations 1 and 2, and the noise amplitude for the α-train is I = 0.01. The remaining parameters are as described in Sec. II A.

FIG. 4.

Recurrent synchronization in a network of Hodgkin–Huxley neurons with heterogeneous constant currents. Time evolution of the order parameter for 200 Hodgkin–Huxley neurons after a transient. The input currents for populations 1 and 2 are chosen from the intervals [4.5 μA, 5.5 μA] and [12.5 μA, 13.5 μA], respectively. The initial conditions for the coupling weights and the starting voltages were randomly chosen from the intervals [0, 0.5] and [−70 mV, 20 mV].


Recurrent synchronization likewise persists for heterogeneity in the update functions of the coupling weights. In Fig. 5, the order parameter for a network of 200 neurons with distributed update functions is shown. Recurrent synchronization is again observable even for the case of distributed input currents, update functions, and initial conditions, demonstrating the overall robustness of recurrent synchronization in heterogeneous setups.

FIG. 5.

Recurrent synchronization in a network of Hodgkin–Huxley neurons with heterogeneous coupling-weight update functions. Time evolution of the order parameter for 200 Hodgkin–Huxley neurons with τ_1 from W_1 distributed on the interval [0.24, 0.26] and γ from W_2 distributed on the interval [0.95, 1.05]. Initial conditions for the coupling weights and voltages are randomly chosen from the intervals [0, 0.5] and [−70 mV, 20 mV], respectively. The input currents for populations 1 and 2 are chosen from the intervals [4.99 μA, 5.01 μA] and [12.99 μA, 13.01 μA], respectively. The remaining parameters are as described in Sec. II A.

FIG. 6.

Recurrent synchronization in a system of two Hodgkin–Huxley neurons with asymmetric synaptic plasticity. Panel a shows a time series of the order parameter R, which exhibits recurrent synchronization. Panels b and c provide zooms into the time series of the order parameter for asynchronous spiking and at the transition from synchronous to asynchronous spiking, respectively. In panel d, the phase portrait of the reduced system in the coupling variables (κ_1, κ_2) is shown. Black lines show sample trajectories (flow lines) of the reduced system. The green curve is the projection of the whole system's trajectory onto the (κ_1, κ_2)-plane. The red line marks the boundary between the synchronous (light pink) and asynchronous (light blue) regimes. Panel e shows three different asymptotic states identified numerically in the parameter plane (τ_1, γ) with trajectories starting at (κ_1, κ_2) = (0.15, 1). Asynchronous spiking is marked in blue, recurrent synchronization is shown in yellow, and phase-locked spiking is depicted in red. The white areas indicate states that could not be categorized numerically. All parameters used for the simulation are given in Sec. II A.

The simulation in Fig. 2 indicates that the dynamics within each population is rather coherent, while the dynamics between the populations can be incoherent. In order to deepen the understanding of the emergence of recurrent synchronization, we consider a reduced model of the following general form:
\dot{x}_1 = F_1(x_1, x_2, \kappa_1),
(10)
\dot{x}_2 = F_2(x_2, x_1, \kappa_2),
(11)
\dot{\kappa}_1 = \epsilon\, W_1(x_1, x_2, \kappa_1),
(12)
\dot{\kappa}_2 = \epsilon\, W_2(x_2, x_1, \kappa_2),
(13)
where the mean-field variables x_1 and x_2 describe the coherent dynamics within populations 1 and 2, respectively, and κ_1 and κ_2 are the coupling weights representing the mean inter-population connections. We consider a small parameter 0 < ϵ ≪ 1, which enables the separation of time scales between the fast dynamics of the nodes and the slow dynamics of the coupling weights. The adaptation dynamics is assumed to be on the same slow timescale 1/ϵ. The functions F_{1,2} govern the dynamical behavior of the corresponding populations, and W_{1,2} determine the adaptation rules of the coupling weights.

Using the explicit splitting of the time scales (smallness of ϵ), we develop a further reduction of the mean-field equations (10)–(13). In particular, as the coupling weights (κ_1, κ_2) evolve slowly, we consider them as parameters for the dynamics of the oscillators. Depending on the coupling weights, the coupled oscillator system \dot{x}_1 = F_1(x_1, x_2, \kappa_1), \dot{x}_2 = F_2(x_2, x_1, \kappa_2) shows either synchronized or desynchronized motion.

Denoting by ⟨·⟩ the temporal average over a time scale that is much longer than the fast changes of the variables x_{1,2}(t) but shorter than the timescale 1/ϵ of the adaptation, we obtain ⟨W_1(x_1, x_2, κ_1)⟩ = \overline{W}_1(κ_1, κ_2), ⟨W_2(x_2, x_1, κ_2)⟩ = \overline{W}_2(κ_1, κ_2), and ⟨\dot{κ}_{1,2}⟩ ≈ \dot{κ}_{1,2}. This leads to the following two-dimensional reduced model:
\kappa_1' = \overline{W}_1(\kappa_1, \kappa_2), \quad \kappa_2' = \overline{W}_2(\kappa_1, \kappa_2),
(14)
describing the dynamics of the coupling weights κ_1 and κ_2 on the slow time scale. In system (14), we have additionally rescaled time as t_s = ϵt and, thus, removed the factor ϵ. The prime in κ_1' and κ_2' denotes the derivative with respect to the new slow time t_s. The coupling weights κ_1 and κ_2 play the role of hidden variables governing the transitions between synchronization and desynchronization, as shown schematically in Fig. 1(d). System (14) allows for a detailed bifurcation analysis of the mechanism behind the emergence of recurrent synchronization. Clearly, for recurrent synchronization, the dynamics of the hidden variables must exhibit recurrent motion. As shown in the following, such a recurrent motion can be observed when the adaptation functions of the two dynamical populations are non-symmetric.

As system (14) describes the (hidden) dynamics with respect to the slow time, it no longer contains the small parameter ϵ; hence, the time scale separation is removed, and the system can be computed efficiently.

In Secs. IV and V, we derive the models for the slowly evolving coupling weights for two specific examples: two adaptively coupled phase oscillators and the more complex case of two coupled Hodgkin–Huxley neurons equipped with spike-timing-dependent plasticity. The latter application shows, moreover, the generality of our approach with regard to complex local dynamics and adaptation rules.

From the mathematical point of view, the reduction is based on geometric singular perturbation theory57,58 and averaging59 for synchronized and desynchronized regimes, respectively. Both theoretical approaches are implemented analytically as well as semi-analytically.

The averaging of the local dynamics is straightforward when the dynamics of the subsystems (10) and (11) (with fixed κ_1 and κ_2) converges to an equilibrium (x_1(κ_1, κ_2), x_2(κ_1, κ_2)). The procedure is known as "adiabatic elimination," and subsystems (10) and (11) are known as the "layer equation" or "fast subsystem."57 If the fast subsystem has one or more equilibria, then they satisfy the following equations:
0 = F_1\big(x_1(\kappa_1, \kappa_2), x_2(\kappa_1, \kappa_2), \kappa_1\big), \quad 0 = F_2\big(x_2(\kappa_1, \kappa_2), x_1(\kappa_1, \kappa_2), \kappa_2\big),
(15)
where the union of all solutions (x_1, x_2) forms the so-called critical manifold. If an equilibrium (x_1(κ_1, κ_2), x_2(κ_1, κ_2)) is stable with respect to the fast subsystem (10) and (11), then the solutions rapidly converge to this equilibrium, and the reduced system (14) is simply obtained by substituting this equilibrium point into (12) and (13),
\kappa_1' = \overline{W}_1(\kappa_1, \kappa_2) = W_1\big(x_1(\kappa_1, \kappa_2), x_2(\kappa_1, \kappa_2), \kappa_1\big), \quad \kappa_2' = \overline{W}_2(\kappa_1, \kappa_2) = W_2\big(x_2(\kappa_1, \kappa_2), x_1(\kappa_1, \kappa_2), \kappa_2\big).
(16)
If the fast subsystem (10) and (11) does not converge to a stable equilibrium, the adiabatic elimination cannot be applied. In such a case, direct averaging must be used. If, for fixed ( κ 1 , κ 2 ), the solution to (10) and (11) converges to a periodic state x 1 ( t ; κ 1 , κ 2 ), x 2 ( t ; κ 1 , κ 2 ) with a period T ( κ 1 , κ 2 ), then this averaging procedure leads to
\kappa_1' = \frac{1}{T} \int_0^T W_1\big(x_1(t), x_2(t), \kappa_1\big)\, dt, \quad \kappa_2' = \frac{1}{T} \int_0^T W_2\big(x_2(t), x_1(t), \kappa_2\big)\, dt.
(17)
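In practice, when the period T(κ_1, κ_2) is not known in closed form, the averages in (17) can be estimated by simulating the fast subsystem with frozen coupling weights over a window covering many fast periods. A generic sketch under these assumptions follows; fast_rhs, W1, and W2 are stand-ins for the user-supplied functions F_{1,2} and the adaptation rules, and all numerical tolerances are our own choices.

import numpy as np

def averaged_rhs(k1, k2, fast_rhs, W1, W2, x0, dt=1e-3, T_avg=1e3, T_transient=1e2):
    """Estimate (W1_bar, W2_bar) as in Eq. (17) by Euler-simulating the fast subsystem
    with frozen (k1, k2) and time-averaging the adaptation functions over T_avg."""
    x = np.array(x0, dtype=float)
    # discard a transient so that the fast subsystem settles on its attractor
    for _ in range(int(T_transient / dt)):
        x = x + dt * fast_rhs(x, k1, k2)
    acc1 = acc2 = 0.0
    n = int(T_avg / dt)
    for _ in range(n):
        x = x + dt * fast_rhs(x, k1, k2)
        acc1 += W1(x, k1)
        acc2 += W2(x, k2)
    return acc1 / n, acc2 / n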

When the dynamical populations in Fig. 2 are synchronized, the collective dynamics of each population can be approximated by a single Hodgkin–Huxley neuron. As a result, the dynamical network can be reduced to two coupled Hodgkin–Huxley neurons with asymmetric synaptic plasticity. The synaptic weight κ_1 = κ_{12} is updated according to the STDP rule with the update function W_1, and κ_2 = κ_{21} with W_2 (see Sec. II A), as in the case of the two populations in Fig. 2.

For such a system of two coupled Hodgkin–Huxley neurons with STDP, the reduction procedure from Sec. III C can be applied numerically by using the averaging method in (17) for sufficiently large T.

In practice, the slow flow for the two Hodgkin–Huxley neurons in Fig. 6(d) is obtained using a grid in the plane of the coupling weights (κ_1, κ_2) and estimating the averages in (17) for each grid point. For this, the distribution ρ(ξ) of the spike timing differences ΔT is first computed. Then, the right-hand sides of the reduced system are estimated as
\overline{W}_i(\kappa_1, \kappa_2) = \int W_i^{\mathrm{STDP}}(\xi)\, \rho(\xi)\, d\xi.
(18)
Note that any scaling factors in (18), including δ, can be ignored as they can be scaled out by time rescaling, similar to the rescaling of the small parameter ϵ.

Importantly, the fast system has to be simulated only once in order to find the distributions ρ(ξ) for every grid point in the (κ_1, κ_2) plane. Once these data are created, the slow flow can be computed using (18) for any update function without simulating the fast system again. This makes the study of the effects of different plasticity rules much more feasible.
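The estimate (18) then reduces to a weighted sum over a histogram of the recorded spike timing differences. A sketch, assuming dT_samples holds the timing differences ΔT collected at one (κ_1, κ_2) grid point and W_update is one of the update functions from Sec. II A (helper names are ours):

import numpy as np

def slow_rhs_from_samples(dT_samples, W_update, bins=200):
    """Estimate W_bar = integral of W(xi) * rho(xi) d xi from sampled spike timing differences, Eq. (18)."""
    density, edges = np.histogram(dT_samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    widths = np.diff(edges)
    values = np.array([W_update(c) for c in centers])
    # Riemann sum of W(xi) * rho(xi) over the histogram support
    return float(np.sum(values * density * widths))

def slow_flow_grid(samples_grid, W1_update, W2_update):
    """Evaluate the reduced vector field (14) on a (kappa_1, kappa_2) grid;
    samples_grid[(i, j)] holds the timing differences recorded for that grid point."""
    return {key: (slow_rhs_from_samples(s, W1_update),
                  slow_rhs_from_samples(s, W2_update))
            for key, s in samples_grid.items()}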

Figure 6(a) displays recurrent synchronization with a zoom into an asynchronous episode shown in Fig. 6(b). The reduced flow for the coupling weights (14) is presented in Fig. 6(d) along with the projection of the simulated trajectory onto the ( κ 1 , κ 2 )-plane. The recurrent synchronization, found in simulations, corresponds to a periodic trajectory in the reduced model (green line). The trajectory in Fig. 6(d) enters synchronous and asynchronous regimes recurrently. Moreover, the averaged flow (black lines) and the projected flow based on the simulation of the whole system are in very good agreement, and only small deviations near the boundary can be observed. Therefore, our proposed reduction method that eliminates the fast dynamics of the oscillators while keeping the slow dynamics of the coupling weights can also be applied to discontinuous adaptation functions.

A closer view of the transition from synchronization to desynchronization can be seen in Fig. 6(c). At the end of a synchronized episode, the oscillations of the order parameter gradually build up from small to large amplitudes. This observation could be a useful indicator for the onset of bursting events, where small variations of the order parameter may be interpreted as precursors.

A numerical bifurcation diagram for the parameters τ_1 (parameter of W_1) and γ (parameter of W_2) is presented in Fig. 6(e). The parameter regions for which recurrent synchronization is observed are shown in yellow. The emergence of recurrent synchronization for different parameter regimes is clearly visible, with even two unconnected parametric regions. Therefore, the phenomenon of recurrent synchronization is observed for a wide range of parameters. The periodic motion in the hidden variables shown in Fig. 6(d) is induced by asymmetric adaptation [see Fig. 2(b)] and is not found for symmetric adaptation rules. For comparison, Fig. 7 illustrates how the system of two Hodgkin–Huxley neurons evolves when the update functions for κ_1 and κ_2 are identical. For the considered parameter values, we observe no recurrent synchronization. Shown is the trajectory of the whole system (in green) projected onto the (κ_1, κ_2)-plane with both update functions being equal to W_1 defined in Eq. (2) [Fig. 7(a)] and to W_2 defined in Eq. (3) [Fig. 7(b)]. These trajectories are in very good agreement with the reduced flow based on the averaging approach described at the beginning of this section (black lines with arrows). Both the trajectory and the flow show no sign of a periodic orbit traversing the boundary between the synchronous and asynchronous spiking regimes, which would be the signature of recurrent synchronization in this representation.

FIG. 7.

Two Hodgkin–Huxley neurons with identical update functions do not display recurrent synchronization. (a) and (b): reduced flow for two Hodgkin–Huxley neurons with identical update functions of the coupling weights; (a) displays the case of both coupling weights being updated by W_1 given by Eq. (2); (b) shows the case of identical update functions W_2 given by Eq. (3). The green curve is the projection of the trajectory of the whole system [starting at (κ_1, κ_2) = (0.05, 1) in both cases] onto the (κ_1, κ_2)-plane, and the red line marks the boundary between the synchronous (light pink) and asynchronous (gray blue) regions. (c)–(f): distributions of the timing difference between the spikes of neurons 1 and 2 (relative to the spiking times of neuron 1) for fixed coupling weights κ_2 = 0.1 and κ_1 = 0.05, 0.1, 0.2, and 0.25, which corresponds to the change from an asynchronous to a synchronous regime.


Figure 7 also displays exemplary distributions of the spike timing differences: Figs. 7(c) and 7(d) for asynchronous regimes, Fig. 7(e) near the boundary, and Fig. 7(f) for the synchronous regime, with the two peaks consistent with phase-locked spiking. These figures illustrate the impact the coupling weights have on the relative spiking times.

Applying the averaging procedure from Sec. III C to two coupled phase oscillators (4)–(7), we obtain the following two-dimensional system:
\kappa_1' = -\kappa_1 + a\, \hat{W}_1(\kappa_1, \kappa_2), \quad \kappa_2' = -\kappa_2 + b\, \hat{W}_2(\kappa_1, \kappa_2),
(19)
where \hat{W}_1(\kappa_1, \kappa_2) = -\langle \sin(\phi_1(t) - \phi_2(t)) \rangle and \hat{W}_2(\kappa_1, \kappa_2) = -\langle \sin(\phi_2(t) - \phi_1(t) + \beta) \rangle are the corresponding temporal averages of the adaptation functions. We emphasize that the explicit dependence of the functions \hat{W}_{1,2} on the coupling weights stems from the dependence of the fast dynamics ϕ_1(t) and ϕ_2(t) on these weights.

The functions \hat{W}_{1,2}(\kappa_1, \kappa_2) are computed explicitly in Appendices A and B, which allows the properties of the reduced system (19) to be studied in detail. While the technical details of the reduction are described in Appendices A and B, we note here that this procedure is performed by two different methods: adiabatic elimination for those parameter values κ_1, κ_2 for which the subsystem (4) and (5) is synchronized (phase-locked) and averaging along a periodic orbit when (4) and (5) are not synchronized.
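As a cross-check of the analytical reduction, the averages \hat{W}_{1,2}(κ_1, κ_2) can also be obtained by brute force: freeze (κ_1, κ_2), integrate the phase equations (4) and (5), and time-average the sine terms. A minimal sketch is given below; the individual frequencies ω_1, ω_2 (only ω = 0.1 is specified), the transient and averaging windows, and the step size are our own choices, while a = 0.5, b = 0.07, α, and β follow Fig. 8(c).

import numpy as np

def W_hat(k1, k2, omega1=0.6, omega2=0.5, alpha=0.25 * np.pi, beta=0.5 * np.pi,
          dt=0.01, T_transient=200.0, T_avg=2000.0):
    """Numerically estimate W^_1 = -<sin(phi1 - phi2)> and W^_2 = -<sin(phi2 - phi1 + beta)>
    for frozen coupling weights (k1, k2)."""
    phi1 = phi2 = 0.0
    for _ in range(int(T_transient / dt)):
        d1 = omega1 - k1 * np.sin(phi1 - phi2 + alpha)
        d2 = omega2 - k2 * np.sin(phi2 - phi1 + alpha)
        phi1, phi2 = phi1 + dt * d1, phi2 + dt * d2
    s1 = s2 = 0.0
    n = int(T_avg / dt)
    for _ in range(n):
        d1 = omega1 - k1 * np.sin(phi1 - phi2 + alpha)
        d2 = omega2 - k2 * np.sin(phi2 - phi1 + alpha)
        phi1, phi2 = phi1 + dt * d1, phi2 + dt * d2
        s1 += np.sin(phi1 - phi2)
        s2 += np.sin(phi2 - phi1 + beta)
    return -s1 / n, -s2 / n

def reduced_rhs(k1, k2, a=0.5, b=0.07):
    """Right-hand side of the reduced system (19) evaluated numerically."""
    w1, w2 = W_hat(k1, k2)
    return -k1 + a * w1, -k2 + b * w2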

The bifurcation diagram for system (19) in Fig. 8 shows the regions in which recurrent synchronization is observed (yellow) in the (a, b)-parameter plane for fixed parameters ω = ω_1 − ω_2 and α. In parameter region A, recurrent synchronization is the only stable regime [Fig. 8(b)], while in region B, recurrent synchronization coexists with an effectively uncoupled state, i.e., the equilibrium at (κ_1, κ_2) = (0, 0) of the reduced system [Fig. 8(d)].

FIG. 8.

Recurrent synchronization in the phase oscillator model (4)–(7). In (a), the bifurcation diagram in the (a, b)-parameter plane is shown. A and B denote the regions with stable recurrent synchronization, i.e., a stable limit cycle traversing the synchronous and asynchronous domains in the (κ_1, κ_2) plane. Region B corresponds to the coexistence of the bursting cycle and the stable equilibrium at κ_1 = κ_2 = 0. Panels (b) and (d) show the trajectories of the reduced system (19) in the (κ_1, κ_2)-plane that correspond to the time series in (c) and in (e) and (f), respectively. The phase portrait in (d) is complemented by an additional trajectory (blue) separating the basins of attraction of the recurrent synchronization limit cycle and the stable equilibrium at (κ_1, κ_2) = (0, 0). The red line denotes the boundary between the asynchronous (gray blue) and the phase-locked (light pink) regions. Panels (c) (a = 0.5, b = 0.07), (e), and (f) (a = 0.385, b = 0.125) show the time evolution of the order parameter R for recurrent synchronization (green) and the trajectory converging to the equilibrium at (0, 0) (dark red). Parameters: ϵ = 0.0001, ω = 0.1, α = 0.25π, β = π/2.


Figures 8(c) and 8(e) show the time evolution of the order parameter (8) in the regime of recurrent synchronization. Interestingly, we observe the repeated occurrence of two synchronous regimes, an in-phase and an anti-phase locking, interrupted by the running-phase regime. In Fig. 8(f), yet another stable state (red line) is shown, corresponding to the desynchronization of the two oscillators leading to an effective decoupling. The corresponding states of the reduced model are displayed in the phase portraits in Figs. 8(b) and 8(d). These phase portraits of (19) show limit cycles passing through regions of synchronous and asynchronous relative phase dynamics, which explains the dynamics of the order parameter in Figs. 8(c) and 8(e).

Recurrent synchronization can also be found for other values of the asymmetry parameter β (see Fig. 12) and of the time-scale separation parameter ϵ (see Fig. 11). In particular, the symmetric case β = 0 reveals no recurrent synchronization. It seems that a certain amount of asymmetry in the adaptation functions is necessary to obtain stable recurrent synchronization, as the regions of recurrent synchronization lie off the lines |a| = |b| in Fig. 8(a).

FIG. 9.

Bifurcation diagram for the reduced system (19). Bifurcation diagram in the parameter plane (a, b) for ω = 0.1 and α = 0.25π. The regions G1 and G2 (blue) mark areas where the equilibrium (κ_1, κ_2) = (0, 0) is stable; C (red): a stable equilibrium on the critical manifold; A (orange): a stable recurrent synchronization limit cycle; E (green): a stable limit cycle on the critical manifold. The other regions mark areas of bistability of the states described above. Also shown are analytically obtained bifurcation lines for the whole system (red) and for the (0, 0) equilibrium of the reduced system (blue), with SN denoting a saddle-node bifurcation, PF a pitchfork bifurcation, and H a Hopf bifurcation (dashed lines). The boundaries for regions A, B, E, and F are obtained numerically. The diagrams in the two columns on the right and the row at the bottom show exemplary trajectories of the reduced system in the (κ_1, κ_2) plane for the regions denoted in the bifurcation diagram. Black lines mark stable trajectories, blue lines mark unstable trajectories, and green lines mark the limit cycle. The (0, 0) equilibrium can either be stable (red and filled) or unstable (red and void). The blue areas denote the basins of attraction of the (0, 0) equilibrium in the case of bistability. Green dots indicate stable fixed points on the critical manifold, and the red line marks the boundary of the critical manifold.

FIG. 10.

Period of the recurrent synchronization limit cycle. The period T of the recurrent synchronization limit cycle (color-coded) in the parameter-plane ( a , b ) for ω = 0.1, α = 0.25 π, β = 0.5 π, and ϵ = 0.0001. The light areas correspond to parameter combinations where no recurrent synchronization can be observed.

FIG. 11.

Influence of time scale separation. Periods of recurrent synchronization for ϵ = 0.0001 (panel a), ϵ = 0.001 (panel b), ϵ = 0.005 (panel c), and ϵ = 0.01 (panel d). Remaining parameters: a = 0.5, b = 0.1, ω = 0.1, α = 0.25 π, and β = 0.5 π.

FIG. 12.

Bursting ratio as a function of the asymmetry β. Ratio of stable recurrent synchronization to all states in the system of two phase oscillators (4)–(7) in the (a, b)-grid shown in Fig. 9 for different values of β. Other parameters: ϵ = 0.0001, ω = 0.1, and α = 0.25π.


The bifurcation analysis reveals that the emergence of recurrent synchronization is the result of a fold bifurcation of limit cycles of the reduced system, as can be seen in Fig. 9, which shows a much larger area of the (a, b)-parameter plane. Here, all regions with distinct stable states are displayed together with exemplary trajectories of the reduced system. Furthermore, analytical bifurcation lines are drawn in red and blue, with the dashed lines corresponding to Hopf bifurcations and the solid lines to saddle-node or pitchfork bifurcations. The derivation of these lines can be found in Appendix C. The diagram describing the border between regions G2 and B shows the trajectories after the annihilation of the two limit cycles, with only the equilibrium (κ_1, κ_2) = (0, 0) remaining as a stable state. Recurrent synchronization can also appear when the synchronous relative phase dynamics becomes unstable. This can happen either when a "slow" limit cycle, i.e., a limit cycle with changes in the relative phase dynamics on the order of ϵ, crosses the boundary between the synchronous and asynchronous regimes (transitions from region E to A and from F to B, respectively) or via the destabilization of a stable fixed point at the boundary between the two regimes.

To check whether the recurrent synchronization limit cycle contains additional information about the bifurcations present in the system, its periods are shown in Fig. 10. Here, the periods are color-coded in the parameter plane (a, b), with the light areas signifying no recurrent synchronization. We can identify higher periods for lower values of a and b and vice versa. Since a and b scale the influence of the phase dynamics on the coupling weights, they determine the speed with which trajectories move in the reduced phase space of the coupling weights (κ_1, κ_2). This yields an inverse relationship between the values of a and b and the period of the limit cycle.

In this work, we have described recurrent synchronization for two populations of adaptively coupled oscillators. The phenomenon is shown to be robust to noise, variation of parameters, initial states, and heterogeneity of oscillators.

To shed light on the emergence of recurrent synchronization, we have introduced a mean-field model approach. A detailed bifurcation analysis of a simplified phenomenological phase oscillator model provides remarkable insights into the mechanisms leading to recurrent synchronization. The separation between the time scales of the adaptation and the fast individual dynamics allows for the model reduction in the case of two phase oscillators and two Hodgkin–Huxley neurons. The proposed reduction approach is applicable to systems beyond phase oscillators and, in this regard, it methodologically generalizes recently used methods from Ratas et al.60 and Franović et al.61 Our study is further complemented by an analysis of a mean-field model consisting of coupled Hodgkin–Huxley neurons equipped with spike-timing-dependent plasticity. For this more complicated system, our numerical averaging approach makes the study of the effects of plasticity on neuronal dynamics much more feasible.

Heterogeneity in the adaptation rules has long been known to exist, e.g., in neural systems,62 but its dynamical effects are only rarely studied, and analytical methods are widely unexplored.22,23

Our findings contribute to the important question of how heterogeneity may change the dynamics of complex networks. We have found that heterogeneity in the adaptivity is able to significantly change the macroscopic dynamics of a dynamical network. Moreover, our analysis contributes to the identification of the onset of critical phenomena and extreme events. In the transition from synchrony to asynchrony, we have observed gradually growing oscillations of the order parameter. This observation may be used as a characterizing feature for the onset of the bursting period and, thus, may indicate substantial changes in the properties of the network.

Our results may contribute to the understanding of pattern generators, their disease-related impairment, and, specifically, the origin of Parkinsonian resting tremor. So far, the mechanism underlying the generation of Parkinsonian resting tremor remains an open question.63–65 Neuronal discharges in the tremor frequency range in different parts of the basal ganglia as well as the thalamus were found to be coherent and transiently locked to the peripheral tremor.66 Furthermore, causal data analysis suggests that this activity drives the muscular tremor, as opposed to being merely driven by the sensory feedback from the periphery.67 In functional magnetic resonance imaging (fMRI) studies, it was shown that the basal ganglia become transiently active when tremor episodes emerge, whereas activity in the cerebello-thalamo-cortical circuit is tightly related to the tremor amplitude.65 According to the "dimmer-switch" hypothesis, the basal ganglia activate the tremor (like a light switch) and the cerebello-thalamo-cortical circuit modulates the tremor amplitude (like a light dimmer).65,68 However, it remains elusive what causes the spontaneous switching between epochs with alternating (anti-phase) tremor and synchronous (in-phase) tremor, connected by epochs without a stable phase relationship.63,69 Based on our results, we suggest that asymmetric adaptivity involving two populations (Fig. 2) responsible for two antagonistic muscles might cause the clinically observed recurrent epochs of synchrony with different phase differences, e.g., in-phase and anti-phase tremor epochs. In that sense, asymmetric adaptivity might serve as a complex dynamical switching mechanism. In contrast, two anatomically intermingled populations (Fig. 1) activating the same muscle would display recurrent epochs of synchrony, interconnected by bursts of mass synchrony, potentially translating to pronounced tremor epochs, intersected by epochs of, e.g., less coherent twitches, lacking distinct tremor bursts.

Bursting, as it has been discussed in the literature, e.g., for individual neurons, is a phenomenon consisting of episodes of activity and inactivity.34 On the other hand, recurrent synchronization, as it has been introduced in this work, is a phenomenon arising from the interaction of oscillators in a complex dynamical network. It is characterized by bursting on the macroscopic level rather than on the level of individual microscopic dynamical units.

While this work is exemplified by using the Hodgkin–Huxley model to show the generality of our findings, the theory developed is not restricted to the field of neuroscience. Moreover, we note that the model used in this work represents a phenomenological model for a complex dynamical system inspired by neuroscience rather than a realistic network model. As we show, recurrent synchronization is a result of the interplay of a coupling structure with adaptation on a slower time scale. The "hidden" adaptation variables play the role of slowly changing control parameters triggering the recurrent synchronization events. The interplay of multiple timescales and complex network structure that induces recurrent synchronization is inherent in many dynamical networks modeling, e.g., social interactions or climate tipping dynamics.70,71 Bursting phenomena are also known dynamical states found outside the field of neuroscience.72

This work was supported by the German Research Foundation DFG, Project Nos. 411803875 and 440145547. P.A.T. gratefully acknowledges the funding support of this study by the Vaughn Bryson Research Fund and the John A. Blume Foundation.

The authors have no conflicts to disclose.

Max Thiele: Formal analysis (equal); Investigation (equal); Software (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Rico Berner: Conceptualization (equal); Investigation (equal); Methodology (equal); Supervision (equal); Writing – review & editing (equal). Peter A. Tass: Investigation (equal); Writing – review & editing (equal). Eckehard Schöll: Investigation (equal); Project administration (equal); Supervision (equal); Writing – review & editing (equal). Serhiy Yanchuk: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Supervision (equal); Writing – review & editing (equal).

The data that support the findings of this study are openly available on GitHub, Ref. 73.

In Sec. III C of the main text, we introduced the concept of adiabatic elimination and showed the resulting flow for system (6)–(9) of the main text. In the following, we provide a derivation of this flow for the adaptation functions $W_1 = -a\sin\theta - \kappa_1$ and $W_2 = -b\cos\theta - \kappa_2$ (so that $\dot{\kappa}_i = \epsilon W_i$). We start by reducing the model (6)–(9) of the main text, introducing the phase difference $\theta = \phi_1 - \phi_2$ and the frequency difference $\omega = \omega_1 - \omega_2$,
\dot{\theta} = \omega - \kappa_1 \sin(\theta + \alpha) - \kappa_2 \sin(\theta - \alpha),
(A1a)
\dot{\kappa}_1 = -\epsilon\,(a \sin\theta + \kappa_1),
(A1b)
\dot{\kappa}_2 = -\epsilon\,(b \cos\theta + \kappa_2).
(A1c)
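For readers who wish to reproduce the reduced dynamics, system (A1) can be integrated directly with standard solvers. The following Python sketch is a minimal illustration; the parameter values and the initial condition are our own assumptions and not those used in the figures (see Ref. 73 for the published code).

# Minimal sketch (not the published code of Ref. 73): integrate the reduced
# slow-fast system (A1) with SciPy. All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

omega, alpha, eps = 1.0, 0.3, 0.01   # frequency mismatch, phase lag, time-scale separation
a, b = 2.0, 2.0                      # adaptation amplitudes

def rhs(t, x):
    theta, k1, k2 = x
    dtheta = omega - k1 * np.sin(theta + alpha) - k2 * np.sin(theta - alpha)  # (A1a)
    dk1 = -eps * (a * np.sin(theta) + k1)                                     # (A1b)
    dk2 = -eps * (b * np.cos(theta) + k2)                                     # (A1c)
    return [dtheta, dk1, dk2]

sol = solve_ivp(rhs, (0.0, 5000.0), [0.0, 0.5, -0.5], max_step=0.1)
# sol.y[0] is the phase difference theta(t); sol.y[1:] are the slowly adapting weights.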
To determine the critical manifold, we set the (fast) layer equation, Eq. (A1a), to zero,
0 = \omega - \left(\kappa_1 \sin(\theta + \alpha) + \kappa_2 \sin(\theta - \alpha)\right) = \omega - c_1 \sin\theta - c_2 \cos\theta,
with $c_1 = (\kappa_1 + \kappa_2)\cos\alpha$ and $c_2 = (\kappa_1 - \kappa_2)\sin\alpha$. This yields the condition that describes the critical manifold,
0 = \omega - A(\kappa_1, \kappa_2)\,\sin\!\left(\theta + \gamma(\kappa_1, \kappa_2)\right),
(A2)
with $A = \sqrt{c_1^2 + c_2^2}$ and $\gamma = \arg(c_1 + i c_2)$. For $|\omega| < A$, this yields two possible solutions: $\theta_1 = \arcsin\frac{\omega}{A} - \gamma$ and $\theta_2 = \pi - \arcsin\frac{\omega}{A} - \gamma$, with $\theta_1$ being stable and $\theta_2$ being unstable. Here, the stability is determined with respect to the fast dynamics (A1a). Hence, the slow flow on the stable part of the critical manifold reads
\dot{\kappa}_1 = -\epsilon\left(a \sin\!\left(\arcsin\frac{\omega}{A} - \gamma\right) + \kappa_1\right), \qquad \dot{\kappa}_2 = -\epsilon\left(b \cos\!\left(\arcsin\frac{\omega}{A} - \gamma\right) + \kappa_2\right).
(A3)
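As a cross-check of the adiabatic reduction, the stable branch $\theta_1$ and the slow flow (A3) can be evaluated directly. The helper below is a hypothetical sketch with illustrative default parameters; it assumes $|\omega| < A$.

# Sketch (illustrative helper, defaults are our own choice): stable branch theta_1
# of the critical manifold and the slow flow (A3), valid only for |omega| < A.
import numpy as np

def slow_flow(k1, k2, omega=1.0, alpha=0.3, a=2.0, b=2.0, eps=0.01):
    c1 = (k1 + k2) * np.cos(alpha)
    c2 = (k1 - k2) * np.sin(alpha)
    A = np.hypot(c1, c2)                    # A = sqrt(c1**2 + c2**2)
    gamma = np.angle(c1 + 1j * c2)          # gamma = arg(c1 + i*c2)
    if abs(omega) >= A:
        raise ValueError("no equilibrium of the layer equation: |omega| >= A")
    theta1 = np.arcsin(omega / A) - gamma   # stable branch of the critical manifold
    dk1 = -eps * (a * np.sin(theta1) + k1)  # slow flow (A3)
    dk2 = -eps * (b * np.cos(theta1) + k2)
    return theta1, dk1, dk2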
For the case $|\omega| > A$, there is no equilibrium solution to Eq. (A2). Instead, with fixed coupling weights, the layer equation (A1a) exhibits oscillations with frequency $\Omega = \sqrt{\omega^2 - A^2}$, see Ref. 61. The averaged flow of the coupling weights is obtained by averaging the adaptation functions over the period $T$ of the fast oscillations,
\langle\dot{\kappa}_i\rangle = \frac{\epsilon}{T}\int_0^T W_i(\theta(t))\,\mathrm{d}t.
Here, $\kappa_1$ and $\kappa_2$ are assumed to be constant, and $\theta(t)$ is the solution to (A1a), which depends implicitly on $\kappa_1$ and $\kappa_2$. For the integral, we can use the layer equation (A1a) to change the integration variable from $t$ to $\theta$ via $\mathrm{d}t = \mathrm{d}\theta/\dot{\theta}$. Inserting $W_1 = -a\sin\theta - \kappa_1$ and $W_2 = -b\cos\theta - \kappa_2$, this yields
\langle\dot{\kappa}_1\rangle = -\epsilon\left(\kappa_1 + \frac{\Omega a}{2\pi}\int_0^{2\pi}\frac{\sin\theta\,\mathrm{d}\theta}{\omega - c_1\sin\theta - c_2\cos\theta}\right),
(B1a)
\langle\dot{\kappa}_2\rangle = -\epsilon\left(\kappa_2 + \frac{\Omega b}{2\pi}\int_0^{2\pi}\frac{\cos\theta\,\mathrm{d}\theta}{\omega - c_1\sin\theta - c_2\cos\theta}\right).
(B1b)
The integrals can be solved analytically, which we show for the example of Eq. (B1a). We rewrite the integral term in the following form:
\int_0^{2\pi}\frac{\left(e^{i\theta} - e^{-i\theta}\right)\mathrm{d}\theta}{2i\omega - c_1\left(e^{i\theta} - e^{-i\theta}\right) - i c_2\left(e^{i\theta} + e^{-i\theta}\right)} = -\oint_{S^1}\frac{z\,\mathrm{d}z}{2\omega z + i c_1(z^2 - 1) - c_2(z^2 + 1)} + \oint_{S^1}\frac{y\,\mathrm{d}y}{2\omega y + i c_1(1 - y^2) - c_2(1 + y^2)},
(B2)
with $z = e^{i\theta}$ and $y = e^{-i\theta}$. Both integrals can be solved using the residue theorem, which we demonstrate for the integral containing $z$. The poles of the integrand are
z_\pm = \frac{1}{i c_1 - c_2}\left(-\omega \pm \sqrt{\omega^2 - c_1^2 - c_2^2}\right).
Since only poles lying within the unit circle contribute, we compute their absolute values; recall that $A^2 = c_1^2 + c_2^2$,
|z_\pm| = \frac{1}{A}\left|-\omega \pm \sqrt{\omega^2 - A^2}\right|.
For $\omega > 0$, the pole $z_+$ is located within the unit circle, $|z_+| < 1$. We can, therefore, rewrite the first integral in Eq. (B2) in the following way:
-\oint_{S^1}\frac{z\,\mathrm{d}z}{2\omega z + i c_1(z^2 - 1) - c_2(z^2 + 1)} = -\oint_{S^1}\frac{z\,\mathrm{d}z}{(i c_1 - c_2)(z - z_+)(z - z_-)} = -\frac{2\pi i}{i c_1 - c_2}\,\frac{z_+}{z_+ - z_-} = \frac{\pi i}{i c_1 - c_2}\,\frac{\omega - \sqrt{\omega^2 - A^2}}{\sqrt{\omega^2 - A^2}}.
(B3)
For the second integral in Eq. (B2), the same procedure yields
\oint_{S^1}\frac{y\,\mathrm{d}y}{2\omega y + i c_1(1 - y^2) - c_2(1 + y^2)} = \frac{\pi i}{i c_1 + c_2}\,\frac{\omega - \sqrt{\omega^2 - A^2}}{\sqrt{\omega^2 - A^2}}.
(B4)
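The combination of the two residue contributions used in the next step can be verified symbolically; the short check below is an illustrative aid and not part of the published code (Ref. 73).

# Illustrative symbolic check (not part of Ref. 73): the residue contributions
# of (B3) and (B4) add up to the real prefactor 2*pi*c1/A**2 used below.
import sympy as sp

c1, c2 = sp.symbols("c1 c2", real=True)
lhs = sp.pi * sp.I / (sp.I * c1 - c2) + sp.pi * sp.I / (sp.I * c1 + c2)
rhs = 2 * sp.pi * c1 / (c1**2 + c2**2)
assert sp.simplify(lhs - rhs) == 0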
We can now write Eq. (B2) as
\frac{\pi i}{i c_1 - c_2}\,\frac{\omega - \sqrt{\omega^2 - A^2}}{\sqrt{\omega^2 - A^2}} + \frac{\pi i}{i c_1 + c_2}\,\frac{\omega - \sqrt{\omega^2 - A^2}}{\sqrt{\omega^2 - A^2}} = \frac{2\pi c_1}{A^2}\,\frac{\omega - \sqrt{\omega^2 - A^2}}{\sqrt{\omega^2 - A^2}},
and using $T = \frac{2\pi}{\sqrt{\omega^2 - A^2}}$, the averaged flow for $\kappa_1$ reads
\langle\dot{\kappa}_1\rangle = -\epsilon\left(\frac{c_1 a}{A^2}\left(\omega - \sqrt{\omega^2 - A^2}\right) + \kappa_1\right).
(B5)
The same procedure can be used for Eq. (B1b) and yields
\langle\dot{\kappa}_2\rangle = -\epsilon\left(\frac{c_2 b}{A^2}\left(\omega - \sqrt{\omega^2 - A^2}\right) + \kappa_2\right).
(B6)
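The closed forms (B5) and (B6) can also be checked against a direct numerical quadrature of (B1). The following sketch does this for arbitrarily chosen frozen weights and illustrative parameters with $|\omega| > A$; it is not part of the published code (Ref. 73).

# Illustrative numerical check (not part of Ref. 73): compare the averaged flow
# (B5)-(B6) with a direct quadrature of (B1) for frozen weights with |omega| > A.
import numpy as np
from scipy.integrate import quad

omega, alpha, eps, a, b = 2.0, 0.3, 0.01, 1.0, 1.0
k1, k2 = 0.4, -0.2                                  # arbitrary frozen weights

c1 = (k1 + k2) * np.cos(alpha)
c2 = (k1 - k2) * np.sin(alpha)
A2 = c1**2 + c2**2
Omega = np.sqrt(omega**2 - A2)                      # frequency of the fast drift

layer = lambda th: omega - c1 * np.sin(th) - c2 * np.cos(th)   # right-hand side of (A1a)
Is, _ = quad(lambda th: np.sin(th) / layer(th), 0.0, 2.0 * np.pi)
Ic, _ = quad(lambda th: np.cos(th) / layer(th), 0.0, 2.0 * np.pi)

dk1_quad = -eps * (k1 + Omega * a / (2.0 * np.pi) * Is)        # (B1a)
dk2_quad = -eps * (k2 + Omega * b / (2.0 * np.pi) * Ic)        # (B1b)

g = omega - np.sqrt(omega**2 - A2)
dk1_avg = -eps * (c1 * a / A2 * g + k1)                        # (B5)
dk2_avg = -eps * (c2 * b / A2 * g + k2)                        # (B6)

print(dk1_quad, dk1_avg)   # agree up to quadrature accuracy
print(dk2_quad, dk2_avg)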
The bifurcation lines (red and blue) shown in the bifurcation diagram in Fig. 9 result from the stability analysis of the fixed points of the original system (red lines) and of the $(\kappa_1, \kappa_2) = (0, 0)$ fixed point of the averaged system (10) (blue lines). To compute the red bifurcation lines, we first compute the Jacobian $J$ at the equilibrium of the original system, Eq. (A1),
J = \begin{pmatrix} -\kappa_1\cos(\alpha+\theta) - \kappa_2\cos(\alpha-\theta) & -\sin(\alpha+\theta) & \sin(\alpha-\theta) \\ -a\epsilon\cos\theta & -\epsilon & 0 \\ b\epsilon\sin\theta & 0 & -\epsilon \end{pmatrix},
and the corresponding characteristic equation
0 = \left(-\kappa_1\cos(\alpha+\theta) - \kappa_2\cos(\alpha-\theta) - \lambda\right)(\epsilon+\lambda)^2 + \sin(\alpha-\theta)\,b\epsilon\sin\theta\,(\epsilon+\lambda) + \sin(\alpha+\theta)\,a\epsilon\cos\theta\,(\epsilon+\lambda).
(C1)
For the bifurcation lines, the real part of an eigenvalue has to be zero, i.e., λ = i ξ,
0 = \left(-\kappa_1\cos(\alpha+\theta) - \kappa_2\cos(\alpha-\theta) - i\xi\right)(\epsilon+i\xi)^2 + \sin(\alpha-\theta)\,b\epsilon\sin\theta\,(\epsilon+i\xi) + \sin(\alpha+\theta)\,a\epsilon\cos\theta\,(\epsilon+i\xi).
(C2)
Setting ξ = 0 allows us to determine the bifurcation lines for the real eigenvalues:
0 = a\sin\theta\cos(\alpha+\theta) + b\cos\theta\cos(\alpha-\theta) + b\sin\theta\sin(\alpha-\theta) + a\cos\theta\sin(\alpha+\theta), \qquad 0 = a\sin(2\theta+\alpha) + b\cos(2\theta-\alpha).
For the case $\cos(2\theta - \alpha) \neq 0$ and $a \neq 0$, this results in the following system:
a = \frac{2\omega + b\cos\theta\sin(\theta-\alpha)}{\sin\theta\sin(\theta+\alpha)}, \qquad b = \frac{2\omega\,\sin(2\theta+\alpha)}{\sin\theta\sin(\theta+\alpha)\cos(2\theta-\alpha) - \cos\theta\sin(\theta-\alpha)\sin(2\theta+\alpha)}.
Here, we have used the fixed point conditions from Eq. (A1) to substitute $\kappa_1$, $\kappa_2$ and subsequently $a$. For the case of $\cos(2\theta-\alpha) = 0$ and $\alpha \neq \pi/4$, this yields two points,
a = 0, \qquad b_\pm = \frac{4\omega}{\sin\alpha \pm 1}.
For $\alpha = \pi/4$, $\sin(2\theta+\alpha)$ equals $\cos(2\theta-\alpha)$, which yields two lines in the case of $a \neq b$,
b_\pm = \frac{4\omega - a\left(\frac{1}{\sqrt{2}} - 1\right)}{\frac{1}{\sqrt{2}} \pm 1}.
For $a = b$, there exists one solution with $a = 2\sqrt{2}\,\omega$ and $b = 2\sqrt{2}\,\omega$. This analysis yields the solid red lines in the bifurcation diagram of Fig. 9. For the case $\xi \neq 0$, Eq. (C2) is complex valued. The real part of the equation is
0 = \left(a\sin\theta\cos(\alpha+\theta) + b\cos\theta\cos(\alpha-\theta)\right)\left(\epsilon^2 - \xi^2\right) + 2\epsilon\xi^2 + \epsilon^2 b\sin\theta\sin(\alpha-\theta) + \epsilon^2 a\cos\theta\sin(\alpha+\theta).
(C3)
We can determine $\xi^2$ by using the equation for the imaginary part,
\xi^2 = -\epsilon\left(a\sin\theta\cos(\alpha+\theta) + b\cos\theta\cos(\alpha-\theta) + a\sin(2\theta+\alpha) + b\cos(2\theta-\alpha)\right) + \epsilon^2.
(C4)
Plugging the fixed point condition for $\theta$ [Eq. (A1)] and Eq. (C4) into Eq. (C3) yields a nonlinear equation for $b$, which can be solved numerically for every $\theta$. The results are shown as dashed red lines in the bifurcation diagram in Fig. 9. The same procedure was applied to the $(\kappa_1, \kappa_2) = (0, 0)$ fixed point in the averaging regime [Eqs. (B5) and (B6)], resulting in the blue lines. For the case of $\xi = 0$ (solid blue lines in Fig. 9), this yields the following relation between $a$ and $b$:
b = \frac{a\cos\alpha + 2\omega}{\frac{a}{\omega}\cos\alpha\sin\alpha + \sin\alpha}.
For the case of $\xi \neq 0$, the relation between $a$ and $b$ can also be given explicitly as long as the condition $\frac{ab\sin\alpha\cos\alpha}{2\omega^2} - \frac{a\cos\alpha + b\sin\alpha}{2\omega} + 1 \neq 0$ holds. This results in
b = \frac{a\cos\alpha}{\sin\alpha} + \frac{4\omega}{\sin\alpha},
yielding the dashed blue lines in the bifurcation diagram of Fig. 9.
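For completeness, the numerical procedure for the dashed red lines described above can be sketched as follows; the parameter values, the bracketing interval, and the $\theta$ range are illustrative assumptions.

# Illustrative sketch (not part of Ref. 73): points on the dashed red lines of Fig. 9,
# i.e., purely imaginary eigenvalues of (A1). For each theta, a is eliminated via the
# equilibrium of (A1a), xi^2 is taken from (C4), and the real part (C3) is solved for b.
import numpy as np
from scipy.optimize import brentq

omega, alpha, eps = 1.0, 0.3, 0.01          # illustrative parameters

def a_from_b(b, th):
    # equilibrium of (A1a) with kappa1 = -a*sin(theta), kappa2 = -b*cos(theta)
    return -(omega + b * np.cos(th) * np.sin(th - alpha)) / (np.sin(th) * np.sin(th + alpha))

def residual(b, th):
    a = a_from_b(b, th)
    J11 = a * np.sin(th) * np.cos(alpha + th) + b * np.cos(th) * np.cos(alpha - th)
    K = a * np.sin(2.0 * th + alpha) + b * np.cos(2.0 * th - alpha)
    xi2 = -eps * (J11 + K) + eps**2                                     # (C4)
    return (J11 * (eps**2 - xi2) + 2.0 * eps * xi2
            + eps**2 * (b * np.sin(th) * np.sin(alpha - th)
                        + a * np.cos(th) * np.sin(alpha + th)))          # (C3)

curve = []
for th in np.linspace(0.1, 2.4, 200):
    try:
        b_star = brentq(residual, -50.0, 50.0, args=(th,))
        curve.append((a_from_b(b_star, th), b_star))
    except ValueError:
        pass   # no sign change in the bracket: no bifurcation point for this theta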
1. T. Gross, C. J. D. D’Lima, and B. Blasius, “Epidemic dynamics on an adaptive network,” Phys. Rev. Lett. 96, 208701 (2006).
2. T. Gross and B. Blasius, “Adaptive coevolutionary networks: A review,” J. R. Soc. Interface 5, 259–271 (2008).
3. S. Jain and S. Krishna, “A model for the emergence of cooperation, interdependence, and structure in evolving networks,” Proc. Natl. Acad. Sci. U.S.A. 98, 543–547 (2001).
4. R. Gutiérrez, A. Amann, S. Assenza, J. Gómez-Gardeñes, V. Latora, and S. Boccaletti, “Emerging meso- and macroscales from synchronization of adaptive networks,” Phys. Rev. Lett. 107, 234103 (2011).
5. H. Sayama, I. Pestov, J. Schmidt, B. J. Bush, C. Wong, J. Yamanoi, and T. Gross, “Modeling complex systems with adaptive networks,” Comput. Math. Appl. 65, 1645–1664 (2013).
6. V. Buskens and A. Van de Rijt, “Dynamics of networks if everyone strives for structural holes,” Am. J. Sociol. 14, 371 (2008).
7. T. V. P. Bliss and G. L. Collingridge, “A synaptic model of memory: Long-term potentiation in the hippocampus,” Nature 361, 31–39 (1993).
8. H. Markram, J. Lübke, M. Frotscher, and B. Sakmann, “Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs,” Science 275, 213–215 (1997).
9. L. F. Abbott and S. Nelson, “Synaptic plasticity: Taming the beast,” Nat. Neurosci. 3, 1178–1183 (2000).
10. G. Q. Bi and M. M. Poo, “Synaptic modification by correlated activity: Hebb’s postulate revisited,” Annu. Rev. Neurosci. 24, 139–166 (2001).
11. N. Caporale and Y. Dan, “Spike timing-dependent plasticity: A Hebbian learning rule,” Annu. Rev. Neurosci. 31, 25–46 (2008).
12. H. Câteau, K. Kitano, and T. Fukai, “Interplay between a phase response curve and spike-timing-dependent plasticity leading to wireless clustering,” Phys. Rev. E 77, 051909 (2008).
13. C. Clopath, L. Büsing, E. Vasilaki, and W. Gerstner, “Connectivity reflects coding: A model of voltage-based STDP with homeostasis,” Nat. Neurosci. 13, 344–352 (2010).
14. T. H. Brown, P. F. Chapman, E. W. Kairiss, and C. L. Keenan, “Long-term synaptic potentiation,” Science 242, 724–728 (1988).
15. P. A. Tass and M. Majtanik, “Long-term anti-kindling effects of desynchronizing brain stimulation: A theoretical study,” Biol. Cybern. 94, 58–66 (2006).
16. M. Butz and A. van Ooyen, “A simple rule for dendritic spine and axonal bouton formation can account for cortical reorganization after focal retinal lesions,” PLoS Comput. Biol. 9, e1003259 (2013).
17. A. Gavalda, J. Duch, and J. Gómez-Gardeñes, “Reciprocal interactions out of congestion-free adaptive networks,” Phys. Rev. E 85, 026112 (2012).
18. F. Müller-Hansen, M. Schlüter, M. Mäs, J. F. Donges, J. J. Kolb, K. Thonicke, and J. Heitzig, “Towards representing human behavior and decision making in earth system models—An overview of techniques and approaches,” Earth Syst. Dyn. 8, 977 (2017).
19. S. Nuwagaba, F. Zhang, and C. Hui, “Robustness of rigid and adaptive networks to species loss,” PLoS One 12, e0189086 (2017).
20. R. C. Froemke, M. M. Poo, and Y. Dan, “Spike-timing-dependent synaptic plasticity depends on dendritic location,” Nature 434, 221 (2005).
21. P. J. Sjöström and M. Häusser, “A cooperative switch determines the sign of synaptic plasticity in distal dendrites of neocortical pyramidal neurons,” Neuron 51, 227 (2006).
22. D. V. Kasatkin and V. I. Nekorkin, “Transient circulant clusters in two-population network of Kuramoto oscillators with different rules of coupling adaptation,” Chaos 31, 073112 (2021).
23. R. Berner and S. Yanchuk, “Synchronization in networks with heterogeneous adaptation rules and applications to distance-dependent synaptic plasticity,” Front. Appl. Math. Stat. 7, 714978 (2021).
24. I. V. Belykh and M. Hasler, “Mesoscale and clusters of synchrony in networks of bursting neurons,” Chaos 21, 016106 (2011).
25. P. A. Tass and O. V. Popovych, “Unlearning tinnitus-related cerebral synchrony with acoustic coordinated reset stimulation: Theoretical concept and modelling,” Biol. Cybern. 106, 27–36 (2012).
26. I. V. Belykh and A. Shilnikov, “When weak inhibition synchronizes strongly desynchronizing networks of bursting neurons,” Phys. Rev. Lett. 101, 078102 (2008).
27. W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski, Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (Cambridge University Press, 2014).
28. D. A. Blank and R. Stoop, “Collective bursting in populations of intrinsically nonbursting neurons,” Z. Naturforsch. A 54, 617 (1999).
29. R. Stoop, D. Blank, A. Kern, J. Van der Vyver, M. Christen, S. Lecchini, and C. Wagner, “Collective bursting in layer IV synchronization by small thalamic inputs and recurrent connections,” Brain Res. Cogn. Brain Res. 13, 293 (2002).
30. A. Panchuk, D. P. Rosin, P. Hövel, and E. Schöll, “Synchronization of coupled neural oscillators with heterogeneous delays,” Int. J. Bifurc. Chaos 23, 1330039 (2013).
31. E. Schöll, G. Hiller, P. Hövel, and M. A. Dahlem, “Time-delayed feedback in neurosystems,” Philos. Trans. R. Soc. A 367, 1079–1096 (2009).
32. R. Gast, H. Schmidt, and T. R. Knösche, “A mean-field description of bursting dynamics in spiking neural networks with short-term adaptation,” Neural Comput. 32, 1615 (2020).
33. M. Ciszak, F. Marino, A. Torcini, and S. Olmi, “Emergent excitability in populations of nonexcitable units,” Phys. Rev. E 102, 050201(R) (2020).
34. E. M. Izhikevich, “Neural excitability, spiking and bursting,” Int. J. Bifurc. Chaos 10, 1171–1266 (2000).
35. A. Byrne, J. Ross, R. Nicks, and S. Coombes, “Mean-field models for EEG/MEG: From oscillations to waves,” Brain Topogr. 35, 36–53 (2022).
36. I. Franović, S. Eydam, S. Yanchuk, and R. Berner, “Collective activity bursting in networks of excitable systems adaptively coupled to a pool of resources,” Front. Netw. Physiol. 2, 841829 (2022).
37. T. Nishikawa and A. E. Motter, “Symmetric states requiring system asymmetry,” Phys. Rev. Lett. 117, 114101 (2016).
38. F. Molnar, T. Nishikawa, and A. E. Motter, “Network experiment demonstrates converse symmetry breaking,” Nat. Phys. 16, 351–356 (2020).
39. F. Molnar, T. Nishikawa, and A. E. Motter, “Asymmetry underlies stability in power grids,” Nat. Commun. 12, 1457 (2021).
40. A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” J. Physiol. 117, 500–544 (1952).
41. D. Hansel, G. Mato, and C. Meunier, “Phase dynamics for weakly coupled Hodgkin-Huxley neurons,” Europhys. Lett. 23, 367–372 (1993).
42. S. Y. Kim and W. Lim, “Effect of interpopulation spike-timing-dependent plasticity on synchronized rhythms in neuronal networks with inhibitory and excitatory populations,” Cogn. Neurodyn. 14, 535 (2020).
43. V. Röhr, R. Berner, E. L. Lameu, O. V. Popovych, and S. Yanchuk, “Frequency cluster formation and slow oscillations in neural populations with plasticity,” PLoS ONE 14, e0225094 (2019).
44. A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences, 1st ed. (Cambridge University Press, Cambridge, 2001).
45. Y. Kuramoto, Chemical Oscillations, Waves and Turbulence (Springer-Verlag, Berlin, 1984).
46. D. V. Kasatkin, S. Yanchuk, E. Schöll, and V. I. Nekorkin, “Self-organized emergence of multi-layer structure and chimera states in dynamical networks with adaptive couplings,” Phys. Rev. E 96, 062211 (2017).
47. R. Berner, E. Schöll, and S. Yanchuk, “Multiclusters in networks of adaptively coupled phase oscillators,” SIAM J. Appl. Dyn. Syst. 18, 2227–2266 (2019).
48. M. Madadi Asl, A. Valizadeh, and P. A. Tass, “Dendritic and axonal propagation delays may shape neuronal networks with plastic synapses,” Front. Physiol. 9, 1849 (2018).
49. E. Marder and V. Thirumalai, “Cellular, synaptic and network effects of neuromodulation,” Neural Netw. 15, 479 (2002).
50. F. Zeldenrust, W. J. Wadman, and B. Englitz, “Neural coding with bursts—Current state and future perspectives,” Front. Comput. Neurosci. 12, 48 (2018).
51. M. Komarov and A. Pikovsky, “Effects of nonresonant interaction in ensembles of phase oscillators,” Phys. Rev. E 84, 016210 (2011).
52. M. Komarov and A. Pikovsky, “Dynamics of multifrequency oscillator communities,” Phys. Rev. Lett. 110, 134101 (2013).
53. D. S. Bassett, P. Zurn, and J. I. Gold, “On the nature and use of models in network neuroscience,” Nat. Rev. Neurosci. 19, 566–578 (2018).
54. J. C. Gonzalez-Avella, M. G. Cosenza, and M. S. Miguel, “Localized coherence in two interacting populations of social agents,” Physica A 399, 24–30 (2014).
55. V. Benndorf, I. Martinez-Martinez, and H. Normann, “Equilibrium selection with coupled populations in hawk-dove games: Theory and experiment in continuous time,” J. Econ. Theory 165, 472 (2016).
56. O. V. Popovych, S. Yanchuk, and P. A. Tass, “Self-organized noise resistance of oscillatory neural networks with spike timing-dependent plasticity,” Sci. Rep. 3, 2926 (2013).
57. C. Kuehn, Multiple Time Scale Dynamics (Springer, Cham, 2015).
58. M. Desroches, J. Guckenheimer, B. Krauskopf, C. Kuehn, H. M. Osinga, and M. Wechselberger, “Mixed-mode oscillations with multiple time scales,” SIAM Rev. 54, 211–288 (2012).
59. J. A. Sanders, F. Verhulst, and J. Murdock, Averaging Methods in Nonlinear Dynamical Systems (Springer, New York, 2007).
60. I. Ratas, K. Pyragas, and P. A. Tass, “Multistability in a star network of Kuramoto-type oscillators with synaptic plasticity,” Sci. Rep. 11, 9840 (2021).
61. I. Franović, S. Yanchuk, S. Eydam, I. Bačić, and M. Wolfrum, “Dynamics of a stochastic excitable system with slowly adapting feedback,” Chaos 30, 083109 (2020).
62. J. J. Letzkus, B. M. Kampa, and G. J. Stuart, “Learning rules for spike timing-dependent plasticity depend on dendritic synapse location,” J. Neurosci. 26, 10420 (2006).
63. J. Raethjen, M. Lindemann, H. Schmaljohann, R. Wenzelburger, G. Pfister, and G. Deuschl, “Multiple oscillators are causing parkinsonian and essential tremor,” Mov. Disord. 15, 1 (2000).
64. H. Ben-Pazi, H. Bergman, J. A. Goldberg, N. Giladi, D. Hansel, A. Reches, and E. S. Simon, “Synchrony of rest tremor in multiple limbs in Parkinson’s disease: Evidence for multiple oscillators,” J. Neural Transm. 108, 287 (2001).
65. R. C. Helmich, “The cerebral basis of parkinsonian tremor: A network perspective,” Mov. Disord. 33, 2 (2017).
66. P. Brown, “Oscillatory nature of human basal ganglia activity: Relationship to the pathophysiology of Parkinson’s disease,” Mov. Disord. 18, 4 (2003).
67. P. Tass, D. Smirnov, A. Karavaev, U. Barnikol, T. Barnikol, I. Adamchic, C. Hauptmann, N. Pawelcyzk, M. Maarouf, V. Sturm, H. J. Freund, and B. Bezruchko, “The causal relationship between subcortical local field potential oscillations and Parkinsonian resting tremor,” J. Neural Eng. 7, 016009 (2010).
68. R. C. Helmich, M. Hallett, G. Deuschl, I. Toni, and B. R. Bloem, “Cerebral causes and consequences of parkinsonian resting tremor: A tale of two circuits?,” Brain 135, 3206 (2012).
69. G. H. Bishop, M. H. Clare, and J. Price, “Patterns of tremor in normal and pathological conditions,” J. Appl. Physiol. 1, 2 (1948).
70. P. Ashwin, C. Perryman, and S. Wieczorek, “Parameter shifts for nonautonomous systems in low dimension: Bifurcation- and rate-induced tipping,” Nonlinearity 30, 2185 (2017).
71. A. S. von der Heydt, P. Ashwin, C. D. Camp, M. Crucifix, H. A. Dijkstra, P. Ditlevsen, and T. M. Lenton, “Quantification and interpretation of the climate variability record,” Glob. Planet. Change 197, 103399 (2021).
72. X. Tan, Y. Tang, T. Lian, Z. Yao, and D. Chen, “A study of the effects of westerly wind bursts on ENSO based on CESM,” Clim. Dyn. 54, 885 (2020).
73. M. Thiele, “Recurrent Synchronization,” GitHub (2022), https://github.com/maxthiele/RecurrentSynchronization.