Synaptic plasticity plays a fundamental role in neuronal dynamics, governing how connections between neurons evolve in response to experience. In this study, we extend a network model of θ-neuron oscillators to include a realistic form of adaptive plasticity. In place of the less tractable spike-timing-dependent plasticity, we employ recently validated phase-difference-dependent plasticity rules, which adjust coupling strengths based on the relative phases of θ-neuron oscillators. We explore two distinct implementations of this plasticity: pairwise updates to individual coupling strengths and global updates applied to the mean coupling strength. We derive a mean-field approximation and assess its accuracy by comparing it to θ-neuron simulations across various stability regimes. The synchrony of the system is quantified using the Kuramoto order parameter. Through bifurcation analysis and the calculation of maximal Lyapunov exponents, we uncover interesting phenomena such as bistability and chaotic dynamics via period-doubling and boundary crisis bifurcations. These behaviors emerge as a direct result of adaptive coupling and are absent in systems without such plasticity.

A population of coupled neurons with synaptic plasticity may be modeled as an adaptive dynamical network consisting of a very large number of excitable nodes and/or oscillators. One approach to circumvent the complexity of the full network is to consider the mean-field dynamics. Here, we introduce a mean-field model for a population of θ-neurons that are adaptively coupled according to a phase-difference-dependent plasticity rule. In this context, the mean-field dynamics describe important properties of neural synchrony and conductance. We analyze the validity of the model and discover a rich tapestry of dynamics, including multiple routes to chaos, induced by the adaptive coupling used to represent synaptic plasticity.

Synaptic plasticity is the primary mechanism through which the brain learns, stores information, and adapts to new experiences. Extensive experimental evidence demonstrates that the strength of synaptic connections can be dynamically enhanced or diminished in response to experience and neural activity.1–9 Understanding synaptic plasticity is essential for unraveling the neural basis of cognitive functions and developing effective therapeutic interventions for neurological disorders. Both invasive and non-invasive brain stimulation techniques have been shown to induce lasting modifications in neural circuits.10 Among these, transcranial magnetic stimulation (TMS) has demonstrated promising cognitive and behavioral benefits in post-stroke rehabilitation11 and in the treatment of depression.12 However, the precise mechanisms by which TMS elicits these effects and the optimization of its application remain areas of active investigation.

Mathematical models of synaptic plasticity date back to Donald Hebb, who in 1949 proposed that the repeated and persistent co-activation of two or more neurons strengthens their synaptic connections.13 Since then, a plethora of mathematical models have been developed to describe the mechanisms underlying the strengthening and weakening of synaptic connections (see the review by Taherkhani et al.14). Of particular relevance for this study are adaptive network models, which serve as valuable proxies for understanding synaptic plasticity in the brain.15 In general, network adaptivity enriches the repertoire of dynamical behaviors exhibited by networks of oscillators, enabling phenomena such as multistability,16 frequency clustering,17 and the coexistence of incoherence and synchronization.18 Adaptive neuronal oscillator models typically employ phase-difference-dependent plasticity (PDDP), wherein coupling strengths are updated based on the similarity or dissimilarity of the oscillators' phases. Duchet et al. recently demonstrated that, in networks of Kuramoto oscillators, PDDP rules can be effectively equated with spike-timing-dependent plasticity rules.19 They also showed that a symmetric PDDP rule permits the use of the Ott–Antonsen Ansatz20 to derive a low-dimensional mean-field description of the system. In this study, we extend the work of Duchet et al. to the more biologically realistic θ-neuron model with first order synaptic dynamics.

The θ-neuron model is known to admit an exact mean-field reduction.21 When synaptic dynamics are incorporated, this reduction yields a neural mass model driven by a dynamical variable representing population synchrony.22–24 Neural mass models, also known as firing rate models, are commonly used to study coarse-grained neural activity. Traditionally, these models are phenomenological, offering limited connections to the underlying biological mechanisms. In contrast, this new form of the neural mass model, derived from a population of θ-neurons and dubbed the next generation neural mass model, provides an explicit link between the underlying spiking network and the measurable population-level activity. Similarly, Montbrió et al. showed that the quadratic integrate-and-fire (QIF) model also admits an exact low dimensional description and derived a conformal mapping that links their mean-field model to the next generation neural mass model.25 

As the θ-neuron and QIF models include accurate descriptions of the synaptic interactions between cortical cells, it is possible to include plasticity, and other neural mechanisms, in a biologically plausible manner. Laing added a spatial dependence to the neurons’ activity to derive an exact neural field model,26 and later he extended this work to include and study the effect of gap junction coupling.27 Taher et al. extended the model to include short-term synaptic facilitation and depression, exploring their role in working memory.28 Similarly, Gast et al. incorporated short-term adaptation mechanisms to investigate bursting dynamics.29,30

In Sec. II, we present the mean-field model for a population of θ-neurons incorporating a phase-difference-dependent plasticity (PDDP) rule to represent adaptive coupling strengths within the network. The accuracy of the resulting model is validated in Sec. III through comparisons with corresponding full-network dynamics. A bifurcation analysis of the mean-field model is conducted in Sec. IV, followed by a broader discussion of our findings in Sec. V.

We begin by considering a network of N synaptically coupled θ-neurons. Their dynamics are described by the equations
τ_m dθ_j/dt = (1 − cos θ_j) + (1 + cos θ_j)[η_j + v_syn g_j(t)] − g_j(t) sin θ_j, with g_j(t) = (1/N) Σ_{l=1}^{N} k_lj s_l(t),  (1)
τ_s ds_l/dt = −s_l + Σ_m δ(t − t_l^m),  (2)
where θ_j(t) is the phase of neuron j and s_j(t) represents its synaptic conductance. The parameters τ_m and τ_s correspond to the membrane and synaptic timescales, respectively, η_j is the background drive, v_syn is the synaptic reversal potential, k_lj denotes the strength of the synaptic connection from neuron l to neuron j, and t_l^m is the mth spike time of the lth neuron. The θ-neuron model is formally equivalent to the quadratic integrate-and-fire (QIF) model under the transformation v = tan(θ/2). Conductance-based synaptic coupling is incorporated into the model, resulting in the dynamics described by (1). For completeness, the equivalent QIF network formulation is provided in Appendix A.
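This equivalence is easy to check numerically. The snippet below is a small illustrative check of our own (not part of the study): it verifies, at randomly chosen states, that the conductance-based QIF right-hand side, pushed through v = tan(θ/2) and the identity dθ/dt = (1 + cos θ) dv/dt, reproduces the θ-neuron form in (1) as written above.

```python
import numpy as np

# Illustrative check (not from the paper): the QIF dynamics
# tau_m * dv/dt = v^2 + eta + g*(v_syn - v), with v = tan(theta/2) and
# dtheta/dt = (1 + cos(theta)) * dv/dt, reproduce the theta-neuron form (1).
rng = np.random.default_rng(0)
for _ in range(5):
    theta = rng.uniform(-np.pi + 0.1, np.pi - 0.1)  # stay clear of the tan singularity
    eta, g, v_syn = rng.uniform(-5.0, 5.0, size=3)
    v = np.tan(theta / 2)
    lhs = (1 + np.cos(theta)) * (v**2 + eta + g * (v_syn - v))
    rhs = (1 - np.cos(theta)) + (1 + np.cos(theta)) * (eta + g * v_syn) - g * np.sin(theta)
    assert np.isclose(lhs, rhs)
print("QIF and theta-neuron right-hand sides agree")
```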
We employ the PDDP rule of Seliger et al.31 to update the synaptic coupling strengths based on the phase differences between pairs of neurons,
dk_lj/dt = ϵ [α cos(θ_l − θ_j) − k_lj],  (3)
where α controls the strength of plasticity relative to the intrinsic decay of synaptic strength and ϵ is the learning rate.
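In a direct simulation, rule (3) is applied to all N² coupling strengths simultaneously. A minimal vectorized sketch of one forward-Euler update (our illustration, not the authors' code; the array names are generic) is:

```python
import numpy as np

def pddp_step(K, theta, alpha, eps, dt):
    """One forward-Euler step of the pairwise PDDP rule (3), where
    K[l, j] holds k_lj and theta holds the neuron phases."""
    phase_diff = theta[:, None] - theta[None, :]  # matrix of theta_l - theta_j
    return K + dt * eps * (alpha * np.cos(phase_diff) - K)
```

Since each k_lj relaxes toward α cos(θ_l − θ_j) ≤ α, the parameter α acts as an upper bound on the coupling strengths.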
For a network of θ-neurons with uniform coupling strengths, k_lj = k for all j and l, the mean-field dynamics are given as
τ_m dz/dt = −i (z − 1)²/2 + ((z + 1)²/2)(−Δ + i η_0 + i v_syn k s) − (k s/2)(z² − 1),  (4)
τ_s ds/dt = −s + (1/(π τ_m)) Re[(1 − z̄)/(1 + z̄)],  (5)
where s(t) = (1/N) Σ_{j=1}^{N} s_j(t) is the mean synaptic conductance and z(t) = (1/N) Σ_{j=1}^{N} e^{iθ_j(t)} = R(t) e^{iϕ(t)} is the Kuramoto order parameter. The Kuramoto order parameter encapsulates the average phase of the neurons, ϕ(t), and the phase coherence, R(t), i.e., the level of synchrony. The parameters η_0 and Δ describe the mean and width, respectively, of the Lorentzian distribution of background drives.
Equations (4) and (5) correspond to the mean-field model derived by Coombes and Byrne23 in the case of a fixed network topology, with first order synaptic coupling rather than second. To derive a mean-field approximation for the network with PDDP, we must construct an equation for the evolution of the mean coupling strength k̂ = (1/N²) Σ_{j=1}^{N} Σ_{l=1}^{N} k_lj. The governing equation for k̂ is computed as follows:
dk̂/dt = (1/N²) Σ_{j=1}^{N} Σ_{l=1}^{N} dk_lj/dt = ϵ [α (1/N²) Σ_{j,l} cos(θ_l − θ_j) − k̂] = ϵ (α |z|² − k̂),  (6)

where the final equality uses (1/N²) Σ_{j,l} cos(θ_l − θ_j) = |(1/N) Σ_j e^{iθ_j}|² = |z|².
Notice how the mean coupling strength is updated based on the magnitude of the Kuramoto order parameter, i.e., the level of synchrony. In other words, the stronger the synchrony among neurons, the greater the coupling strength becomes. This aligns with Hebb’s postulate that neurons that fire together, wire together. Importantly, this model captures higher-order coupling updates, such that increased synchrony strengthens all connections in the network, rather than focusing solely on individual neuron pairs. By adopting this mean-field approximation, we assume that the synaptic coupling strengths follow a unimodal distribution and forgo the possibility of studying clustering behavior.

The mean-field dynamics are given by (4)–(6) with k = k̂. We note that (6) was derived previously by Duchet et al., but coupled to the Kuramoto model. Here, we combine it with the more biologically plausible θ-neuron network with first order synaptic coupling. The increased complexity of the θ-neuron model introduces a richer bifurcation structure and a wider range of dynamical behaviors compared to simpler oscillator models. These dynamics are explored in Sec. IV.
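For concreteness, the closed mean-field system is straightforward to integrate with standard ODE solvers. The following sketch is a minimal illustration based on the equations as written above, with an arbitrary (hypothetical) initial condition; the parameter defaults follow Fig. 1(b):

```python
import numpy as np
from scipy.integrate import solve_ivp

def mean_field_rhs(t, y, eta0=10.0, Delta=0.5, v_syn=10.0,
                   tau_m=1.0, tau_s=1.0, alpha=2.0, eps=0.1):
    """Right-hand side of (4)-(6); state y = (Re z, Im z, s, k_hat)."""
    z = y[0] + 1j * y[1]
    s, k_hat = y[2], y[3]
    g = k_hat * s                                  # effective synaptic input k*s
    dz = (-0.5j * (z - 1)**2
          + 0.5 * (z + 1)**2 * (-Delta + 1j * eta0 + 1j * v_syn * g)
          - 0.5 * (z**2 - 1) * g) / tau_m          # Eq. (4)
    rate = np.real((1 - np.conj(z)) / (1 + np.conj(z))) / (np.pi * tau_m)
    ds = (-s + rate) / tau_s                       # Eq. (5)
    dk = eps * (alpha * np.abs(z)**2 - k_hat)      # Eq. (6)
    return [dz.real, dz.imag, ds, dk]

sol = solve_ivp(mean_field_rhs, (0.0, 200.0), [0.1, 0.0, 0.0, 1.0], rtol=1e-6)
R = np.hypot(sol.y[0], sol.y[1])                   # synchrony R = |z(t)|
```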

We study the θ-neuron network model (1) and (2), where the pairwise coupling strengths are updated according to (3) (i.e., the network with pairwise or "local" updates). We compare these results with the same θ-neuron network where all coupling strengths are instead updated using the mean evolution equation (6) (i.e., the network with global updates), as well as with the mean-field system (4)–(6) with k = k̂. Figure 1 displays results from numerical simulations of a network of 1000 θ-neurons (see Appendix B for computational details). Each column corresponds to a different value of η_0, showcasing regimes with qualitatively distinct dynamical behaviors. The green, blue, and pink curves show the evolution of the dynamic variables for the network with pairwise updates, the network with global updates, and the mean-field model, respectively. The top row shows the Kuramoto order parameter (z). The second, third, and fourth rows show the time courses for the mean coupling strength (k̂), network synchrony (R = |z|), and mean conductance (s), respectively. The bottom row shows the coupling weight distributions of the network with pairwise updates.

FIG. 1.

Comparison of the mean-field model with networks of 1000 θ-neurons in different regimes: (a) stable node (η_0 = 5), (b) stable spiral (η_0 = 10), (c) limit cycle (η_0 = 25). Network simulations with pairwise (global) updates are shown in green (blue) and the mean-field dynamics are shown in pink. Other parameter values: Δ = 0.5, v_syn = 10, τ_m = 1, τ_s = 1, α = 2, ϵ = 0.1.


For fixed point dynamics, the mean-field model is a good approximation for both the network with local updates and the network with global updates, especially when the fixed point lies close to the boundary |z| = 1 [Fig. 1(a)]. When |z| = 1, the neurons are perfectly synchronized and the phase differences are zero, while for |z| ≈ 1, the heterogeneity in the phase differences is minimal and the distribution of coupling weights has a single narrow peak around the mean coupling strength of 1.9 [Fig. 1(a)(v)]. A finite size analysis, conducted for N = 500, 1000, 2000, and 5000, confirms that the distribution of coupling weights converges to a unimodal distribution, which peaks at 1.9, as N increases (see supplementary material, Fig. S1). This result highlights that the global coupling strength update rule closely approximates the local pairwise update rule in this regime. Although the population-level dynamics settle to a fixed point, the neurons are not necessarily quiescent. As shown by Byrne et al.,22 the firing rate is positive for all |z| < 1 and increases as the dynamics approach z = e^{iπ}, where firing is maximal as all neurons cross threshold simultaneously (see Fig. 8 of Byrne et al.22). In the regime where |z| ≈ 1 and ϕ ≈ π, the majority of neurons are synchronized at rest, and as such, this fixed point corresponds to a population of predominantly quiescent neurons.

As the fixed point moves away from the |z| = 1 boundary, the firing rate increases, and the population of neurons transitions to an active state. This shift away from |z| = 1 also reduces the level of synchrony, introducing greater heterogeneity in the phases. Consequently, the dynamics of the networks with local and global updates begin to diverge [Fig. 1(b)]. Despite this divergence, networks with global updates still closely match the mean-field dynamics. The coupling weight distribution starts to become bimodal [Fig. 1(b)(v)], which suggests that the oscillators may be forming clusters. The finite size analysis confirms that, as N increases, the distribution converges towards a bimodal distribution with peaks at 0.3 and 1.9 (see supplementary material, Fig. S1). Although the mean coupling strength is higher in the network with local updates [Fig. 1(b)(ii)], the Kuramoto order parameter still fluctuates around the mean-field fixed point value. This is particularly apparent in the synchrony time course [Fig. 1(b)(iii)].

For limit cycle dynamics, the mean-field model is a good approximation for the network with global updates only [Fig. 1(c)]. Although the trajectories do not match the network with local updates, they lie in roughly the same region of the Kuramoto order parameter phase plane [Fig. 1(c)(i)]. The mean-field model slightly underestimates the mean coupling strength for the network with global updates and the oscillatory amplitude of the conductance, while slightly overestimating the oscillatory amplitude of synchrony. Despite these discrepancies, the dynamics remain qualitatively similar. As in the previous regime, the coupling weight distribution appears bimodal [Fig. 1(c)(v)] and a finite size analysis confirms that this bimodal distribution persists and stabilizes as N increases (see supplementary material Fig. S1).

We note that although the dynamics for the network with global updates and the mean-field model are qualitatively similar across all three regimes, they are not exactly matched. The primary source of this mismatch is the variability in the background drives η j, which are randomly sampled from a Lorentzian distribution. To illustrate this variability, we present the time course of the mean coupling strength across 15 different realizations of the network simulations (Fig. 2) at three network sizes, namely, N = 500, 1000, and 2000. Zoomed-in versions of these time courses, highlighting the long-term behavior of the system, can be found in the supplementary material (Fig. S2). As the network size increases, the variability across realizations decreases, and the dynamics of the network with global updates converges toward the mean-field dynamics (pink). We see a similar trend for both synchrony R and mean conductance s (see supplementary material, Figs. S3 and S4).

FIG. 2.

Variability in network simulations. Time course of the mean coupling strength for 15 realizations of the θ-neuron network with pairwise (green) and global (blue) updates for 500, 1000, and 2000 neurons. Individual realizations are partially transparent, with the mean across realizations shown as a solid line. Also included are the mean-field dynamics (pink). Parameter values as in Fig. 1.


The variability can be reduced by decreasing the heterogeneity of the individual neurons, i.e., reducing Δ. However, modifying the heterogeneity also changes the qualitative behavior of the system. A more homogeneous system (small Δ) is more likely to synchronize, and if it operates in the active state (as is the case for η_0 = 10), oscillations will emerge. As reported in Fig. 2, oscillatory states exhibit greater variability across realizations than non-oscillatory states. Gast et al. reported that a small adaptation rate (analogous to the learning rate ϵ in our model) improved the agreement between the network dynamics and the mean-field dynamics for a network of quadratic integrate-and-fire neurons with spike-frequency adaptation.30 In contrast, we found that for our synchrony-driven plasticity rule, changing ϵ did not significantly affect the goodness of fit (see supplementary material, Figs. S5, S6, and S7). These findings support our claim that, for a fixed network size, the goodness of fit is determined by the type of dynamics (i.e., node, spiral, or limit cycle), rather than by specific parameter values.

Interestingly, when comparing the dynamics of the mean-field system to the network with global updates, we observed transitions in the network simulations away from the mean-field dynamics (Fig. 3). This behavior indicates the presence of another stable attractor, which was confirmed through bifurcation analysis (see Sec. IV). The transition observed in Fig. 3 suggests that finite size fluctuations in the network model can push the system out of the basin of attraction of the limit cycle attractor and into the basin of attraction of the stable fixed point, where the network is more strongly coupled (light blue curve). Increasing the network size from 500 to 1000 reduces the finite size fluctuations, making it more likely for the network dynamics to remain within the basin of attraction of the limit cycle attractor (dark blue curve). For smaller networks (N = 500), transitions to the stable fixed point occurred in approximately 80% of realizations [Fig. 3(b)], while for larger networks (N = 1000), transitions occurred in only 30% of realizations [Fig. 3(c)]. This behavior can be attributed to the long tail of the Lorentzian distribution, which ensures a non-zero probability of extremely high or low values of η_j. In larger networks, the impact of these outlier neurons on the overall dynamics is minimal; in smaller networks, they have a greater effect. To explore the potential for multistability and to better understand the complex dynamics of the system, we perform a bifurcation analysis in Sec. IV.

FIG. 3.

Bistability. Finite size fluctuations in the network with global updates drive the system to an alternative stable attractor. (a) Kuramoto order parameter for a single realization of the 500-neuron network (light blue) and 1000-neuron network (dark blue), along with the mean-field dynamics (pink). (b) Mean coupling strength k̂ for 10 realizations of the 500-neuron network with the mean-field dynamics (pink). (c) Mean coupling strength k̂ for 10 realizations of the 1000-neuron network with the mean-field dynamics (pink). Parameter values: η_0 = 5.25, Δ = 0.5, v_syn = 10, τ_m = 1, τ_s = 1, α = 25, ϵ = 0.5.


We perform a preliminary bifurcation analysis of the mean-field approximation of the network with PDDP (4)–(6) by means of numerical simulation and continuation. The goal of this analysis is to highlight the richness of the dynamics that emerge when the coupling is made adaptive. Specifically, we focus on the dynamics in terms of the mean background drive, η_0, which is the bifurcation parameter commonly used in previous studies of the model without PDDP, as well as the parameter α, which controls the strength of adaptiveness (i.e., how strongly the coupling strength depends on the state of the system). While this analysis is not exhaustive, it serves as a starting point, with further avenues for detailed exploration discussed in Sec. V. For the numerical analysis, we employ the fourth-order Runge–Kutta method to identify stable behaviors, including fixed points, periodic, quasi-periodic, and chaotic attractors, and generally exclude transient dynamics unless otherwise specified. We use the MATLAB-based continuation package MatCont32 to obtain unstable fixed point solutions and periodic orbits, as well as to calculate their stability properties and identify bifurcations.

To begin, we present a bifurcation diagram of the model without PDDP (i.e., ϵ = 0 and fixed k = 1) to demonstrate the typical behavior of the model with fixed coupling. Figure 4(a) shows a thin curve representing fixed points, in terms of synaptic conductance s, that are either stable (blue) with zero unstable eigendirections or saddle (red) with two unstable eigendirections. The thick blue curve shows the maximum s value of stable periodic orbits emerging from the Hopf bifurcation, represented by the magenta circle. This bifurcation diagram is consistent with previous studies that have shown the system undergoes a supercritical Hopf bifurcation under appropriate parameter conditions.21,23 This relatively simple behavior contrasts with the complex dynamics that arise when adaptive plasticity is incorporated into the model.

FIG. 4.

Bifurcation diagram in η_0 for fixed coupling (k = 1, ϵ = 0) in panel (a) and adaptive coupling (α = 30, ϵ = 0.5) in panels (b) and (c). Thin curves represent fixed points with zero (blue), one (green), or two (red) unstable eigendirections. Thick curves represent periodic orbits with zero (blue), one (green), or two (red) unstable Floquet multipliers. Blue, magenta, black, and red circles represent saddle-node, Hopf, period-doubling, and torus bifurcations, respectively. Other parameter values as in Fig. 3.


Figure 4(b) shows the bifurcation diagram for the same mean-field model, now with adaptive coupling (α = 30 and ϵ = 0.5). The thin green curve represents fixed points with one unstable eigendirection. The thick green and red curves represent saddle periodic orbits with one and two unstable Floquet multipliers, respectively. Blue, black, and red circles indicate saddle-node, period-doubling, and torus bifurcations. Panel (c) shows the same bifurcation diagram in terms of the mean coupling strength k̂.

When the mean background drive is zero (η_0 = 0), the only attractor is a fixed point representing a strongly coupled network. As η_0 increases, the coupling strength decreases until the branch of equilibria loses stability at a saddle-node bifurcation at η_0 ≈ 10. A branch of saddle fixed points leads to stable fixed points, representing a weakly coupled network, via another saddle-node bifurcation. The lower branch of stable fixed points is short-lived, since a supercritical Hopf bifurcation is encountered, producing a branch of stable periodic orbits. The periodic orbits become unstable twice as η_0 increases: they gain one unstable Floquet multiplier between a pair of period-doubling bifurcations and two unstable Floquet multipliers between a pair of torus bifurcations.

The numerous bifurcations evident in Figs. 4(b) and 4(c) spurred an investigation of the bifurcation set in the (η_0, α)-plane. We identified α as a suitable second bifurcation parameter, since it controls the effect of the adaptive coupling and represents an upper bound on the coupling strength. Figure 5(a) shows the saddle-node bifurcations of fixed points, observed in Figs. 4(b) and 4(c), that meet at a cusp bifurcation. Inside the two curves lies a region of bistability. For the parameter values considered here, the upper branch of fixed points remains stable, while the lower branch undergoes numerous bifurcations. Figure 5(b) shows the bifurcations that emerge from the lower branch in Figs. 4(b) and 4(c) continued in the (η_0, α)-plane. In other words, the Hopf (magenta), period-doubling (black), and torus (red) bifurcation points in Figs. 4(b) and 4(c) now correspond to curves in Fig. 5(b). The dashed gray line indicates the value of α used in Fig. 4(c). In addition, we have blue curves representing saddle-node bifurcations of periodic orbits, additional period-doubling curves in black, and shaded regions of the plane displaying maximal Lyapunov exponents (see Appendix C for computational details). For clarity, we only show positive maximal Lyapunov exponents.

FIG. 5.

Bifurcation set in the (η_0, α)-plane involving fixed points only (a) and periodic orbits (b). Panel (a) shows blue curves of saddle-node bifurcations of fixed points. Panel (b) shows curves of Hopf bifurcations (magenta), period-doubling bifurcations (black), torus bifurcations (red), and saddle-node bifurcations of periodic orbits (blue). The labels on the torus bifurcation curve show the root points of the corresponding p:q Arnold tongues. The shaded regions indicate positive maximal Lyapunov exponents. The dashed gray line indicates the value of α used for Figs. 4 and 6. Other parameter values as in Fig. 3.


Figure 5(b) provides valuable insights into the behavior of the system. First, we see that the pair of period-doubling bifurcations shown in Fig. 4(c) belong to the same bifurcation curve in Fig. 5(b). We show further period-doubling bifurcation curves to demonstrate the period-doubling cascade that gives rise to a region of chaotic behavior. We find that solutions within this region possess positive maximal Lyapunov exponents, providing evidence that these solutions are indeed chaotic. Note that the maximal Lyapunov exponents are calculated by incrementally increasing α for fixed values of η_0, so it is plausible that some positive maximal Lyapunov exponents are overlooked due to multistability.

A more detailed bifurcation diagram in η_0 for fixed α = 30, which includes the period-doubling route to chaos, is shown in Fig. 6. The solutions with a positive maximal Lyapunov exponent are highlighted in yellow, revealing the branch of chaotic attractors between the sets of period-doubling cascades.

FIG. 6.

Period-doubling cascade in η_0 for α = 30 in terms of max(k̂). Thin curves represent fixed points with zero (blue), one (green), or two (red) unstable eigendirections, whereby the magenta circle indicates a Hopf bifurcation. Thick curves represent periodic orbits with zero (blue) or one (green) unstable Floquet multipliers, whereby black circles indicate period-doubling bifurcations. Chaotic solutions are shaded yellow. Other parameter values as in Fig. 3.
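A cascade of this kind can be traced by brute force: for each η_0, integrate the mean-field model past its transient and record the local maxima of k̂ on the attractor. The sketch below is our illustration (reusing mean_field_rhs from the sketch in Sec. II; the scan range and integration time are hypothetical choices meant only to bracket the cascade):

```python
import numpy as np
from scipy.integrate import solve_ivp

def khat_maxima(eta0, alpha=30.0, eps=0.5, t_end=600.0):
    """Local maxima of k_hat on the attractor for one value of eta0."""
    sol = solve_ivp(mean_field_rhs, (0.0, t_end), [0.1, 0.0, 0.0, 1.0],
                    args=(eta0, 0.5, 10.0, 1.0, 1.0, alpha, eps),
                    rtol=1e-6, max_step=0.01)
    k = sol.y[3][sol.t > 0.8 * t_end]              # discard the transient
    mid = k[1:-1]
    return mid[(mid > k[:-2]) & (mid > k[2:])]     # interior local maxima

# Scatter the (eta0, maximum) pairs to trace the period-doubling cascade.
for eta0 in np.linspace(8.0, 9.5, 151):
    for km in khat_maxima(eta0):
        print(eta0, km)
```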


Returning to Fig. 5(b), as η_0 is increased further, the periodic orbit born at the Hopf bifurcation loses stability at a torus bifurcation. Our simulations reveal that small invariant tori emerge beyond the torus bifurcation curve, where the periodic orbit is unstable. These tori persist in long simulation runs, suggesting that the torus bifurcations are supercritical (for the range of parameter values considered here). In addition to carrying quasiperiodic solutions, the invariant tori have associated p:q Arnold tongues, as labeled in Fig. 5(b). Generally, where the underlying torus is stable, the Arnold tongue contains a pair of stable and saddle p:q periodic orbits. The boundaries of the tongue are given by saddle-node bifurcations where these two periodic orbits meet and disappear. Here, we show three example Arnold tongues. In accordance with theory,33,34 the 1:5 Arnold tongue is rooted at the point on the torus bifurcation curve where its rotation number (or winding number) becomes 1/5, indicated by the red circle. This means that the two curves of saddle-node bifurcations, which give the boundaries of the Arnold tongue, approach the torus bifurcation curve from the same side and meet (see the inset in Fig. 7 for a clearer depiction). In theory, there exist an infinite number of Arnold tongues, each rooted at an associated point along the torus bifurcation curve with a rational rotation number. The boundaries of the 1:3 and 1:4 Arnold tongues, on the other hand, do not appear to be rooted at their associated resonance points along the torus bifurcation curve. This suggests that these Arnold tongues possess a more complicated geometry, as discussed in Sec. V.

FIG. 7.

Zoom-in on the 1:5 Arnold tongue in Fig. 5 with (black) period-doubling bifurcation curves. The inset shows a further zoom-in on the root point of the 1:5 Arnold tongue. The black cross indicates the parameter values used in Fig. 8. Other parameter values as in Fig. 3.


The maximal Lyapunov exponents in Fig. 5(b) also indicate regions of chaotic behavior due to dynamics on tori for larger values of η_0. The emergence of chaotic regimes resulting from the overlapping of Arnold tongues has been observed in many systems throughout the literature; see, for example, Refs. 35–37. The chaotic attractor may appear and disappear via multiple routes. For example, in Fig. 7, we show three period-doubling curves that exist within the 1:5 Arnold tongue, representing a cascade of period-doubling bifurcations that leads to chaotic behavior. In some cases, the disappearance of chaotic attractors appears to coincide with saddle-node bifurcations of periodic orbits. In such cases, our simulations reveal very long chaotic transients. This suggests that chaotic attractors may disappear at boundary crises, where a chaotic attractor meets the stable manifold of a saddle invariant set.38 To provide further evidence of this, Fig. 8 displays the results of 3500 simulations for parameter values near where the chaotic attractor disappears (η_0 = 8.94 and α = 30; black cross in Fig. 7) using randomly chosen initial conditions, drawn from the former basin of attraction of the chaotic attractor. The black circles on the plot show the fraction of the 3500 simulations that remain on the ghost of the chaotic attractor as a function of time (see Appendix D for computational details). The horizontal axis is scaled by the average chaotic transient time τ̄ = 1920, and the vertical axis has a natural logarithmic scaling. We see that all trajectories eventually leave the ghost of the chaotic attractor and settle to the stable 1:5 periodic orbit. The black circles are in good agreement with the gray curve, which indicates the theoretical (exponential) probability distribution of chaotic transient lengths in the vicinity of a boundary crisis.38

FIG. 8.

The natural logarithm of the fraction of transients remaining on the ghost of the chaotic attractor as a function of time t from numerical simulation (black circles) and theory (gray curve). 3500 transients are calculated for η_0 = 8.94 and α = 30 using random initial conditions within the former basin of attraction of the chaotic attractor. The average time spent on the ghost of the chaotic attractor is τ̄ = 1920. Other parameter values as in Fig. 3.


In this work, we derive a low-dimensional description for a network of θ-neurons with first order synapses and adaptive coupling. We compare the dynamics of the reduced system to the θ-neuron network with both local pairwise coupling updates and global population-wide coupling updates. Our results show that the reduced system accurately describes the evolution of the synchrony and the mean coupling strength for the global updates. For the local update rule, the reduced system only reproduces the network dynamics when the neurons do not cluster, and therefore the coupling weights form a unimodal distribution. The accuracy of the mean-field model is further influenced by the narrowness of this distribution, due to the k = k̂ assumption.

We also present a preliminary bifurcation analysis of the mean-field model to highlight the complex dynamics that arise once the coupling becomes adaptive. Figure 5(b) clearly shows that the more interesting behavior, compared to the fixed coupling case, only occurs when α is sufficiently large. For smaller values of α, only a Hopf bifurcation is encountered as η 0 is varied, such that the behavior is similar to the fixed coupling case. For larger values of α, the periodic orbit created at the Hopf bifurcation undergoes further bifurcations, leading to two distinct regimes of chaotic behavior. One is due to a period-doubling cascade. The other is due to overlapping Arnold tongues created along the torus bifurcation curve.

Phase-difference-dependent plasticity (PDDP) rules are commonly used in adaptive network models as a proxy for spike-timing-dependent plasticity (STDP).16,31,39,40 Duchet et al. studied the validity of this approximation, constructing PDDP rules to fit both symmetric and asymmetric STDP rules, showing that the resulting dynamics and coupling weights closely match.19 In making the mean-field reduction, any notion of spike times is lost, and we are forced to rely on synchrony as a measure of how similar or dissimilar the activities are. For single-cluster states, where neurons exhibit similar activities, the reduced model accurately describes networks with STDP-like plasticity. If the plasticity induces clustering, population-level synchrony is no longer an accurate surrogate for spike-time difference or phase difference. However, the framework presented here is readily extendable to study multi-cluster states via the introduction of multiple populations and/or higher-order Kuramoto–Daido order parameters.

Although STDP has dominated the synaptic plasticity literature, particularly in the field of mathematical and computational neuroscience, it is not the only mechanism for activity-dependent changes in synaptic coupling strengths.41 Synaptic scaling, a global plasticity mechanism, strengthens and weakens all synapses by the same factor. This homeostatic plasticity mechanism acts to stabilize and regulate the network activity, to ensure normal propagation of signals through the network.42,43 Our global update rule, which takes the form of a synaptic scaling law, regulates the network activity based on the population synchrony. In its current form, the plasticity rule promotes synchrony, increasing the coupling strength when the neurons are synchronized, but one could easily promote asynchrony or partially synchronized states by adding a phase shift to the original PDDP rule. Previous studies have examined global update rules based on synchrony in Kuramoto oscillator networks,18,44 but the coupling update in these studies is linear in synchrony, whereas our model has a quadratic dependence. The novelty of our approach lies in deriving this global update rule from a pairwise rule for adaptive oscillators.15,16,31 It is not clear what form of pairwise rule would provide a linear dependence on synchrony. Comparing the network dynamics for the linear update rule with the dynamics for our quadratic update rule would be an interesting follow-up study.

We note that the PDDP rule considered here makes no reference to the actual firing times and instead updates the coupling strengths based on the phase difference at all times. Duchet et al. showed that this simple version of the PDDP rule provides a reasonable approximation of the network and coupling weight dynamics for a network of Kuramoto oscillators with STDP.19 The authors also introduced an event-based PDDP rule whereby the coupling weights are only updated at the spiking phases. Since no distinctly different dynamics were observed with the event-based rule, we opted for the simpler version here, though the framework could easily be extended to incorporate the event-based rule. Another interesting avenue for future work is the plasticity rule proposed by Bergoin et al.45 The authors proposed a 2π-periodic exponential rule with asymmetric potentiation and depression phase intervals. Using the framework of Duchet et al.,19 one could compute the Fourier expansion of this rule and construct an approximate mean-field description of the network dynamics.

Our bifurcation analysis reveals similarities to the dynamics observed by So and Barreto46 in a model of bimodal Kuramoto oscillators with periodic coupling in the thermodynamic limit. In contrast to the PDDP rule considered here, the time-dependent coupling strength considered in Ref. 46 was sinusoidal with an enforced timescale. In other words, the model was non-autonomous with periodic parametric forcing. For certain timescales of the coupling, the authors identified complicated dynamics, including period-doubling routes to chaos and attractor crises, which have also been found here.

There are several interesting avenues for future work on the analysis of the mean-field model considered here. First, our bifurcation analysis focuses on two parameters: the mean of the intrinsic excitabilities, η_0, and the maximum coupling strength, α, which also relates the coupling strength to the state of the system. This choice of parameters allows a direct comparison between the behavior that emerges when network plasticity is included and the behavior of the non-adaptive model, which has been studied in the past.21,23 However, the emergence of complex dynamics in the adaptive system may also depend on other parameters, such as the timescale of plasticity, ϵ, and Δ, which encodes the heterogeneity of the intrinsic excitabilities of the neurons in the network. Previous work has highlighted the key role that Δ plays in controlling system dynamics (e.g., Ref. 25). In fact, in bifurcation diagrams similar to Fig. 4, we find evidence of additional Hopf bifurcations, due to Bogdanov–Takens bifurcations, when Δ is varied. A fully comprehensive bifurcation analysis of the model, which could also benefit from nondimensionalization, would be a significant body of work, and we leave it for a thorough treatment in the future.

The preliminary bifurcation analysis above already hints at intriguing details to be studied in the future. It remains unclear how the 1:3 and 1:4 Arnold tongues shown in Fig. 5(b) emerge from the torus bifurcation curve. We detect curves of torus bifurcations and neutral saddles within these Arnold tongues that could belong to so-called Chenciner bubbles, suggesting that the Arnold tongues attach to the torus bifurcation curve via folds of tori that connect regimes of stable and saddle tori. To show this clearly would require a careful and intricate numerical analysis, as demonstrated by Keane and Krauskopf.47

In Sec. IV, we provide evidence of a boundary crisis route to chaos (Fig. 8). To clearly visualize such crises in this system would require the calculation of higher-dimensional saddle manifolds using advanced numerical techniques; see, for example, the work of Hasan et al.48 and Dankowicz et al.49 A deeper appreciation of the role of saddle manifolds in this system could also be beneficial for understanding the limitations of the mean-field model. In particular, if an attractor of the mean-field model is near the boundary of its basin of attraction, then this mean-field solution may not correspond to a solution of the full-network system, since the variability around its mean state may force the system into a different basin of attraction. For example, in Fig. 3, we observe that the simulation of the mean-field model follows a periodic solution, while the full-network simulation appears to approximately follow the periodic solution for some time before jumping to a different attractor with a larger coupling strength. Figure 9 shows the same phase space projection as Fig. 3(a), containing the relevant invariant sets of the mean-field model. The thick blue curve represents the stable periodic orbit, and the blue/green circle represents the stable/saddle fixed point. The Jacobian of the model evaluated at the saddle has the eigenvalues (3.016, −0.5240 ± 2.224i, −0.5915). As such, the saddle has a one-dimensional unstable manifold and a three-dimensional stable manifold. In Fig. 9, the thin red curve represents the unstable manifold of the saddle fixed point, found by numerically integrating the model forward in time from the saddle. The thin blue curve is an approximation of the strong stable manifold, found by numerically integrating the model backward in time from the saddle. Since the boundary between the basins of attraction is given by the stable manifold of the saddle, Fig. 9 paints a convincing picture that the jump observed in Fig. 3 is due to the natural variability of the full-network model pushing the trajectory from the periodic orbit beyond the stable manifold of the saddle and, therefore, into the basin of attraction of the stable fixed point. However, the picture in Fig. 9 is somewhat misleading, since the stable manifold of the saddle is three-dimensional and only its one-dimensional strong stable component is shown. Moreover, it cannot be ruled out that the close vicinity of the periodic orbit to the strong stable manifold of the saddle in Fig. 9 is due to a fortunate projection from four dimensions onto two. Hence, providing a clear and precise picture of how trajectories interact with saddle manifolds would require their calculation in higher dimensions using more advanced techniques.

FIG. 9.

z-plane of the mean-field model for parameter values used in Fig. 3. Blue/green circles represent stable/saddle fixed points, while the thick blue curve represents a periodic orbit. The thin red and blue curves are approximations of the unstable and strong stable manifolds of the saddle, respectively.


Building on the phase space analysis of mean-field models, Klinshov et al. showed that for a non-adaptive network of quadratic integrate-and-fire neurons, the finite size fluctuations can be included in the mean-field description as an additional shot noise term.50,51 They observed noise-induced switching between attractors in the resulting stochastic model, similar to the transitions observed in Fig. 3. Their framework for calculating state lifetimes could be useful for understanding attractor transitions in our model.

Finally, it could be worthwhile to consider smaller values of ϵ, representing a slower adaptive coupling rate. For sufficiently small ϵ, a multiple time-scale approach using geometric singular perturbation theory52,53 could provide a new perspective on the behavior of our mean-field model, as demonstrated by Ciszak et al.44 for Kuramoto models.

See the supplementary material for additional figures showing the distribution of coupling weights as a function of the network size (Fig. S1), a zoomed-in version of Fig. 2 (Fig. S2), zoomed-in time courses for the network synchrony (Fig. S3) and conductance (Fig. S4) for different network sizes, and the dynamics of the mean coupling strength (Figs. S5 and S7) and the Kuramoto order parameter (Fig. S6) for various learning rates ϵ and levels of heterogeneity Δ.

We thank Cris Hasan for helpful discussions about saddle manifolds in R^4. We also thank Andrew Flynn for advice on computing Lyapunov exponents and Ruarai Goodfellow for assistance in spotting typos in the cumbersome Jacobian matrix of the mean-field model. We are grateful for the feedback from two anonymous reviewers that improved the manuscript. This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant No. 18/CRT/6049. For the purpose of Open Access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. R.L. acknowledges support from the EPSRC Grant Nos. EP/V013068/1, EP/V03474X/1, and EP/Y028872/1.

The authors have no conflicts to disclose.

N. Fennelly and A. Neff are joint first authors.

N. Fennelly: Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). A. Neff: Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). R. Lambiotte: Project administration (equal); Supervision (equal); Writing – review & editing (equal). A. Keane: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Supervision (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Á. Byrne: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Methodology (equal); Project administration (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal).

The data that support the findings of this study are available within the article and its supplementary material.

We consider a network of quadratic integrate-and-fire (QIF) neurons,
τ_m dv_j/dt = v_j² + η_j + g_j(t) (v_syn − v_j), with g_j(t) = (1/N) Σ_{l=1}^{N} k_lj s_l(t).  (A1)
Using the transformation v_j = tan(θ_j/2), we derive the equivalent phase dynamics (1).

We simulate two networks of θ-neuron oscillators (1) and (2): one with pairwise adaptive coupling (3) and the other with global adaptive coupling (6). To address the discontinuities introduced by the Dirac delta at each spike, we employ the Euler method with a fixed time step of dt = 0.0001 for these simulations. Numerical stability tests confirm that this step size is sufficiently small (solutions are unchanged for smaller step sizes). Our primary simulations focus on populations of N = 1000 neurons, while additional simulations with N = 500, N = 2000, and N = 5000 are conducted to assess the validity of the mean-field reduction. Each neuron's background drive η_j is drawn randomly from a Lorentzian distribution with mean η_0 and half-width at half-maximum Δ; simulations are made reproducible by setting random seeds for each run. As the mean-field dynamics are continuous, they are computed using the adaptive step-size solver solve_ivp from SciPy's integrate package. We use the fourth-order Runge–Kutta method with relative tolerance 10^{-6}.
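A minimal sketch of this scheme (our illustration, not the authors' code, and assuming the network equations as written in Sec. II) is given below; spikes are registered when a phase crosses θ = π, at which point the Dirac delta in (2) increments the corresponding conductance by 1/τ_s:

```python
import numpy as np

def simulate_network(N=1000, T=100.0, dt=1e-4, eta0=10.0, Delta=0.5, v_syn=10.0,
                     tau_m=1.0, tau_s=1.0, alpha=2.0, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)              # seeded for reproducibility
    eta = eta0 + Delta * np.tan(np.pi * (rng.random(N) - 0.5))  # Lorentzian draws
    theta = rng.uniform(-np.pi, np.pi, N)
    s = np.zeros(N)
    K = np.ones((N, N))                            # coupling strengths k_lj
    for _ in range(int(T / dt)):                   # note: O(N^2) work per step
        g = s @ K / N                              # g_j = (1/N) sum_l k_lj s_l
        dtheta = ((1 - np.cos(theta)) + (1 + np.cos(theta)) * (eta + v_syn * g)
                  - g * np.sin(theta)) / tau_m
        theta = theta + dt * dtheta
        spiked = theta > np.pi                     # threshold crossing at theta = pi
        theta[spiked] -= 2 * np.pi
        s += -dt * s / tau_s
        s[spiked] += 1.0 / tau_s                   # Dirac delta handled as a jump
        phase_diff = theta[:, None] - theta[None, :]
        K += dt * eps * (alpha * np.cos(phase_diff) - K)   # pairwise PDDP, Eq. (3)
    return theta, s, K
```

Replacing the last update with the scalar rule K += dt * eps * (alpha * abs(np.exp(1j * theta).mean())**2 - K) gives the corresponding network with global updates, Eq. (6).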

In this section, we provide a brief overview of how Lyapunov exponents are computed for the purpose of this paper; for details, we refer to Refs. 54 and 55. Lyapunov exponents are a very useful tool from dynamical systems theory. They measure the exponential convergence or divergence of infinitesimal perturbations. More generally, one may consider the expansion or contraction rate of an n-dimensional volume and compute n orthogonal Lyapunov exponents, referred to as a Lyapunov spectrum. In this paper, we only consider computing the maximal (largest) Lyapunov exponent. The maximal Lyapunov exponent, λ_m, may imply that an attractor is a chaotic attractor (λ_m > 0), a stable limit cycle or at a bifurcation point (λ_m = 0), or a stable fixed point (λ_m < 0).

Considering a time-continuous dynamical system,
dx/dt = f(x),  (C1)
we use the following procedure to compute λ m. First, we numerically integrate the system for long enough to ensure that transients have died away and the system has settled to an attractor A.
If we denote a trajectory on attractor A as x_A(t), then we can consider a perturbation vector p(t), beginning close to A, according to
p(t) = x(t) − x_A(t).  (C2)
Combining Eqs. (C1) and (C2),

dp/dt = f(x_A(t) + p) − f(x_A(t)),

where J(x_A(t)) is the Jacobian matrix of the dynamical system evaluated at x_A(t), which provides us with the linearized time evolution of the perturbation vector

dp/dt ≈ J(x_A(t)) p.  (C3)
Assuming a small initial perturbation, p(0) = p_0, the systems described by Eqs. (C1) and (C3) are numerically integrated in parallel. This is necessary because we need the solution x_A(t) to evaluate the Jacobian matrix in Eq. (C3). We apply the initial perturbation, p_0, and integrate Eq. (C3) for a short time interval, t_s, then calculate

l_1 = ln(‖p(t_s)‖ / ‖p_0‖).

Only a short time interval is considered for this calculation because p(t) will soon become very small or very large, depending on whether it experiences exponential decay or growth. Before integrating further, we normalize the vector p(t_s), preserving the direction of convergence towards or divergence from A. We iteratively calculate l_n at t = n t_s and normalize p again. Then, the maximal Lyapunov exponent is approximately

λ_m ≈ (1/(N t_s)) Σ_{n=1}^{N} l_n,

which will converge for a sufficiently large choice of N. For the calculations presented in this paper, we use the fourth-order Runge–Kutta method with a time step of 0.001, written in C++, and use the values t_s = 5 and N = 1000.
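A schematic implementation of this procedure is sketched below (our illustration; rk4_step and jacobian are assumed helpers that advance Eq. (C1) and evaluate J for the mean-field model, and for brevity the perturbation is advanced with Euler steps rather than RK4):

```python
import numpy as np

def max_lyapunov(x0, rk4_step, jacobian, dt=0.001, t_s=5.0, n_iter=1000, t_trans=500.0):
    """Benettin-style estimate of the maximal Lyapunov exponent lambda_m;
    n_iter plays the role of N in the formula above."""
    x = x0
    for _ in range(int(t_trans / dt)):             # let transients die away
        x = rk4_step(x, dt)
    p = np.random.default_rng(1).normal(size=x.size)
    p /= np.linalg.norm(p)                         # unit initial perturbation p_0
    log_sum = 0.0
    for _ in range(n_iter):
        for _ in range(int(t_s / dt)):             # integrate x and p in parallel
            p = p + dt * (jacobian(x) @ p)         # linearized dynamics, Eq. (C3)
            x = rk4_step(x, dt)
        norm = np.linalg.norm(p)
        log_sum += np.log(norm)                    # l_n = ln ||p(n t_s)||
        p /= norm                                  # renormalize, keep the direction
    return log_sum / (n_iter * t_s)
```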

In Fig. 8, we demonstrate the exponential distribution of durations of transients near the disappearance of a chaotic attractor in order to provide evidence for a boundary crisis.38 In this case, we consider the behavior of the mean-field model for parameter values where there is no chaotic attractor (i.e., η_0 = 8.94 and α = 30, shown by the black cross in Fig. 7), while a chaotic attractor does exist for a slightly larger value of η_0 (e.g., η_0 = 8.96, indicated by the positive maximal Lyapunov exponents in Fig. 7). If we let D ⊂ C × R^2 denote the basin of attraction of the chaotic attractor when η_0 = 8.96, then we expect that trajectories for η_0 = 8.94 with random initial conditions within D will exhibit very long chaotic transients.38 In other words, the trajectories remain on the "ghost" or "remnant" of the chaotic attractor for a very long time before finally converging to an attractor.

The duration of a transient could be approximated by pre-defining a region in phase space that encloses the ghost of a chaotic attractor and then determining how long it takes for the system to leave this region. Here, we take a slightly different approach. In this case, we know that the transient is over once the system approaches a stable 1:5 periodic orbit. Therefore, we approximate the duration of the transient by determining how long it takes for the system to approach the stable 1:5 periodic orbit.

This is done by numerically integrating the mean-field model with a random initial condition within D and recording all the local maxima of the transients. We identify maxima based on the Euclidean norm of the state of the system x = (z, s, k̂). We say that the trajectory has settled on the stable 1:5 periodic orbit once every fifth local maximum is (sufficiently close to being) identical.
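A sketch of this criterion (our illustration; find_peaks stands in for the maxima detection, and the first index at which the difference falls below the tolerance is taken as the end of the transient, a mild simplification of the convergence check described below) is:

```python
import numpy as np
from scipy.signal import find_peaks

def transient_end_index(norm_x, tol=1e-3):
    """Given the time series ||x(t)|| = ||(z, s, k_hat)||, return the sample
    index at which every fifth local maximum first repeats to within tol."""
    peaks, _ = find_peaks(norm_x)                  # indices of local maxima
    vals = norm_x[peaks]
    for i in range(len(vals) - 5):
        if abs(vals[i + 5] - vals[i]) < tol:       # every fifth maximum identical
            return peaks[i]
    return None                                    # trajectory has not yet settled
```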

To illustrate this procedure, Fig. 10 shows the local maxima taken from an example trajectory used to construct Fig. 8. Figure 10(a) shows each local maximum, denoted with a label i (i.e., x_1 represents the first observed local maximum, x_2 the second, etc.). Panel (b) shows the difference between every fifth local maximum. The transient is considered over once the values shown in panel (b) converge to zero with a tolerance of 0.001.
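Given the transient durations extracted in this way, the survival curve of Fig. 8 follows directly. The sketch below (our illustration, assuming a list of measured durations) computes the fraction of transients remaining at each time alongside the theoretical exponential prediction with mean τ̄:

```python
import numpy as np

def survival_fraction(durations, t_grid):
    """Fraction of transients still on the ghost of the chaotic attractor at
    each time in t_grid, plus the exponential prediction exp(-t/tau_bar)."""
    durations = np.asarray(durations)
    tau_bar = durations.mean()                     # average transient time
    fraction = (durations[None, :] > t_grid[:, None]).mean(axis=1)
    return fraction, np.exp(-t_grid / tau_bar)
```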

FIG. 10.

Panel (a) shows the Euclidean norm of local maxima from an example trajectory from the simulation ensemble represented in Fig. 8. The vector x contains the system variables (z, s, k̂), and i labels each consecutive local maximum. Panel (b) shows the difference between every fifth local maximum. Parameter values as in Fig. 8.

1. G. Q. Bi and M. M. Poo, "Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type," J. Neurosci. 18(24), 10464–10472 (1998).
2. H. Markram, J. Lübke, M. Frotscher, and B. Sakmann, "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs," Science 275(5297), 213–215 (1997).
3. D. E. Feldman, "Timing-based LTP and LTD at vertical inputs to layer II/III pyramidal cells in rat barrel cortex," Neuron 27(1), 45–56 (2000).
4. R. C. Froemke and Y. Dan, "Spike-timing-dependent synaptic modification induced by natural spike trains," Nature 416(6879), 433–438 (2002).
5. S. Cassenaer and G. Laurent, "Hebbian STDP in mushroom bodies facilitates the synchronous flow of olfactory information in locusts," Nature 448(7154), 709–713 (2007).
6. M. Sgritta, F. Locatelli, T. Soda, F. Prestori, and E. U. D'Angelo, "Hebbian spike-timing dependent plasticity at the cerebellar input stage," J. Neurosci. 37(11), 2809–2823 (2017).
7. L. F. Abbott and S. B. Nelson, "Synaptic plasticity: Taming the beast," Nat. Neurosci. 3(11s), 1178–1183 (2000).
8. R. K. Mishra, S. Kim, S. J. Guzman, and P. Jonas, "Symmetric spike timing-dependent plasticity at CA3-CA3 synapses optimizes storage and recall in autoassociative networks," Nat. Commun. 7, 11552 (2016).
9. G. G. Turrigiano and S. B. Nelson, "Hebb and homeostasis in neuronal plasticity," Curr. Opin. Neurobiol. 10(3), 358–364 (2000).
10. J. Kricheldorff, K. Göke, M. Kiebs, F. H. Kasten, C. S. Herrmann, K. Witt, and R. Hurlemann, "Evidence of neuroplastic changes after transcranial magnetic, electric, and deep brain stimulation," Brain Sci. 12(7), 929 (2022).
11. M. Starosta, N. Cichon, J. Saluk-Bijak, and E. Miller, "Benefits from repetitive transcranial magnetic stimulation in post-stroke rehabilitation," J. Clin. Med. 11(8), 2149 (2022).
12. P. B. Fitzgerald, "Targeting repetitive transcranial magnetic stimulation in depression: Do we really know what we are stimulating and how best to do it?," Brain Stimulation 14(3), 730–736 (2021).
13. D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory (Wiley, 1949).
14. A. Taherkhani, A. Belatreche, Y. Li, G. Cosma, L. P. Maguire, and T. M. McGinnity, "A review of learning in biologically plausible spiking neural networks," Neural Netw. 122, 253–272 (2020).
15. R. Berner, T. Gross, C. Kuehn, J. Kurths, and S. Yanchuk, "Adaptive dynamical networks," Phys. Rep. 1031, 1–59 (2023).
16. T. Aoki and T. Aoyagi, "Co-evolution of phases and connection strengths in a network of phase oscillators," Phys. Rev. Lett. 102(3), 034101 (2009).
17. T. Aoki, "Self-organization of a recurrent network under ongoing synaptic plasticity," Neural Netw. 62, 11–19 (2015).
18. P. S. Skardal, D. Taylor, and J. G. Restrepo, "Complex macroscopic behavior in systems of phase oscillators with adaptive coupling," Phys. D 267, 27–35 (2014).
19. B. Duchet, C. Bick, and Á. Byrne, "Mean-field approximations with adaptive coupling for networks with spike-timing-dependent plasticity," Neural Comput. 35(9), 1481–1528 (2023).
20. E. Ott and T. M. Antonsen, "Low dimensional behavior of large systems of globally coupled oscillators," Chaos 18(3), 037113 (2008).
21. T. B. Luke, E. Barreto, and P. So, "Complete classification of the macroscopic behavior of a heterogeneous network of theta neurons," Neural Comput. 25, 3207–3234 (2013).
22. Á. Byrne, M. J. Brookes, and S. Coombes, "A mean field model for movement induced changes in the beta rhythm," J. Comput. Neurosci. 43(2), 143–158 (2017).
23. S. Coombes and Á. Byrne, Next Generation Neural Mass Models (Springer International Publishing, Cham, 2019), pp. 1–16.
24. Á. Byrne, R. D. O'Dea, M. Forrester, J. Ross, and S. Coombes, "Next-generation neural mass and field modeling," J. Neurophysiol. 123(2), 726–742 (2020).
25. E. Montbrió, D. Pazó, and A. Roxin, "Macroscopic description for networks of spiking neurons," Phys. Rev. X 5, 021028 (2015).
26. C. R. Laing, "Derivation of a neural field model from a network of theta neurons," Phys. Rev. E 90, 010901 (2014).
27. C. R. Laing, "Exact neural fields incorporating gap junctions," SIAM J. Appl. Dyn. Syst. 14(4), 1899–1929 (2015).
28. H. Taher, A. Torcini, and S. Olmi, "Exact neural mass model for synaptic-based working memory," PLoS Comput. Biol. 16(12), e1008533 (2020).
29. R. Gast, H. Schmidt, and T. R. Knösche, "A mean-field description of bursting dynamics in spiking neural networks with short-term adaptation
,”
Neural Comput.
32
(
9
),
1615
1634
(
2020
).
30.
R.
Gast
,
T. R.
Knösche
, and
H.
Schmidt
, “
Mean-field approximations of networks of spiking neurons with short-term synaptic plasticity
,”
Phys. Rev. E
104
,
44310
(
2021
).
31.
P.
Seliger
,
S. C.
Young
, and
L. S.
Tsimring
, “
Plasticity and learning in a network of coupled phase oscillators
,”
Phys. Rev. E
65
(
4
),
7
(
2002
).
32.
A.
Dhooge
,
W.
Govaerts
,
Y. A.
Kuznetsov
,
H.
Gaétan Ellart Meijer
, and
B.
Sautois
, “
New features of the software matcont for bifurcation analysis of dynamical systems
,”
Math. Comput. Model. Dyn. Syst.
14
(
2
),
147
175
(
2008
).
33.
V. I.
Arnold
,
Geometrical Methods in the Theory of Ordinary Differential Equations
(
Springer Science & Business Media
,
2012
), Vol. 250.
34.
Y. A.
Kuznetsov
,
I. A.
Kuznetsov
, and
Y.
Kuznetsov
,
Elements of Applied Bifurcation Theory
(
Springer
,
1998
), Vol. 112.
35.
A.
Cumming
and
P. S.
Linsay
, “
Deviations from universality in the transition from quasiperiodicity to chaos
,”
Phys. Rev. Lett.
59
(
15
),
1633
(
1987
).
36.
A.
Keane
,
B.
Krauskopf
, and
C.
Postlethwaite
, “
Investigating irregular behavior in a model for the El Niño Southern Oscillation with positive and negative delayed feedback
,”
SIAM J. Appl. Dyn. Syst.
15
(
3
),
1656
1689
(
2016
).
37.
M. L.
Heltberg
,
S.
Krishna
,
L. P.
Kadanoff
, and
M. H.
Jensen
, “
A tale of two rhythms: Locked clocks and chaos in biology
,”
Cell Syst.
12
(
4
),
291
303
(
2021
).
38.
C.
Grebogi
,
E.
Ott
, and
J. A.
Yorke
, “
Crises, sudden changes in chaotic attractors, and transient chaos
,”
Physica D
7
(
1–3
),
181
200
(
1983
).
39.
L.
Lücken
,
O. V.
Popovych
,
P. A.
Tass
, and
S.
Yanchuk
, “
Noise-enhanced coupling between two oscillators with long-term plasticity
,”
Phys. Rev. E
93
(
3
),
032210
(
2016
).
40.
R.
Berner
,
E.
Schöll
, and
S.
Yanchuk
, “
Multiclusters in networks of adaptively coupled phase oscillators
,”
SIAM J. Appl. Dyn. Syst.
18
(
4
),
2227
2266
(
2019
).
41.
A.
Citri
and
R. C.
Malenka
, “
Synaptic plasticity: Multiple forms, functions, and mechanisms
,”
Neuropsychopharmacology
33
(
1
),
18
41
(
2008
).
42.
G.
Turrigiano
, “
Homeostatic synaptic plasticity: Local and global mechanisms for stabilizing neuronal function
,”
Cold Spring Harb. Perspect. Biol.
4
,
a005736
(
2012
).
43.
S.
Berberich
,
J.
Pohle
,
M.
Pollard
,
J.
Barroso-Flores
, and
G.
Köhr
, “
Interplay between global and pathway-specific synaptic plasticity in CA1 pyramidal cells
,”
Sci. Rep.
7
(
1
),
17040
(
2017
).
44.
M.
Ciszak
,
F.
Marino
,
A.
Torcini
, and
S.
Olmi
, “
Emergent excitability in populations of nonexcitable units
,”
Phys. Rev. E
102
,
050201
(
2020
).
45.
R.
Bergoin
,
A.
Torcini
,
G.
Deco
,
M.
Quoy
, and
G.
Zamora-López
, “
Inhibitory neurons control the consolidation of neural assemblies via adaptation to selective stimuli
,”
Sci. Rep.
13
(
1
),
6949
(
2023
).
46.
P.
So
and
E.
Barreto
, “
Generating macroscopic chaos in a network of globally coupled phase oscillators
,”
Chaos
21
,
033127
(
2011
).
47.
A.
Keane
and
B.
Krauskopf
, “
Chenciner bubbles and torus break-up in a periodically forced delay differential equation
,”
Nonlinearity
31
(
6
),
R165
(
2018
).
48.
C. R.
Hasan
,
B.
Krauskopf
, and
H. M.
Osinga
, “
Saddle slow manifolds and canard orbits in R 4 and application to the full Hodgkin–Huxley model
,”
J. Math. Neurosci.
8
,
1
48
(
2018
).
49.
H.
Dankowicz
,
Y.
Wang
,
F.
Schilder
, and
M. E.
Henderson
, “
Multidimensional manifold continuation for adaptive boundary-value problems
,”
J. Comput. Nonlinear Dyn.
15
(
5
),
051002
(
2020
).
50.
V. V.
Klinshov
and
S. Y.
Kirillov
, “
Shot noise in next-generation neural mass models for finite-size networks
,”
Phys. Rev. E
106
,
L062302
(
2022
).
51.
V. V.
Klinshov
,
P. S.
Smelov
, and
S. Y.
Kirillov
, “
Constructive role of shot noise in the collective dynamics of neural networks
,”
Chaos
33
(
6
),
061101
(
2023
).
52.
N.
Fenichel
, “
Geometric singular perturbation theory for ordinary differential equations
,”
J. Differ. Equ.
31
(
1
),
53
98
(
1979
).
53.
C. K. R. T.
Jones
,
Geometric Singular Perturbation Theory
(
Springer
,
Berlin
,
1995
), pp.
44
118
.
54.
A. M.
Lyapunov
, “
The general problem of the stability of motion
,”
Int. J. Control
55
(
3
),
531
534
(
1992
).
55.
A.
Pikovsky
and
A.
Politi
,
Lyapunov Exponents: a Tool to Explore Complex Dynamics
(
Cambridge University Press
,
2016
).