We derive a mean-field approximation for the macroscopic dynamics of large networks of pulse-coupled theta neurons in order to study the effects of different network degree distributions and degree correlations (assortativity). Using the ansatz of Ott and Antonsen [Chaos 18, 037113 (2008)], we obtain a reduced system of ordinary differential equations describing the mean-field dynamics, with significantly lower dimensionality compared with the complete set of dynamical equations for the system. We find that, for sufficiently large networks and degrees, the dynamical behavior of the reduced system agrees well with that of the full network. This dimensional reduction allows for an efficient characterization of system phase transitions and attractors. For networks with tightly peaked degree distributions, the macroscopic behavior closely resembles that of fully connected networks previously studied by others. In contrast, networks with highly skewed degree distributions exhibit different macroscopic dynamics due to the emergence of degree dependent behavior of different oscillators. For nonassortative networks (i.e., networks without degree correlations), we observe the presence of a synchronously firing phase that can be suppressed by the presence of either assortativity or disassortativity in the network. We show that the results derived here can be used to analyze the effects of network topology on macroscopic behavior in neuronal networks in a computationally efficient fashion.
In April 2013, the U.S. President announced “The Brain Initiative,” an extensive, long-range plan of scientific research on human brain function. Computer modeling of brain neural dynamics is an important component of this long-term overall effort. A barrier to such modeling is the practical limit on computer resources, given the enormous number of neurons in the human brain (∼10^11). Our work addresses this problem by developing a method for obtaining low dimensional macroscopic descriptions for functional groups consisting of many neurons. Specifically, we formulate a mean-field approximation to investigate macroscopic network effects on the dynamics of large systems of pulse-coupled neurons and use the ansatz of Ott and Antonsen to derive a reduced system of ordinary differential equations describing the dynamics. We find that solutions of the reduced system agree with those of the full network. This dimensional reduction allows for more efficient characterization of system phase transitions and attractors. Our results show the utility of these dimensional reduction techniques for analyzing the effects of network topology on macroscopic behavior in neuronal networks.
I. INTRODUCTION

Networks of coupled oscillators have been shown to have a wide variety of biological, physical, and engineering applications.1–13 In modeling the dynamics of such networks, simulating the microscopic behavior at each node can be a computationally intensive task, especially when the network is extremely large. In this regard, we note that the dimension reduction analyses in Refs. 14–16 have recently proved to be very effective and have been used to derive the macroscopic behavior of large systems of coupled dynamical units in a variety of settings.10,11,17–22 In particular, Refs. 8 and 10–12 consider networks with globally coupled neurons and use these dimension reduction techniques to analyze the macroscopic behavior of the systems.
In 1986, Ermentrout and Kopell23 introduced the theta neuron model. Their work, along with later studies by Ermentrout24 and by Izhikevich,25 established the applicability of the theta neuron model for studying networks of Class I excitable neurons (as defined by Hodgkin,26 i.e., those neurons whose activity lies near the transition between a resting state and a state of periodic spiking and that can exhibit spiking with arbitrarily low frequencies).
Previous studies modeling networks of theta neurons8,18,22,27 have generally been restricted to particular classes of network topologies. In this paper, we study the macroscopic dynamics of pulse-coupled theta neurons on networks with fairly general topologies, including arbitrary degree distributions and correlations between the degrees of nodes at opposite ends of a link, resulting in so-called “assortativity” or “disassortativity.”28 Assortativity (disassortativity) occurs when network nodes connect preferentially to those with similar (different) degrees. We note that studies29–33 have shown the biological relevance of assortativity. Motivated by the results of Restrepo and Ott17 on networks of Kuramoto oscillators, we use a mean field approach in conjunction with the analytical techniques developed by Ott and Antonsen14–16 to study the behavior of pulse-coupled theta neurons on networks with arbitrary degree distributions and assortativity. We obtain a reduced system of equations describing the mean-field dynamics of the system, with lower dimensionality compared with the complete set of dynamical equations for the system. This allows us to examine the behavior of the network under various conditions in a computationally efficient fashion. We primarily use the example of a highly skewed degree distribution as an application of the obtained dynamical equations for the order parameter and observe the existence of a partially resting (PR) phase, an asynchronously firing (AF) phase, and a synchronously firing (SF) phase that is sensitive to the presence of assortativity or disassortativity in the network. We also demonstrate that, in contrast to networks with sharply peaked degree distributions, networks with highly skewed degree distributions exhibit different macroscopic dynamics due to the emergence of degree-dependent behavior of different oscillators.
The remainder of this paper is organized as follows: In Sec. II, we describe the model of pulse-coupled theta neurons used on an arbitrary network. In Sec. III, we set up a mean field description of the behavior on the network and then (Sec. IV) show how the methods developed by Ott and Antonsen14–16 can be used to write a low dimensional set of equations describing the dynamics of the mean field order parameter. In Sec. V, we then use this low dimensional system to describe the behavior of the system under different parameters and network topologies. Section VI concludes the paper with further discussion and a summary of the main results.
II. THE MODEL
The theta neuron model encodes the dynamics of a single neuron in isolation as follows:

dθ/dt = (1 − cos θ) + (1 + cos θ)η, (1)
where θ represents the neuron's state and the parameter η specifies its excitability. The dynamics can be visualized as a point traveling around the unit circle (Fig. 1). A neuronal spike is said to occur each time the phase angle of the neuron, θ, crosses the leftmost point at θ = π. When η < 0, there are two zeros of the right hand side of Eq. (1), representing a stable rest state (solid circle in Fig. 1(a)) and an unstable equilibrium (open circle in Fig. 1(a)). Thus, starting from a typical initial condition, the state of the neuron goes towards the stable equilibrium at the rest state represented by the filled circle. A resting neuron will spike if an external force pushes its state (i.e., the angle θ) from the rest state past the unstable equilibrium (termed the “spiking threshold”). As η is increased above 0, the neuron undergoes a saddle-node bifurcation on an invariant circle (SNIC). In this case, there are no fixed points (i.e., no zeros of the right hand side of Eq. (1)), and the neuron now fires periodically, as shown in Fig. 1(c). Note that the neuron does not move at the same rate along the entire circle and may go faster or slower around θ = π depending on whether η is less than or greater than 1, respectively (e.g., see the plot of θ versus time in Fig. 1(c)).
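The single-neuron picture above is easy to verify numerically. The following is a minimal sketch (ours, not from the paper) that Euler-integrates Eq. (1) and counts spikes as crossings of θ = π; the function name, time step, and horizon are arbitrary choices:

```python
import numpy as np

def theta_neuron(eta, theta0=0.0, dt=1e-3, t_max=50.0):
    """Euler-integrate a single theta neuron,
    d(theta)/dt = (1 - cos theta) + (1 + cos theta) * eta,
    counting spikes as crossings of theta = pi (mod 2*pi)."""
    n_steps = int(t_max / dt)
    theta = np.empty(n_steps)
    theta[0] = theta0
    spikes = 0
    for i in range(1, n_steps):
        th = theta[i - 1]
        theta[i] = th + dt * ((1.0 - np.cos(th)) + (1.0 + np.cos(th)) * eta)
        # the unwrapped phase passes pi + 2*pi*k exactly when this floor increments
        if np.floor((theta[i] - np.pi) / (2 * np.pi)) > np.floor((th - np.pi) / (2 * np.pi)):
            spikes += 1
    return theta, spikes
```

With η = −2 the trajectory settles into the stable rest state and never spikes, while with η = 1 the phase advances at the constant rate 2 and spikes periodically, matching the two regimes of Fig. 1.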
The theta neuron model can be extended from a single neuron in isolation to networks of neurons. We consider a system of N theta neurons coupled together in a general network via pulse-like synaptic signals, Ii, to each neuron i:

dθ_i/dt = (1 − cos θ_i) + (1 + cos θ_i)(η_i + I_i), (2)

I_i = (K/⟨k⟩) Σ_j A_ij P_n(θ_j), (3)
where Aij is the adjacency matrix of the network; Aij = 1 if there is a directed edge from node j to node i and Aij = 0 otherwise. The average degree is then given by ⟨k⟩ = (1/N) Σ_{i,j} A_ij. The function P_n(θ) = d_n(1 − cos θ)^n represents the pulse-like synapse, whose sharpness is controlled by the integer parameter n. The normalization constant d_n is determined so that ∫₀^{2π} P_n(θ) dθ = 2π, giving d_n = 2^n (n!)²/(2n)!. Note that in the case of a fully connected network, where Aij = 1 for all i and j, this model reduces to that of Luke et al.8
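The full network model can be sketched along the same lines. This is a minimal illustration, not a production implementation: the function name, integration scheme, and defaults are our choices, and the closed form used for d_n assumes the normalization ∫₀^{2π} P_n(θ) dθ = 2π:

```python
import math
import numpy as np

def simulate_network(A, eta, K, n=2, dt=1e-3, steps=1000, seed=0):
    """Euler-step a pulse-coupled theta network (our sketch of the model):
    dtheta_i/dt = (1 - cos theta_i) + (1 + cos theta_i) * (eta_i + I_i),
    I_i = (K / <k>) * sum_j A_ij * P_n(theta_j), P_n = d_n (1 - cos theta)^n."""
    N = A.shape[0]
    theta = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, N)
    k_mean = A.sum() / N                       # average degree
    # assumed normalization: integral of P_n over one period equals 2*pi
    d_n = 2.0**n * math.factorial(n)**2 / math.factorial(2 * n)
    for _ in range(steps):
        P = d_n * (1.0 - np.cos(theta))**n     # pulse emitted by each neuron
        I = (K / k_mean) * (A @ P)             # synaptic input to each neuron
        theta = theta + dt * ((1.0 - np.cos(theta))
                              + (1.0 + np.cos(theta)) * (eta + I))
    return theta
```

For large N this dense matrix-vector product is the expensive step, which is the cost the dimension reduction of Sec. IV avoids.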
III. MEAN FIELD FORMULATION
We consider the limit of many neurons, N ≫ 1, and assume that the network is randomly generated from a given degree distribution P(k) (normalized such that Σ_k P(k) = 1), where k, the node degree, represents a two-vector of the in-degree and the out-degree (kin and kout). Additionally, we consider an assortativity function a(k′ → k), which specifies the probability of a link from a node of degree k′ to one of degree k. In this N → ∞ limit, we assume that the state of the neurons can be represented by a continuous probability distribution, f(θ, η; k, t), such that f(θ, η; k, t) dθ dη is the probability that a node of degree k has an excitability parameter in the range [η, η + dη] and a phase angle in the range [θ, θ + dθ] at time t. Since we are assuming that the excitability parameters do not vary with time, we define g(η; k) = ∫₀^{2π} f(θ, η; k, t) dθ, which is the time-independent distribution of the excitability parameters ηi in the network for a randomly chosen node of degree k.
In order to describe the synchronization behavior of this system, we define the order parameter to be34

R(t) = (1/N) Σ_j e^{iθ_j(t)}. (4)
As in the previous work by Restrepo and Ott,17 we hypothesize that in networks with large nodal degrees, the order parameter can be well approximated via a mean field order parameter, defined by a continuum version of Eq. (4):

R̄(t) = Σ_k P(k) ∫∫ f(θ, η; k, t) e^{iθ} dη dθ. (5)
Additionally, the distribution f is constrained by the continuity equation,

∂f/∂t + ∂(f v_θ)/∂θ = 0, (6)
where v_θ is the continuous version of the right hand side of Eq. (2),

v_θ(θ, η; k, t) = (1 − cos θ) + (1 + cos θ)[η + I_k(t)], (7)

in which I_k(t) is the mean field input to a node of degree k, obtained from Eq. (3) by replacing the sum over network neighbors with an average over degrees weighted by the degree distribution and the assortativity function.
IV. DIMENSION REDUCTION
We then use the binomial theorem to expand the pulse function P_n(θ): writing cos θ = (e^{iθ} + e^{−iθ})/2, the pulse P_n(θ) = d_n(1 − cos θ)^n becomes a finite Fourier series containing the harmonics e^{iqθ} with −n ≤ q ≤ n.
If we now assume a Lorentzian distribution of the excitability parameters,

g(η; k) = (Δ(k)/π) / {[η − η₀(k)]² + Δ(k)²},

with poles at η = η₀(k) ± iΔ(k) in the complex η plane. This now allows us to rewrite v_θ in terms of powers of e^{iθ} as
Inserting the forms for g and h from Eqs. (14) and (15) into this expression and evaluating each quantity at the pole of g in the lower half of the complex η plane, η = η₀(k) − iΔ(k), we obtain a reduced system of equations for the amplitude b(k, t), describing the mean field dynamics of the neuronal network,
The mean field order parameter, R̄(t), can now be written in terms of b(k, t). Using the assumed form for f, we can evaluate the integrals in Eq. (5) using Cauchy's residue theorem to obtain
For the discussion in this paper, we will restrict the assortativity function to be of the form used previously by Restrepo and Ott17
where the proportionality constant is defined to ensure that a(k′ → k) is a valid probability (i.e., 0 ≤ a(k′ → k) ≤ 1), and
where c is a parameter used to vary the network assortativity (with c > 0 and c < 0 corresponding to assortative and disassortative networks, respectively). In networks with neutral assortativity (c = 0), the probability of forming a link between two nodes is simply proportional to the out-degree of the source node and the in-degree of the target node.
The in-out Pearson assortativity coefficient, r, is a statistic used to characterize the overall assortativity of a network and is defined35 as

r = (⟨k′_out k_in⟩_e − ⟨k′_out⟩_e ⟨k_in⟩_e) / (σ_e[k′_out] σ_e[k_in]), (21)

with ⟨·⟩_e and σ_e denoting the mean and standard deviation computed over edges,
where the edge sum ∑e runs over all edges connecting a node of degree k′ to a node of degree k.36,38 Assuming that the in- and out-degree distributions are independent, we can relate the assortativity coefficient to the parameter c as
which can be seen by noting that, in our mean field formulation, the sum over edges of a quantity x(k′, k), defined on each edge connecting a node of degree k′ to a node of degree k, is given by Σ_e x = N² Σ_{k,k′} P(k) P(k′) a(k′ → k) x(k′, k).
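For a concrete network realization, the in-out Pearson coefficient can also be estimated directly from an edge list as the correlation, over directed edges, between the source's out-degree and the target's in-degree. A small sketch (ours; the function name is hypothetical):

```python
import numpy as np

def in_out_assortativity(edges, k_out, k_in):
    """Pearson correlation, over directed edges (j -> i), between the
    out-degree of the source j and the in-degree of the target i.
    `edges` is a sequence of (source, target) index pairs;
    k_out and k_in are per-node degree arrays."""
    src = np.array([e[0] for e in edges])
    dst = np.array([e[1] for e in edges])
    x = k_out[src].astype(float)   # source out-degrees, one entry per edge
    y = k_in[dst].astype(float)    # target in-degrees, one entry per edge
    return np.corrcoef(x, y)[0, 1]
```

Edges that always pair high-out sources with high-in targets give r near +1; the reversed pairing gives r near −1, matching the sign convention for assortative versus disassortative networks.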
The expression for the assortativity coefficient as a function of c, Eq. (22), is unbounded, while the Pearson assortativity is by definition bounded between −1 and 1. This difference arises because, for sufficiently large c, the assortativity function given in Eq. (20) is not a probability. However, for the network parameters used in our numerical example below, we find that Eq. (22) is very accurate for , corresponding to an assortativity range, .
If we assume that the excitability parameters are drawn from a degree-independent distribution (g(η; k) = g(η)) and the amplitudes b(k, t) are given k-independent identical initial conditions, b(k, 0) = b₀, then there are a few notable cases in which particular degree distributions and our chosen assortativity function, Eq. (20), allow for further dimensional reduction. For networks with a delta-function degree distribution, P(k) = δ(k − k̂), Eq. (17) reduces to a single equation describing the mean field dynamics,
We note that this equation is identical to earlier results for a fully connected network.8 Thus, networks with only a single allowed degree have asymptotic dynamics identical to those of a fully connected network. This result is consistent with analogous results by Barlev et al.19 for a network of Kuramoto oscillators. More generally, if the network has a fixed in-degree, k_in = k̂_in, the system is similarly reduced to the single dynamical equation, Eq. (23). On the other hand, if the out-degree is fixed, k_out = k̂_out, then the dynamics of b(k, t) is independent of k_out, further reducing the dimensionality of the problem.
A. Reduction efficiency
Equation (17) represents a reduction of the original system of N theta neurons to a system with as many equations as there are k values in the support of the degree distribution P(k). We denote this quantity by M, which, in the case of independent in- and out-degree distributions, is equal to Min × Mout, where Min and Mout are the numbers of possible in-degrees and out-degrees, respectively. In general, simulating the full network, Eq. (2), requires O(N⟨k⟩) floating point operations per time step. Using the form of the assortativity function given in Eq. (20), the sum over k′ in the reduced system of equations can be split into two sums, each independent of k,
Since the two sums in Eq. (24) are independent of k, each must be calculated only once per simulation iteration. Thus, simulating the reduced system, Eq. (17), only requires O(M) floating point operations per time step—O(M) operations performed once for each of these two sums and O(1) operations for each of the M equations. In many cases, M ≪ N, so that simulating Eq. (17) is significantly more efficient than simulating the full network. Furthermore, if c is set to 0, which is the case of networks with neutral assortativity, then the mean field input will have no dependence on kout, and hence, the overall problem is reduced to Min independent equations, allowing even greater computational efficiency.
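The split-sum bookkeeping can be illustrated generically. Below, the assortativity kernel is assumed to separate into two products, a(k′ → k) = α(k′)β(k) + c·γ(k′)δ(k), with hypothetical factors α, β, γ, δ standing in for those arising from Eq. (20); the O(M²) double sum then collapses to two precomputed O(M) sums:

```python
import numpy as np

def coupled_sums_naive(a_of, b, kp_list, k_list):
    """O(M^2) evaluation of S(k) = sum_{k'} a(k' -> k) * b(k')."""
    return np.array([sum(a_of(kp, k) * b[i] for i, kp in enumerate(kp_list))
                     for k in k_list])

def coupled_sums_split(alpha, beta, gamma, delta, b, kp_list, k_list, c):
    """O(M) evaluation when a(k' -> k) = alpha(k')*beta(k) + c*gamma(k')*delta(k):
    the two k'-sums are computed once and reused for every target degree k."""
    s1 = sum(alpha(kp) * b[i] for i, kp in enumerate(kp_list))
    s2 = sum(gamma(kp) * b[i] for i, kp in enumerate(kp_list))
    return np.array([s1 * beta(k) + c * s2 * delta(k) for k in k_list])
```

Both routines return identical values; only the operation count differs, which is the source of the O(M) per-step cost quoted above.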
Since the quantities entering the summand of Eq. (17) are smoothly varying functions of k, we can achieve further dimensional reduction by interpolating the summand in Eq. (17) using a coarse-grained grid of k values. In particular, Eq. (17) is not solved for all of the values of k but only for the small subset of k values that lie on the coarse-grained grid in k-space. The summands on the right hand side of Eq. (17) at k values not on the grid are approximated by a bilinear interpolation of the values at the surrounding chosen k values. To perform the bilinear interpolation, we first interpolate linearly between neighboring grid values in one direction. The value of the summand at a given k value is then approximated by linearly interpolating in the other direction between values estimated with the previous linear interpolation. We find that using as few as 10% of the network degrees yields very accurate results, while an even coarser interpolation still produces the same qualitative behavior, as can be seen in Fig. 2.
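The bilinear interpolation step described above can be written in a few lines; this sketch (ours) assumes a rectangular coarse grid in the two degree coordinates:

```python
import numpy as np

def bilinear(grid_x, grid_y, values, x, y):
    """Bilinear interpolation of `values` (sampled on the coarse grid
    grid_x x grid_y, both sorted ascending) at a query point (x, y):
    first interpolate linearly in x, then linearly in y."""
    i = np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2)
    j = np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2)
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    return ((1 - tx) * (1 - ty) * values[i, j]
            + tx * (1 - ty) * values[i + 1, j]
            + (1 - tx) * ty * values[i, j + 1]
            + tx * ty * values[i + 1, j + 1])
```

Because the scheme is exact for functions linear in each coordinate, smooth summands are recovered accurately even from a coarse grid, which is why a 10% grid suffices in practice.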
V. NUMERICAL SIMULATIONS AND RESULTS
In the following examples, we consider a directed network of N = 5000 nodes, with in- and out-degrees chosen from independent, identical, highly skewed distributions. In particular, we use a truncated power law distribution given by

P(k) = A k^(−γ) for kmin ≤ k ≤ kmax, and P(k) = 0 otherwise. (25)
The exponent of the power law component, γ, was set to 3, and kmin and kmax were set to 750 and 2000, respectively. As mentioned earlier, the normalization constant A is chosen to make Σ_k P(k) = 1. We will also set the parameter n controlling the sharpness of the synaptic pulse to 2 for all examples considered and will use an interpolation level of 10% for all calculations using the reduced system of equations for the mean field theory (cf. Fig. 2).
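Degrees from this truncated power law can be drawn by normalizing explicitly over the finite support. A sketch (ours), with the paper's parameter values as defaults; for these values the sample mean lands near 1090, consistent with the average degree quoted in the limit-cycle comparison below:

```python
import numpy as np

def sample_degrees(N, gamma=3.0, k_min=750, k_max=2000, seed=0):
    """Draw N integer degrees from the truncated power law
    P(k) = A * k**(-gamma) on k_min <= k <= k_max, with A fixed by
    normalizing over the finite support."""
    rng = np.random.default_rng(seed)
    ks = np.arange(k_min, k_max + 1)
    p = ks.astype(float) ** (-gamma)
    p /= p.sum()                     # plays the role of the constant A
    return rng.choice(ks, size=N, p=p)
```

Because the support is finite, no rejection sampling is needed; the same array `p` also serves directly as P(k) in the reduced equations.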
From numerical simulations of the reduced equations, Eq. (17), we find that the long term dynamics of the order parameter can be broadly classified into one of three phases: (1) the partially resting (PR) phase, (2) the asynchronously firing (AF) phase, and (3) the synchronously firing (SF) phase. The PR phase and the AF phase appear as fixed points in the dynamics of the order parameter, whereas the SF phase appears as a limit cycle of the order parameter.
A. Fixed points
As a particular example to illustrate the different types of fixed points, we look at a network with neutral assortativity (c = 0) having excitability parameters distributed according to a Lorentzian distribution with mean η0 = −2 and width Δ = 0.1 (Fig. 3).
When the network is in the PR phase, the order parameter goes to a fixed point that lies near the edge of the unit circle (|R̄| ≈ 1). In this phase, most of the individual neurons in the network are independently in their resting states, in a fashion similar to Fig. 1(a). This corresponds to the case of K = 1 in Fig. 3(a), in which the fixed point, marked in black, is located near the edge of the unit circle. Further, the time series of a few randomly chosen neurons (Fig. 3(b)) demonstrates that almost all of the neurons are in a resting state. While there may be a small number of neurons that are in the spiking phase due to the spread in the distribution of values of the excitability parameter, η, these do not have any significant effect on the full order parameter of the system.
As we increase the coupling constant K, the system transitions to the asynchronously firing (AF) phase, in which the order parameter goes to a fixed point located near the center of the unit circle. In this phase, most of the individual neurons in the network are asynchronously firing, in a fashion similar to Fig. 1(c). This can be seen in the case of K = 6 in Fig. 3(c), which shows that almost all of the neurons are in a recurrent spiking state. Note that even though the neurons are spiking asynchronously, i.e., their firing times are independent of one another,37,39,40 the fixed point of the order parameter is not at the origin, R̄ = 0. This is because the angular velocity of an individual neuron is not constant along the circle; thus, in the average over the ensemble of neurons, a bias is present towards the direction for which the angular velocity of neurons is minimized. As discussed in Sec. II, this may occur at either θ = 0 or at θ = π, depending on how large the excitability parameter is for the neuron.
We now examine the transition from the PR phase to the AF phase. Microscopically, in the PR phase, almost all of the neurons are individually in a resting phase, whereas in the AF phase, almost all neurons are in the spiking state. To examine the behavior at an intermediate point, we look at the fixed point for the case of K = 3, as shown in Fig. 3(c). At this intermediate value of the coupling constant, a fraction of the neurons are in the spiking state. In particular, the nodes that begin to spike first are those that have larger in-degrees. This is demonstrated in Fig. 4, in which we examine the degree dependence of b(k) at the fixed point for K = 3. Since we are looking at a network with neutral assortativity (c = 0), Eq. (24) implies that the sum only depends on the out-degree through a common multiplicative factor. Thus, b(k) is plotted only as a function of kin. Analogously, for the fixed point of the dynamics on the full network, the range of degrees from kmin to kmax is divided uniformly into several intervals, and for each interval, we find a partial order parameter, calculated such that the average in Eq. (4) is only performed over those nodes whose in-degree lies within that interval, i.e.,

R̄(k̄in, t) = (1/|S|) Σ_{i∈S} e^{iθ_i(t)}, (26)
where S is the set of nodes having an in-degree within one of the intervals of the range of degrees, |S| is the number of nodes in the set, and k̄in is the average in-degree of nodes within that set.
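The degree-binned partial order parameter can be computed from a snapshot of the full network state as follows (our sketch; bin edges uniform in in-degree):

```python
import numpy as np

def partial_order_parameters(theta, k_in, n_bins=10):
    """Partial order parameters: the average of exp(i*theta) restricted to
    nodes whose in-degree falls in each of n_bins uniform degree intervals.
    Returns (mean in-degree, complex partial order parameter) per nonempty bin."""
    edges = np.linspace(k_in.min(), k_in.max() + 1, n_bins + 1)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (k_in >= lo) & (k_in < hi)
        if mask.any():
            out.append((k_in[mask].mean(), np.exp(1j * theta[mask]).mean()))
    return out
```

Plotting the magnitude of each bin's partial order parameter against its mean in-degree reproduces the kind of degree-resolved comparison shown in Fig. 4.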
In addition, we find that the transition from the PR phase to the AF phase occurs via a hysteretic process mediated by saddle node bifurcations. To illustrate this, we evolved the dynamics of the full network in a stepwise fashion by increasing the coupling constant K in small increments of 0.2 and allowing the system to relax to an equilibrium before the next increment (Fig. 5(a)). We also compare this with the analogous hysteresis curve observed for the evolution of the system dynamics on an Erdős-Rényi network having the same size and average degree as the highly skewed degree distribution being considered (Fig. 5(b)). While the hysteretic region begins at around the same value of the coupling constant, K, for both network topologies, we find that for the case of the Erdős-Rényi network, which has a sharply peaked degree distribution, the range in K that allows hysteresis () is significantly larger than the corresponding range for the network with the highly skewed degree distribution ().
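The stepwise sweep protocol itself is independent of the particular system. The toy sketch below (ours; a scalar bistable system dx/dt = K + x − x³ standing in for the network, chosen because it has a saddle-node-mediated hysteresis loop) shows how reusing the relaxed state as the next initial condition traces out distinct forward and backward branches:

```python
import numpy as np

def sweep(K_values, x0, dt=1e-2, relax_steps=5000):
    """Adiabatic sweep: for each coupling K, relax the toy bistable system
    dx/dt = K + x - x**3 to equilibrium by Euler stepping, then reuse the
    final state as the initial condition for the next K, mimicking the
    stepwise protocol used for the network."""
    x, branch = x0, []
    for K in K_values:
        for _ in range(relax_steps):
            x += dt * (K + x - x**3)
        branch.append(x)
    return np.array(branch)

K_grid = np.arange(-1.0, 1.0 + 1e-9, 0.05)
up = sweep(K_grid, x0=-1.0)               # forward (increasing K) branch
down = sweep(K_grid[::-1], x0=1.0)[::-1]  # backward (decreasing K) branch
```

Inside the bistable window the two branches disagree (the hysteresis loop); outside it they coincide, just as in the K sweeps of Fig. 5.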
To compare with the simulation of the dynamics on the full network, we also calculate the fixed points of the mean field equations, Eq. (17). While the fixed points cannot be readily determined analytically, we can efficiently compute them via a numerical calculation. Setting db(k, t)/dt = 0 for the fixed points, we find that the equilibrium satisfies,
where X and Y are given by
B. Limit cycles
As a representative example of limit cycles of the order parameter, we consider a network with neutral assortativity with excitability parameters η distributed as a Lorentzian with mean η0 = 10.75 and width Δ = 0.5 and with a coupling constant K = −9. In the SF phase, the order parameter goes to a limit cycle in the complex plane. In this phase, a majority of the neurons are synchronously in a spiking state. Plots of such limit cycles are shown in Fig. 6, in which we plot the trajectory of the order parameter in the complex plane (after removing transients) for a network with the highly skewed degree distribution given in Eq. (25) (blue solid curve), a corresponding Erdős-Rényi network having a Poissonian degree distribution (green dashed curve), and a regular network having a delta function degree distribution (i.e., all nodes sharing identical in- and out-degrees) (red dotted curve), each having the same average degree of 1090. As seen earlier in Eq. (23), a network with a delta function degree distribution has mean field dynamics identical to those of a fully connected network, and the corresponding limit cycle in Fig. 6 is identical to the limit cycle obtained at these parameters for the fully connected network by Luke et al.8 In comparison with the limit cycles observed for the regular network or the Erdős-Rényi network, the limit cycles in networks with highly skewed degree distributions are diminished in size, due to the large variation in nodal behavior as a function of degree. Nodes with smaller in-degrees were observed to predominantly be in the spiking phase, with high synchronization and a larger limit cycle for the partial order parameter, whereas nodes with larger in-degrees were in the resting phase.
Due to this differentiation of behavior with the degree, the averaged full order parameter exhibits a limit cycle that is somewhat reduced in size when compared with the results for a fully connected network by Luke et al.8 However, we see that the limit cycles for the Erdős-Rényi network are similar in shape and structure to the limit cycles obtained for the regular network, as would be expected in accordance with the discussion in Sec. IV, since the Poissonian degree distribution for the Erdős-Rényi network is sharply peaked about the average degree and hence cannot admit a large variation of behavior with the nodal degree. As the average degree, ⟨k⟩, increases, the red and green curves converge because the Poisson degree distribution appropriate for an Erdős-Rényi network approaches a delta function.
C. Effect of assortativity
We now consider the effect of assortativity on the limit cycle dynamics of the order parameter in the network. While limit cycle behavior exists in networks with neutral assortativity (c = 0), introduction of assortativity or disassortativity in the network can cause the limit cycle attractor to transform to a fixed point attractor (AF-like state) via a Hopf bifurcation. This is demonstrated in Fig. 7, in which we show that varying c away from zero to ±2.5 (corresponding to a Pearson assortativity coefficient of r ≈ ±0.198) is sufficient to cause the Hopf bifurcation and send the system to a fixed point attractor. The fixed points for the order parameters in these networks exhibit relatively large amounts of finite-N-induced noise, as seen from the size of the clouds surrounding the fixed point position calculated from the reduced system.
VI. CONCLUSIONS

Using a mean field approximation, in conjunction with the Ott-Antonsen ansatz, we obtained a reduced system of equations that successfully models the macroscopic order parameter dynamics of a large network of theta neurons. This reduced system of equations allows us to examine the effects of varying the network parameters and the network topology (in terms of degree distributions and degree correlations) in a computationally efficient fashion. The order parameter of the network is used for describing the macroscopic behavior of the network of theta neurons, whose attractors can be of various types. In particular, we find resting states, asynchronously firing states, and synchronously firing states, the first two of which appear as a fixed point for the order parameter (Fig. 3), while the third appears as a limit cycle for the order parameter (Fig. 6). We also used the reduced system of equations to observe the effect of varying the assortativity in the system and demonstrated that a synchronously firing phase was only present for networks with neutral or small assortativity, and the addition of moderate amounts of assortativity or disassortativity to the network causes the system to go to an asynchronously firing state instead (Fig. 7). Further, for networks with highly skewed degree distributions, we find that nodes with different values of their degrees admit a large variation of behavior (Fig. 4), a phenomenon not possible in networks with all-to-all connectivity. In all cases, close agreement was observed between the order parameter dynamics as predicted by the reduced system of equations, Eq. (17), and as calculated by evolution of the full system of equations, Eq. (2).
This work was supported by the Army Research Office under Grant No. W911NF-12-1-0101 and by the National Science Foundation under Grant No. PHY-1461089.
Some authors, such as Restrepo and Ott,17 define the order parameter differently so as to be weighted by the out-degree at each node, i.e., R(t) = Σ_j k_out,j e^{iθ_j} / Σ_j k_out,j.
For another, often useful, definition of a coefficient quantitatively characterizing the assortativity or disassortativity of a network see Ref. 38.
This definition of asynchronous spiking is consistent with remarks by other authors,39,40 wherein asynchronous states have been defined as states in which at each neuron the term coupling it to the other neurons in the network is independent of time, as is observed in the cases of fixed points.