There are indications that, in order to optimize neural computation, neural networks may operate at criticality. Previous approaches have used distinct fingerprints of criticality, leaving open the question of whether the different notions necessarily reflect different aspects of one and the same instance of criticality, or whether they could refer to distinct instances of criticality. In this work, we choose avalanche criticality and edge-of-chaos criticality and demonstrate for a recurrent spiking neural network that avalanche criticality does not necessarily entrain dynamical edge-of-chaos criticality. This suggests that the different fingerprints may pertain to distinct phenomena.

In biological neural networks, scale-free avalanches of neuronal firing events have suggested that such networks might preferably operate at criticality, in particular since theoretical studies of artificial neural networks and of cellular automata have highlighted potential computational benefits of such a state. These studies adhered to notions of either edge-of-chaos criticality or avalanche criticality. Here, using a recurrent neural network of neurons more realistic than those considered previously, we scrutinize whether these two manifestations of criticality necessarily co-occur. Based on a realistic neural network paradigm, we show that a positive largest Lyapunov exponent—indicating chaotic dynamics of the network—is conserved as we tune the network from subcritical to critical and to supercritical avalanche behavior. This demonstrates that avalanche criticality does not necessarily co-occur with edge-of-chaos criticality.

In the endeavor of understanding the functioning of the brain, the hypothesis has emerged that biological neural networks might operate at criticality.1–3 The promise of this hypothesis is that at the critical point, the particular details of the system's individual elements and their interaction laws cease to have importance.4 In this case, the phase transition itself dominates the behavior of the system, so that many astounding anatomical and biophysical details of neural circuits would surrender to some very generic network properties, which would permit describing the fundamentals of the ongoing information processing and computation—at least for this case—in a simple way. Several computational advantages of criticality that render such a state particularly attractive have also been demonstrated, such as optimized information transmission and capacity, increased flexibility of responses,5,6 and more.

A fingerprint of criticality is power-law distributions of local descriptors evaluated across the ensemble. Such distributions were found in the statistics of spontaneous avalanches in cortical tissue recorded with multi-electrode arrays1,7,8 and, more recently, in the auditory system.9 In addition to this evidence, distinct potential mechanisms for the emergence of power-law-like distributions have been suggested as well,10,11 and the emergence of power-law avalanche distributions has even been questioned in some experiments.12 As a result, the avalanche criticality hypothesis1,13 has remained controversial.

In recurrent neural networks, edge-of-chaos criticality14 has also been studied, mostly in the context of reservoir computing.15,16 For best task performance (in the edge-of-chaos sense of “computation”), a network requires properties somewhat analogous to the ones ascribed to avalanche criticality: flexibility to represent spatiotemporally diverse inputs, while essentially preserving distance relationships (i.e., similar inputs should trigger similar responses). This is indeed the case at the system's transition from stable to chaotic dynamics, which is characterized by a vanishing largest Lyapunov exponent. Unfortunately, in the sense of computation as a reduction of the complexity of prediction,17 such a “reservoir” is not actually computing; it rather serves as a high-dimensional representation space of spatiotemporal input patterns, from which the readout neurons can sample and perform the computation. This sort of “computation” as “the ability to transmit, store and modify information”14 (which has been claimed to be optimal at the edge of chaos) has a different meaning from computation seen as the step of simplifying, i.e., destroying, information.17

Despite the links that have previously been drawn between edge-of-chaos and avalanche criticality,18,19 their precise relationship is still not settled. While we are not aware of contradicting evidence, the few studies that have exhibited simultaneous occurrence of both phase transitions6,20 were based on rather simple network models, with nodes having no intrinsic dynamics. In our contribution, we examine whether in a recurrent spiking neural network model with realistic, non-trivial node dynamics, avalanche criticality in fact needs to co-occur with edge-of-chaos criticality. We will show that this is not the case.

Our recurrent neural network model is based on “Rulkov neurons”21 with dynamical synapses as the nodes on a directed graph (see Fig. 1). Its architecture reflects the general features ascribed to cortical networks: it consists of both excitatory (80%) and inhibitory (20%) neurons, its connectivity is sparse (connection probability p_c ≈ 4%), and inhibitory synapses are three times stronger than the excitatory ones. The number of neurons in the network is set to N = 128, with N_ex = 102 ≈ 0.8N excitatory neurons and N_inh = 26 ≈ 0.2N inhibitory neurons (rounding to the nearest integer). This network size is chosen as a trade-off between obtaining enough statistics for the avalanche size distributions and a reasonable expense for calculating network Lyapunov exponents. To reflect the origin and target of the chemical signaling between neurons, the edges between our network nodes are directed. Each neuron i in the network has a number of “presynaptic” neurons j that impinge on it, and each neuron is “postsynaptic” from the perspective of its “presynaptic” neurons; the full relationship is defined by the network's weight matrix w_ij, where i, j = 1, …, N, see Fig. 1(b). Specifically, the number of presynaptic excitatory and inhibitory nodes chosen in this work is set to N_ex^pre = 4 ≈ N_ex p_c and N_inh^pre = 1 ≈ N_inh p_c (rounded to the nearest integer). For every neuron in the network, N_ex^pre and N_inh^pre nodes are selected at random as presynaptic neighbors from the pools of excitatory and inhibitory neurons, respectively. By setting the diagonal elements of w to zero, self-connections are eliminated. By construction, network nodes have an in-degree k_in of 5 or 4 (the latter case due to the elimination of self-connections). Out-degrees vary more substantially, owing to the described selection process. In this simple and controllable way, heterogeneity is introduced in the network, where neurons with higher out-degree dominate the network activity. The obtained topology could, in a sense, be seen as a simple controllable approximation to in vitro dissociated neural cultures22 that so far have provided the strongest evidence for critical avalanches of spike events.
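For concreteness, the following minimal sketch (in Python, with function and variable names of our own choosing) illustrates how such a weight matrix could be assembled; it is an illustration of the construction described above, not the authors' actual simulation code.

```python
import numpy as np

def build_weight_matrix(N=128, f_ex=0.8, N_ex_pre=4, N_inh_pre=1,
                        w_ex=0.6, w_inh=1.8, seed=0):
    """Directed random connectivity: each neuron i draws a fixed number of
    excitatory and inhibitory presynaptic partners j (entry w[i, j])."""
    rng = np.random.default_rng(seed)
    N_ex = round(f_ex * N)  # 102 excitatory neurons for N = 128
    w = np.zeros((N, N))
    for i in range(N):
        pre_ex = rng.choice(N_ex, size=N_ex_pre, replace=False)
        pre_inh = rng.choice(np.arange(N_ex, N), size=N_inh_pre, replace=False)
        w[i, pre_ex] = w_ex
        # Inhibitory weights are three times stronger; their sign enters
        # later through the inhibitory reversal potential in Eq. (3).
        w[i, pre_inh] = w_inh
    np.fill_diagonal(w, 0.0)  # no self-connections: in-degree becomes 4 or 5
    return w, N_ex
```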

FIG. 1.

Network architecture: (a) Recurrent network with excitatory neurons (red circles), inhibitory neurons (blue circles), and one intrinsically spiking excitatory neuron (yellow circle). Each neuron receives external excitatory input from a Poisson spike train. (b) Weight matrix w of the network, with excitatory connections (red) and inhibitory connections (blue). Grey lines are for the last excitatory neuron (index j = 102). (c) Iterated Rulkov map for fixed y^{(i)} = −2.74 (particular choice for display reasons). (d) Full trajectory of the intrinsically spiking Rulkov neuron (solid gray line) in the absence of external input (orange line: x_n^{(i)} nullcline y_n^{(i)} = x_n^{(i)} − ψ/(1 − x_n^{(i)}); dashed orange line: y_n^{(i)} nullcline x_n^{(i)} = −1 + σ). Increasing σ from σ1 = 0.09 to σ2 = 0.103 shifts the y_n^{(i)} nullcline vertically (arrow) and changes the map's central fixed point from stable (filled circle) to unstable (empty circle). (e) Map dynamics for a typical neuron (top, σ = σ1) and (f) for the intrinsically spiking neuron embedded in the network (bottom, σ = σ2). Red circles denote spike events (i.e., ξ_n^{(i)} = 1). Time units are Rulkov iterations.


To model the dynamics of a neuron of the network, labeled by index i = 1, …, 128, we use Rulkov's two-dimensional iterative map,21,23 where we denote the iteration step by index n:

\[
x_{n+1}^{(i)} =
\begin{cases}
\dfrac{\psi}{1 - x_n^{(i)}} + u_n^{(i)}, & x_n^{(i)} \le 0,\\[1ex]
\psi + u_n^{(i)}, & 0 < x_n^{(i)} < \psi + u_n^{(i)} \ \text{and}\ x_{n-1}^{(i)} \le 0,\\[1ex]
-1, & x_n^{(i)} \ge \psi + u_n^{(i)} \ \text{or}\ x_{n-1}^{(i)} > 0,
\end{cases}
\]
(1a)
\[
y_{n+1}^{(i)} = y_n^{(i)} - \mu\,(1 + x_n^{(i)}) + \mu\sigma + \mu I_n^{(i)},
\]
(1b)

where u_n^{(i)} = y_n^{(i)} + β I_n^{(i)}. The variable x_n^{(i)} models the neuron's membrane voltage, whereas y_n^{(i)} describes a regulatory subsystem able to turn the firing on and off (Fig. 1(c)). I_n^{(i)} contains the postsynaptic input to neuron i (see below). Parameter σ controls the state of the map (Fig. 1(d)), with σ = 2 − √(ψ/(1 − μ)) at the bifurcation point. The parameter values for excitatory and inhibitory neurons are identical: ψ = 3.6, μ = 0.001, σ = 0.09, β = 0.133. For these values, at σ = 0.101684, Rulkov's map undergoes a subcritical Neimark–Sacker bifurcation from silent to spiking behavior, corresponding to Class II neuron behavior.24 Rulkov neurons can reproduce essentially all experimentally observed spike patterns,23 and even finer neurobiological details, such as phase response curves of biological neurons.25 By this feature, our network model distinguishes itself substantially from previous efforts of linking avalanche and edge-of-chaos criticality based on probabilistic binary units6 or analog rate neurons.20
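As a direct transcription of Eqs. (1a) and (1b), the following single-neuron update sketch (our own naming) may be helpful; the synaptic input I is treated as an external argument here.

```python
PSI, MU, SIGMA, BETA = 3.6, 0.001, 0.09, 0.133  # parameter values from the text

def rulkov_step(x, x_prev, y, I):
    """One iteration of Eqs. (1a) and (1b) for a single Rulkov neuron.
    x, x_prev, y: map variables at steps n and n-1; I: synaptic input I_n."""
    u = y + BETA * I
    if x <= 0.0:
        x_new = PSI / (1.0 - x) + u          # slow depolarization branch
    elif x < PSI + u and x_prev <= 0.0:
        x_new = PSI + u                       # spike peak (xi_{n+1} = 1)
    else:
        x_new = -1.0                          # reset after the spike
    y_new = y - MU * (1.0 + x) + MU * SIGMA + MU * I
    return x_new, y_new
```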

Rulkov neurons interact by means of synapses that are attached at the message-receiving side of the neuron. Synapses receive input from other neurons in the form of spikes. A spike variable ξ_n^{(i)} carries the value 1 if at iteration n, neuron i has generated a spike, and the value 0 otherwise:

\[
\xi_{n+1}^{(i)} =
\begin{cases}
1, & 0 < x_n^{(i)} < \psi + u_n^{(i)} \ \text{and}\ x_{n-1}^{(i)} \le 0,\\
0, & \text{otherwise},
\end{cases}
\]
(2)

i.e., neuron i is firing at iteration n if x_n^{(i)} attains the maximum value (red horizontal line in Fig. 1(c)). Synapses have their own dynamics, modeled by an exponential decay and a step-like increase upon presynaptic spike events as

\[
I_{n+1}^{(i)} = \eta\, I_n^{(i)} + W \Biggl( \sum_{j=1}^{N_{ex}} w_{ij}\,\bigl(x_{rp}^{ex} - x_n^{(i)}\bigr)\,\xi_n^{(j)} + \sum_{j=N_{ex}+1}^{N} w_{ij}\,\bigl(x_{rp}^{inh} - x_n^{(i)}\bigr)\,\xi_n^{(j)} + w_{ext}\,\bigl(x_{rp}^{ex} - x_n^{(i)}\bigr)\,\xi_n^{ext(i)} \Biggr).
\]
(3)

Here, η controls the decay rate of the synaptic current, and x_rp^ex and x_rp^inh are the reversal potentials for excitatory and inhibitory synapses, respectively. When there is no connection between the neurons i and j (w_ij = 0, where i is the index of the postsynaptic neuron and j is the index of the presynaptic neuron, see Fig. 1(b)), or if there has not been a presynaptic spike event (ξ_n^{(j)} = 0), the corresponding term in the sum vanishes. The decay parameter is chosen as η = 0.75, the reversal potentials as x_rp^ex = 0 and x_rp^inh = −1.1, and the external input weight as w_ext = 0.6. Internal excitatory connections have a weight of w_ij = 0.6 as well, whereas inhibitory connections have a tripled weight of w_ij = 1.8. Acting in concert with σ, I^{(i)} can push intrinsically silent Rulkov neurons to emit spikes (Figs. 1(d) and 1(e)). W is a connectivity-scaling factor; increasing it strengthens the coupling among the neurons without otherwise changing the architecture. This is in contrast to processes like Hebbian learning or mechanisms of synaptic plasticity used in other approaches. By changing W, diverse activity states can be accessed.
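A vectorized sketch of the synaptic update of Eq. (3) could look as follows (function and argument names are our own):

```python
import numpy as np

ETA, X_RP_EX, X_RP_INH, W_EXT = 0.75, 0.0, -1.1, 0.6

def synaptic_update(I, x, xi, xi_ext, w, N_ex, W):
    """Eq. (3): exponential decay of I plus step-like increments from
    presynaptic spikes xi (length N) and external spikes xi_ext (length N).
    The driving force (x_rp - x_i) depends only on the postsynaptic neuron,
    so it factors out of the sums over presynaptic partners j."""
    drive_ex = (X_RP_EX - x) * (w[:, :N_ex] @ xi[:N_ex])
    drive_inh = (X_RP_INH - x) * (w[:, N_ex:] @ xi[N_ex:])
    drive_ext = W_EXT * (X_RP_EX - x) * xi_ext
    return ETA * I + W * (drive_ex + drive_inh + drive_ext)
```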

Being composed, so far, of intrinsically non-spiking neurons, our network requires internal or external excitatory sources to become excited. In our approach, we implemented both. The internal one is a localized source of activity representing a so-called “nucleation site”26 or a “leader neuron.”27 It is implemented by putting one of the network's excitatory neurons above the firing threshold (raising σ from 0.09 to 0.103), which causes this neuron to fire in a subtly chaotic manner. This firing behavior is then additionally modified by the recurrent connections from the network (cf. Figs. 1(a), 1(d), and 1(f)). The external source of activity captures the influence of noise on neuronal activity (such as spontaneous neurotransmitter vesicle release or quickly changing external stimulation). This aspect is modeled by an individual excitatory Poisson spike train input to each neuron, represented by an external spike variable ξ_n^{ext(i)} that takes the value 1 when the i-th neuron receives an external spike and 0 otherwise:

\[
\xi_{n+1}^{ext(i)} =
\begin{cases}
1, & p < p_{ext},\\
0, & p \ge p_{ext},
\end{cases}
\]
(4)

where p is a random number drawn at each time step from a uniform distribution in the open interval (0, 1). Choosing p_ext = 6 × 10^{−4} renders the external input temporally sparse.

A single simulation covered 5 × 10^5 time steps, of which the first 5000 steps were discarded. For each value of W, we picked 50 simulations that exhibited a requested level of activity (at the critical point, we occasionally increased this number). The precise requirement was that the average inter-event interval ⟨IEI⟩ (i.e., the average time between two subsequent spikes in the network) of a simulation run should fall into the interval μ_IEI ± σ_IEI/1.5, where μ_IEI and σ_IEI are the mean and standard deviation of the IEI distribution across all network simulations at a particular W. Such sampling ensures that results from typical network realizations are considered, and it permits pooling the results from different simulations.

Following the approach taken in experimental investigations,1 we chose a time binning of width Δt = ⟨IEI⟩ and defined neuronal avalanches as maximal runs of nonempty adjacent bins. Experimental investigations originally focused on avalanches of spikes in population activity inferred from local field potential recordings,1,5 but later, individual neuron action potentials were considered as well.7,8,28–30 Avalanche size S was measured as the total number of spikes within the avalanche; avalanche lifetime T was measured by the number of time bins an avalanche spans. Our exponents characterizing the avalanche size and lifetime distributions are maximum likelihood estimates, and the goodness of fit was evaluated following Ref. 31 (see the Appendix).
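The binning and avalanche definition just described can be made concrete by a short sketch (our own code and naming; spike times are assumed to be pooled over the whole network, in units of Rulkov iterations):

```python
import numpy as np

def extract_avalanches(spike_times, dt):
    """Bin pooled spike times with width dt (= mean IEI) and read off
    avalanches as maximal runs of nonempty adjacent bins.
    Returns avalanche sizes S (spike counts) and lifetimes T (bin counts)."""
    spike_times = np.asarray(spike_times)
    counts = np.bincount((spike_times / dt).astype(int))
    sizes, lifetimes = [], []
    s = t = 0
    for c in counts:
        if c > 0:                      # avalanche continues
            s += c
            t += 1
        elif t > 0:                    # empty bin terminates the avalanche
            sizes.append(s)
            lifetimes.append(t)
            s = t = 0
    if t > 0:                          # avalanche still running at end of data
        sizes.append(s)
        lifetimes.append(t)
    return np.array(sizes), np.array(lifetimes)
```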

A weight scaling W ∈ (0.13, 0.139) exhibited subcritical, and W ∈ (0.139, 0.15) supercritical, avalanche behavior (Fig. 2). The increase of W led to an overall increase in network activity (Fig. 2(a)), resulting in ⟨IEI⟩ values of 110, 48, and 8 Rulkov time steps (rounded) for the subcritical, critical, and supercritical networks, respectively. The avalanche size distribution of the critical network follows a power law, P(S) ∝ S^{−α}, with exponent α ≈ 2.45 (Fig. 2(b)). Because of the network's finite size of N = 128 elements, a noisy cut-off after S ≈ 100 must be expected.

FIG. 2.

Change of network characteristics upon increasing the weight scaling parameter W from 0.13, to 0.139, to 0.15, i.e., from subcritical (left), to critical (middle), to supercritical (right column) avalanche behavior (distinct weight configurations underlie each plot). (a) Raster plots of spiking activity, where the first row represents the intrinsically spiking neuron. (b) Distributions of avalanche size S. (c) Distributions of avalanche lifetime T, using Δt = ⟨IEI⟩ as units, to improve comparability of results. The critical network exhibits a power law size distribution for S ∈ {7, …, 100}, with exponent α = 2.45 (p-value = 0.107, see text), whereas the subcritical size distribution follows an exponential decay with decay constant ε = 0.21 (p-value = 0.18). Lifetime distributions (effects not statistically secured): For the critical network, a power law with exponent τ ≈ 3 is estimated for T ∈ {7, …, 60}, whereas for the subcritical network, an exponential decay with decay constant ε = 0.40 seems to be the best fit. Red dashed lines indicate maximum likelihood fits.


In the subcritical case, the avalanches are generally small and their size distribution decays exponentially, while in the supercritical case, the increased number of large avalanches results in a characteristic hump toward the end of the distribution.32 A similar metamorphosis of the distribution shape is observed for avalanche lifetimes. At critical avalanche behavior, the lifetime distribution also follows a power law (exponent τ ≈ 3.0, Fig. 2(c)). The fit is, however, somewhat less convincing than the one obtained for the size distribution, which is a commonly observed phenomenon in electrophysiological experiments.1,28,29

Power law distributions can, in principle, be caused by several mechanisms. To confirm that the network is at (approximate) criticality, we performed the following tests. For genuine scale-free behavior, the choice of the temporal bin size Δ̃t should not affect the avalanche size distribution (cf. Ref. 33). Figure 3(a) shows that in the critical case, choosing different binnings Δ̃t = mΔt, m ∈ {0.25, 0.5, 1.0, 1.5, 2.0}, only mildly affects the distribution, whereas the effects are markedly stronger in the subcritical and supercritical cases.

FIG. 3.

Criticality tests. (a) Avalanche size distributions based on time bins of size Δ̃t = m·⟨IEI⟩, m ∈ {0.25, 0.5, 1.0, 1.5, 2.0}, for subcritical (left), critical (middle), and supercritical states (right column). (b) Mean avalanche size as a function of lifetime for the critical (top) and supercritical (bottom) states. The red dashed line exhibits a power law relationship ⟨S⟩(T) ∝ T^γ. (c) Avalanche shapes of the critical state show a noisy collapse (T ∈ {25, 30, …, 50}, from darker to lighter color), expressing a high degree of self-similarity, but also the particular role of the intrinsically spiking neuron (see the Appendix for details). (d) For the supercritical state, rescaling does not lead to a collapse (T ∈ {25, 30, …, 60}).


We moreover checked whether all avalanches of the critical state would collapse onto one characteristic shape upon a corresponding rescaling of time.30,34 To this end, the shape V(T, t) of an avalanche of lifetime T is calculated; it expresses the temporal evolution (time variable t) of the avalanche, measured by the number of spikes emitted in the temporal bin around time t. For each T, the average avalanche shape ⟨V⟩(T, t/T) is calculated. From the scaling ansatz between the mean size of avalanches ⟨S⟩(T) and their lifetime T, ⟨S⟩(T) ∝ T^γ, the critical exponent γ is obtained. From this, the universal scaling function representing the characteristic shape of all avalanches emerges as V(t/T) = T^{1−γ}⟨V⟩(T, t/T). For our “critical” network, ⟨S⟩(T) indeed follows a power law, with exponent γ ≈ 1.37 (Fig. 3(b)). In the case of our supercritical network, a smaller range of the function also follows a power law, which permits the comparison between the self-similarities of the avalanche shapes of the critical and the supercritical state. For the critical network, we observe a noisy collapse of the avalanche shapes of duration T ≥ 25 (Fig. 3(c)), which is not the case for the supercritical network (Fig. 3(d)). Because of the strong influence of the intrinsically spiking neuron (for further evidence, see the Appendix), avalanche shapes of short lifetimes (T < 25) fail to exhibit a clean collapse. Generally, universal scaling at the shortest scales should not be expected, as the behavior of individual system parts can be stronger than the collective behavior at these scales.34 As a final test, we examined whether the crackling noise relationship between critical exponents, (τ − 1)/(α − 1) = γ, holds30,34 for our critical network. The critical exponents of the avalanche lifetime distribution (τ = 3.0), the avalanche size distribution (α = 2.45), and the mean avalanche size as a function of lifetime (γ = 1.37) fulfill the required relation remarkably well: (3.0 − 1)/(2.45 − 1) ≈ 1.38 ≈ γ. Taken together, power law distributions, self-similarity of avalanche shapes, and an excellent fulfillment of the fundamental relation between critical exponents strongly suggest that our “critical” network is indeed from the close vicinity of a critical network state.
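The rescaling step of the shape collapse can be summarized in a few lines (a sketch under our own naming; mean_shapes is assumed to map each lifetime T to the array of mean spike counts per bin):

```python
import numpy as np

def collapse_shapes(mean_shapes, gamma):
    """Rescale mean avalanche shapes <V>(T, t) onto the candidate universal
    curve V(t/T) = T^(1 - gamma) <V>(T, t/T); at criticality, the rescaled
    curves should superimpose for all lifetimes T."""
    collapsed = {}
    for T, shape in mean_shapes.items():
        shape = np.asarray(shape, dtype=float)
        t_over_T = (np.arange(len(shape)) + 0.5) / T   # rescaled bin centers
        collapsed[T] = (t_over_T, T ** (1.0 - gamma) * shape)
    return collapsed
```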

To determine whether avalanche criticality is confined to the edge of chaos, we calculated the Lyapunov spectrum for the subcritical, the critical, and the supercritical cases, using the Jacobian matrix evaluated at points along the trajectory of the network's state vector35 (see the Appendix for further details). For the two notions of criticality to co-occur, we would expect the largest Lyapunov exponent λ1 to be negative, vanishing, and positive for the three cases, respectively. λ1, however, is positive in all three cases (Fig. 4). Across the full parameter neighborhood of avalanche criticality considered (W ∈ (0.13, 0.15)), its numerical value is essentially unchanged (λ1 = 0.0089 for the subcritical and critical networks and λ1 = 0.0082 for the supercritical network). If one Rulkov iteration is identified with a duration of 0.5 ms, which is sometimes done to facilitate biological interpretation,23 values around 17 s^{−1} would be obtained.

FIG. 4.

Lyapunov spectra λ_d of the subcritical (left, W = 0.13), critical (middle, W = 0.139), and supercritical (right, W = 0.15) networks, showing the first d = 1, …, 32 Lyapunov exponents. Positive values of λ_d are shown in red, and negative ones in blue. Insets: full Lyapunov spectra. We note a positive leading exponent and 3 to 4 further positive exponents for the subcritical and critical cases. In the supercritical case, a much larger spread of positive exponents emerges (range of one standard deviation indicated by shading).


Lyapunov spectra can, moreover, provide deeper insight into the observed phenomenon. Upon increasing the synaptic strength, the total number of positive Lyapunov exponents increases as well. Every positive Lyapunov exponent amplifies perturbations of the microstate to an observable macrostate change. The sum of all positive λ_d gives the total average rate of this amplification, H = Σ_{λ_d>0} λ_d. This sum is also known as the upper bound of the Kolmogorov–Sinai entropy36 and can be interpreted as an entropy production rate. H increases with stronger synaptic coupling, from 0.014 ± 0.003 (mean ± standard deviation) for the subcritical case, via 0.023 ± 0.006 for the critical case, to 0.044 ± 0.027 for the supercritical case. Therefore, although the supercritical case has a slightly smaller largest Lyapunov exponent, it loses information about a past state at a faster rate.
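Given a computed spectrum, the entropy production rate used above reduces to a one-line aggregation (our own naming):

```python
import numpy as np

def entropy_production(lyap_spectrum):
    """Upper bound of the Kolmogorov-Sinai entropy: sum of positive exponents,
    H = sum_{lambda_d > 0} lambda_d, per Rulkov iteration."""
    lam = np.asarray(lyap_spectrum)
    return lam[lam > 0].sum()
```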

The majority of studies on dynamical stability in neural networks have used the perturbation method; i.e., every simulation run of the network activity was repeated after adding a random perturbation δ0 to the state vector, so that the largest Lyapunov exponent could be assessed from the evolution of the distance between the network's unperturbed and perturbed trajectories.6,15,16,37 This approach yields only an estimate of λ1 and does not provide information about the rest of the Lyapunov exponents. Moreover, such perturbations may be far away from the limit δ0 → 0 inherent to the definition of Lyapunov exponents, so it cannot be excluded that the largest Lyapunov exponent obtained in this way depends on the size of the perturbation. Then, instead of a positive value for the first Lyapunov exponent, a negative value could emerge.38 The method employed here does not suffer from such potential shortcomings.

Chaotic dynamics could be a collective effect of the network interactions, or it could arise simply because the nodes themselves have chaotic dynamics. To scrutinize this, we measured the largest Lyapunov exponent of the intrinsically spiking neuron in the absence of network input and found it to be λ1 ≈ 0.01. In the presence of external input, the neuron is occasionally silenced (Fig. 1(f)). This behavior is well known from Class II neurons24 in the vicinity of an Andronov–Hopf bifurcation.39 Because the intrinsically spiking Rulkov neuron used in our simulations is close to the Neimark–Sacker bifurcation, some perturbations are able to push the neuron's state variable close to the unstable fixed point. During the time it takes to escape from the fixed point, the neuron does not fire. As a result of this occasional silencing, the long-time average of the neuron's largest Lyapunov exponent drops, embedded in the network, to λ1 ≈ 0.009, in close agreement with the value of λ1 found for our network. This suggests that the largest Lyapunov exponent of the network essentially captures the dynamics of the intrinsically spiking neuron. In the subcritical and critical cases, we generally find 3 to 4 additional positive Lyapunov exponents, which is close to the number of neurons that receive direct input from the intrinsically spiking neuron. Therefore, the source of chaos in our networks may originate from this single neuron's dynamics; an increase of the coupling transmits its behavior more efficiently into the network.

In the analysis of local field potential avalanches in Ref. 1 and, more recently, also for cochlear activation networks,9 exponents α ≈ 1.5 were measured. Mostly, experimental settings have yielded avalanche size distribution exponents α ∈ (1.5, 2.1).7,8 Against this background, the value of α = 2.45 exhibited by our biologically plausible network may seem high; but similar values (α = 2.5) were observed in simulations of bursting recurrent networks (for “background” avalanches26 that have been linked to critical percolation on a Cayley tree-like network, yielding the same value40). Even though strong qualitative changes of the network state were enforced, our networks exhibited chaotic dynamics throughout. This demonstrates that avalanche criticality does not necessarily co-occur with edge-of-chaos criticality. Rather, it suggests that in neural networks with non-trivial node dynamics, two separate phase transitions may occur. The high variability of the exponent α may express the different network and dynamical conditions under which avalanche criticality is possible, and it may point to a link between avalanche and edge-of-chaos criticality, albeit of a much weaker form than is usually assumed (generally, we expect that higher values of this exponent will be related to stronger computational performance).

Our findings suggest that a full analysis of artificial and simulated neuronal networks should include, in addition to the analysis of avalanche behavior, an analysis of the dynamical state: Results regarding avalanche criticality obtained for a non-chaotic network might not be relevant for a chaotic network with unpredictable patterns of activity. In addition, our study highlights a “paradox” that may be of importance for understanding biological network behavior: Upon an increase of the synaptic coupling, chaos may intensify in the sense of a larger entropy production rate, while losing coherence, as indicated by a decrease of the largest Lyapunov exponent. As the overall coupling W is increased beyond avalanche criticality in our network, a pronounced maximum of the entropy production emerges at a close, but clearly distinguishable, distance from the critical point. For which class of networks this observation holds more generally will be an investigation of interest in its own right.

This work was supported by the Swiss National Science Foundation Grant (No. 200021 153542/1), an internal grant of ETHZ (ETH-37 152), and a Swiss-Korea collaboration grant (IZKS2_162190).

1. Distribution parameter estimation

The theoretical fits to the avalanche size and lifetime distributions were found following the guidelines in Ref. 31. We assume that the observations o (avalanche size or lifetime) were sampled independently from a distribution p(o|α) parametrized by α. The likelihood of the parameter α is given by the probability of the observations o, given α:

\[
L(\alpha \mid o) = \prod_{m=1}^{M} p(o_m \mid \alpha),
\]
(A1)

where M is the number of samples. In practice, we used the logarithm of the likelihood, ℓ(α|o) = ln L(α|o), which allows us to replace the product with a sum. The log-likelihood has its maximum at the same α as the likelihood, due to the monotonicity of the logarithm. Thus, the maximum likelihood estimator of the parameter α is

\[
\hat{\alpha} = \operatorname*{argmax}_{\alpha}\, \ell(\alpha \mid o) = \operatorname*{argmax}_{\alpha} \sum_{m=1}^{M} \ln p(o_m \mid \alpha).
\]
(A2)

In the case of a discrete, truncated power-law distribution of o with scaling exponent α within the bounds o_min = a and o_max = b, the probability of o_m is

\[
p(o_m \mid \alpha) = \frac{o_m^{-\alpha}}{\sum_{l=a}^{b} l^{-\alpha}},
\]
(A3)

which gives the log-likelihood that needs to be maximized as

\[
\ell(\alpha \mid o) = -\alpha \sum_{m=1}^{M} \ln o_m - M \ln \sum_{l=a}^{b} l^{-\alpha}.
\]
(A4)

Similarly, we can derive the log-likelihood of the exponential decay constant, ε, of a discrete, truncated exponential distribution of o as

\[
\ell(\varepsilon \mid o) = -\varepsilon \sum_{m=1}^{M} o_m - M \ln \sum_{l=a}^{b} e^{-\varepsilon l}.
\]
(A5)

To determine the range of our fits, we fitted each distribution over a range [a, b], where b was a conveniently chosen cutoff, and followed the procedure outlined in Ref. 31. The maximum likelihood fit was used to generate 1000 surrogate datasets. The surrogates were then fit, in turn, using maximum likelihood, and for each fit, the Kolmogorov–Smirnov distance was calculated. As a measure of the plausibility of a fit for a given distribution type, p-values were calculated as the fraction of surrogate datasets with a larger Kolmogorov–Smirnov distance (a worse fit) than the fit to the corresponding experimental data. The value of a was chosen as the lowest value for which the p-value exceeded 0.05.
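The following sketch condenses this fitting and goodness-of-fit procedure (our own code; for simplicity, the log-likelihood (A4) is maximized on a grid rather than analytically, and the data are assumed to be already restricted to [a, b]):

```python
import numpy as np

ALPHAS = np.linspace(1.05, 4.0, 600)  # search grid for the exponent

def powerlaw_mle(o, a, b):
    """Maximize the truncated discrete power-law log-likelihood, Eq. (A4)."""
    l = np.arange(a, b + 1)
    ll = [-al * np.log(o).sum() - o.size * np.log((l ** -al).sum())
          for al in ALPHAS]
    return ALPHAS[int(np.argmax(ll))]

def ks_distance(o, a, b, alpha):
    """Kolmogorov-Smirnov distance between the data and the model CDF."""
    l = np.arange(a, b + 1)
    p = l ** -alpha / (l ** -alpha).sum()
    model_cdf = np.cumsum(p)
    empirical_cdf = np.array([(o <= v).mean() for v in l])
    return np.abs(empirical_cdf - model_cdf).max()

def gof_pvalue(o, a, b, n_surrogates=1000, seed=0):
    """Fraction of model surrogates that fit worse than the data (Ref. 31)."""
    rng = np.random.default_rng(seed)
    alpha_hat = powerlaw_mle(o, a, b)
    d_data = ks_distance(o, a, b, alpha_hat)
    l = np.arange(a, b + 1)
    p = l ** -alpha_hat / (l ** -alpha_hat).sum()
    worse = 0
    for _ in range(n_surrogates):
        surrogate = rng.choice(l, size=o.size, p=p)
        worse += ks_distance(surrogate, a, b,
                             powerlaw_mle(surrogate, a, b)) > d_data
    return worse / n_surrogates
```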

2. Avalanche shapes

The obtained individual avalanche shapes V(T, t) are highly variable. To improve the statistics of the mean avalanche shape ⟨V⟩(T, t), avalanches of lifetimes T ± 2 were also included in the calculation (Fig. 5). The avalanche statistics for each lifetime T thus covered 100 to 7500 samples, with the smaller sample numbers corresponding to the larger avalanches.

FIG. 5.

(a) Individual avalanches exhibit a highly variable shape: examples of the temporal profiles (spikes vs time) of avalanches with a duration T = 30 ± 2. (b) Mean shape of an avalanche of T = 30, calculated from 789 samples. (c) Average avalanche shapes for T = 25, 30, 35, 40, 45, 50.


In the case of the critical network, the period of the intrinsically spiking neuron is roughly 240 Rulkov time steps (jittering between 235 and 247 time steps, with the mode of the distribution at 237), which is equivalent to about 5 temporal bins. The peaks in the average avalanche shapes are at t = 3, 8, 13, 18, 23 and thus are each separated by one mean interspike interval of the intrinsically firing neuron. For larger t, the peaks are less prominent and their spacing varies more, so that this relationship is no longer as apparent.

3. Calculation of Lyapunov spectra

The largest Lyapunov exponent λ1, describing the time-averaged rate of the strongest exponential separation of system trajectories in the tangent bundle, is used to determine whether a dynamical system is stable or chaotic. For λ1 < 0, nearby trajectories converge, while λ1 > 0 implies divergence of nearby trajectories and is the hallmark of chaos. At the critical point, λ1 = 0; in its neighborhood, the system experiences a critical slowing down of the dynamics, where small perturbations can have long-lasting effects. We numerically determined λ1 using the local linearization along the system's trajectory, i.e., the Jacobian matrix of the neural network.35,36 This powerful method not only yields λ1 but provides the whole Lyapunov spectrum, i.e., all Lyapunov exponents of the system.

Every neuron lives in a three-dimensional state space: the two map variables x_n^{(i)} and y_n^{(i)}, and the synaptic input variable I_n^{(i)}, into which the temporally sparse external inputs are also incorporated. The Jacobian matrix for a single neuron has the form

\[
J_n^{(i)} =
\begin{cases}
\begin{bmatrix} \dfrac{\psi}{(1 - x_n^{(i)})^2} & 1 & \beta \\ -\mu & 1 & \mu \\ -\Theta_n^{(i)} & 0 & \eta \end{bmatrix}, & x_n^{(i)} \le 0,\\[3ex]
\begin{bmatrix} 0 & 1 & \beta \\ -\mu & 1 & \mu \\ -\Theta_n^{(i)} & 0 & \eta \end{bmatrix}, & 0 < x_n^{(i)} < \psi + u_n^{(i)} \ \text{and}\ x_{n-1}^{(i)} \le 0,\\[3ex]
\begin{bmatrix} 0 & 0 & 0 \\ -\mu & 1 & \mu \\ -\Theta_n^{(i)} & 0 & \eta \end{bmatrix}, & x_n^{(i)} \ge \psi + u_n^{(i)} \ \text{or}\ x_{n-1}^{(i)} > 0,
\end{cases}
\]
(A6)

where Θ_n^{(i)} = W (Σ_{j=1}^{N} w_ij ξ_n^{(j)} + w_ext ξ_n^{ext(i)}). The extension of the single-neuron Jacobian matrix to the full network is straightforward: the state variables of a neuron do not directly depend on the state variables of other neurons, because the interaction is only through spike events. We can therefore write the Jacobian of the full network, J_n^{net}, as a 3N × 3N block-diagonal matrix with the Jacobians of the individual neurons on the diagonal and all other elements equal to 0.
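A sketch of the assembly of J_n^{net} from Eq. (A6) (our own code and naming; x, x_prev, u, xi, xi_ext are the network-wide state arrays at step n):

```python
import numpy as np

PSI, MU, BETA, ETA = 3.6, 0.001, 0.133, 0.75

def network_jacobian(x, x_prev, u, xi, xi_ext, w, W, w_ext=0.6):
    """3N x 3N block-diagonal Jacobian of Eq. (A6); off-diagonal blocks
    vanish because neurons couple only through spike events."""
    N = x.size
    theta = W * (w @ xi + w_ext * xi_ext)   # Theta_n^(i) from the text
    J = np.zeros((3 * N, 3 * N))
    for i in range(N):
        if x[i] <= 0.0:
            dx = [PSI / (1.0 - x[i]) ** 2, 1.0, BETA]
        elif x[i] < PSI + u[i] and x_prev[i] <= 0.0:
            dx = [0.0, 1.0, BETA]
        else:
            dx = [0.0, 0.0, 0.0]
        block = np.array([dx,
                          [-MU, 1.0, MU],
                          [-theta[i], 0.0, ETA]])
        J[3 * i:3 * i + 3, 3 * i:3 * i + 3] = block
    return J
```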

Lyapunov exponents are obtained by following how a unit sphere O_n is transformed by the network Jacobian into an ellipsoid J_n^{net} O_n. The one-step growth rate of a unit-length base vector y_n^{(d)}, d = 1, …, 3N, is thus given by the length of the mapped vector ‖y_{n+1}^{(d)}‖. By applying a Gram–Schmidt orthonormalization procedure, a new unit sphere is reconstructed (with generally rotated base vectors), and the growth rates in their directions are determined anew. Maintaining the initial indexing of the unit base vectors and repeating this procedure, after n iterations the separation of trajectories in the direction described by index d is r_n^{(d)} = ‖y_1^{(d)}‖ ‖y_2^{(d)}‖ ⋯ ‖y_n^{(d)}‖. Owing to the Gram–Schmidt procedure, for large n, index d = 1 describes the largest, and index d = 3N the smallest, separation in the tangent bundle. Writing the exponential separation as r_n^{(d)} = e^{λ_d n}, the long-time (n → ∞) behavior of the system is described by the Lyapunov exponents

\[
\lambda_d = \lim_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} \ln \bigl\| y_t^{(d)} \bigr\|, \quad d = 1, \ldots, 3N,
\]
(A7)

where the sign of the first exponent indicates whether the system is “chaotic” (λ1 > 0) or not (λ1 ≤ 0). The orthonormalization procedure is computationally expensive, which makes the calculation of Lyapunov exponents for large networks slow. We calculated the Lyapunov exponents of the subcritical, critical, and supercritical networks for 10 random configurations out of the 50 that we used to obtain the avalanche size and lifetime distributions. The simulation length for these calculations was kept at 7.5 × 10^4 time steps. The Lyapunov exponents converged well, but, to account for potential fluctuations, the final value of λ_d was obtained by averaging over the last 5000 steps.
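In practice, this procedure amounts to a Benettin-type algorithm; the following sketch (our own code) uses a QR decomposition, which is numerically equivalent to the Gram–Schmidt orthonormalization described above:

```python
import numpy as np

def lyapunov_spectrum(jacobians):
    """Propagate an orthonormal frame through the Jacobians J_n^net along the
    trajectory, re-orthonormalize with QR at every step, and accumulate the
    logarithmic growth rates; returns the exponents in descending order."""
    dim = jacobians[0].shape[0]
    Q = np.eye(dim)
    log_growth = np.zeros(dim)
    for J in jacobians:
        Q, R = np.linalg.qr(J @ Q)
        d = np.diag(R)
        signs = np.where(d < 0, -1.0, 1.0)
        Q = Q * signs                        # keep a consistent orientation
        log_growth += np.log(np.maximum(np.abs(d), 1e-300))
    return np.sort(log_growth / len(jacobians))[::-1]
```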

1. J. M. Beggs and D. Plenz, “Neuronal avalanches in neocortical circuits,” J. Neurosci. 23, 11167–11177 (2003).
2. D. R. Chialvo, “Emergent complex neural dynamics,” Nat. Phys. 6, 744–750 (2010).
3. T. Mora and W. Bialek, “Are biological systems poised at criticality?,” J. Stat. Phys. 144, 268–302 (2011).
4. H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford University Press, Oxford, 1987).
5. W. L. Shew, H. Yang, S. Yu, R. Roy, and D. Plenz, “Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches,” J. Neurosci. 31, 55–63 (2011).
6. C. Haldeman and J. M. Beggs, “Critical branching captures activity in living neural networks and maximizes the number of metastable states,” Phys. Rev. Lett. 94, 058101 (2005).
7. A. Mazzoni, F. D. Broccard, E. Garcia-Perez, P. Bonifazi, M. E. Ruaro, and V. Torre, “On the dynamics of the spontaneous activity in neuronal networks,” PLoS ONE 2, e439 (2007).
8. C. Tetzlaff, S. Okujeni, U. Egert, F. Wörgötter, and M. Butz, “Self-organized criticality in developing neuronal networks,” PLoS Comput. Biol. 6, e1001013 (2010).
9. R. Stoop and F. Gomez, “Auditory power-law activation avalanches exhibit a fundamental computational ground state,” Phys. Rev. Lett. 117, 038102 (2016).
10. J. Touboul and A. Destexhe, “Power-law statistics and universal scaling in the absence of criticality,” Phys. Rev. E 95, 012413 (2017).
11. D. Berger, S. Joo, T. Lorimer, Y. Nam, and R. Stoop, “Power laws in neuronal culture activity from limited availability of a shared resource,” in Emergent Complexity from Nonlinearity, in Physics, Engineering and the Life Sciences, Springer Proceedings in Physics (Springer, 2017).
12. N. Dehghani, N. G. Hatsopoulos, Z. D. Haga, R. A. Parker, B. Greger, E. Halgren, S. S. Cash, and A. Destexhe, “Avalanche analysis from multielectrode ensemble recordings in cat, monkey, and human cerebral cortex during wakefulness and sleep,” Front. Physiol. 3, 302 (2012).
13. A. Levina, J. M. Herrmann, and T. Geisel, “Dynamical synapses causing self-organized criticality in neural networks,” Nat. Phys. 3, 857–860 (2007).
14. C. G. Langton, “Computation at the edge of chaos: Phase transitions and emergent computation,” Physica D 42, 12–37 (1990).
15. N. Bertschinger and T. Natschläger, “Real-time computation at the edge of chaos in recurrent neural networks,” Neural Comput. 16, 1413–1436 (2004).
16. R. Legenstein and W. Maass, “Edge of chaos and prediction of computational performance for neural circuit models,” Neural Networks 20, 323–334 (2007).
17. R. Stoop and N. Stoop, “Natural computation measured as a reduction of complexity,” Chaos 14, 675–679 (2004).
18. J. M. Beggs, “The criticality hypothesis: How local cortical networks might optimize information processing,” Phil. Trans. R. Soc. A 366, 329–343 (2007).
19. J. Hesse and T. Gross, “Self-organized criticality as a fundamental property of neural systems,” Front. Syst. Neurosci. 8, 166 (2014).
20. M. Magnasco, O. Piro, and G. Cecchi, “Self-tuned critical anti-Hebbian networks,” Phys. Rev. Lett. 102, 258102 (2009).
21. N. Rulkov, “Modeling of spiking-bursting neural behavior using two-dimensional map,” Phys. Rev. E 65, 041922 (2002).
22. J. P. Eckmann, O. Feinerman, L. Gruendlinger, E. Moses, J. Soriano, and T. Tlusty, “The physics of living neural networks,” Phys. Rep. 449, 54–76 (2007).
23. N. F. Rulkov, I. Timofeev, and M. Bazhenov, “Oscillations in large-scale cortical networks: Map-based model,” J. Comput. Neurosci. 17, 203–223 (2004).
24. S. A. Prescott, Y. DeKoninck, and T. J. Sejnowski, “Biophysical basis for three distinct dynamical mechanisms of action potential initiation,” PLoS Comput. Biol. 4, e1000198 (2008).
25. K. Kanders and R. Stoop, “Phase response properties of Rulkov's model neurons,” in Emergent Complexity from Nonlinearity, in Physics, Engineering and the Life Sciences, Springer Proceedings in Physics (Springer, 2017).
26. J. G. Orlandi, J. Soriano, E. Alvarez-Lacalle, S. Teller, and J. Casademunt, “Noise focusing and the emergence of coherent activity in neuronal cultures,” Nat. Phys. 9, 582–590 (2013).
27. J. P. Eckmann, S. Jacobi, S. Marom, E. Moses, and C. Zbinden, “Leader neurons in population bursts of 2D living neural networks,” New J. Phys. 10, 015011 (2008).
28. V. Pasquale, P. Massobrio, L. L. Bologna, M. Chiappalone, and S. Martinoia, “Self-organization and neuronal avalanches in networks of dissociated cortical neurons,” Neuroscience 153, 1354–1369 (2008).
29. G. Hahn, T. Petermann, M. N. Havenith, S. Yu, W. Singer, D. Plenz, and D. Nikolic, “Neuronal avalanches in spontaneous activity in vivo,” J. Neurophysiol. 104, 3312–3322 (2010).
30. N. Friedman, S. Ito, B. A. W. Brinkman, M. Shimono, R. E. L. DeVille, K. A. Dahmen, J. M. Beggs, and T. C. Butler, “Universal critical dynamics in high resolution neuronal avalanche data,” Phys. Rev. Lett. 108, 208102 (2012).
31. A. Deluca and Á. Corral, “Fitting and goodness-of-fit test of non-truncated and truncated power-law distributions,” Acta Geophys. 61, 1351–1394 (2013).
32. T. Lorimer, F. Gomez, and R. Stoop, “Two universal physical principles shape the power-law statistics of real-world networks,” Sci. Rep. 5, 12353 (2015).
33. V. Priesemann, M. Wibral, M. Valderrama, R. Pröpper, M. Le Van Quyen, T. Geisel, J. Triesch, D. Nikolic, and M. H. J. Munk, “Spike avalanches in vivo suggest a driven, slightly subcritical brain state,” Front. Syst. Neurosci. 8, 108 (2014).
34. J. P. Sethna, K. A. Dahmen, and C. R. Myers, “Crackling noise,” Nature 410, 242–250 (2001).
35. R. Stoop and P. F. Meier, “Evaluation of Lyapunov exponents and scaling functions from time series,” J. Opt. Soc. Am. B 5, 1037–1045 (1988).
36. J. Peinke, J. Parisi, O. E. Rössler, and R. Stoop, Encounter with Chaos: Self-Organized Hierarchical Complexity in Semiconductor Experiments (Springer, Berlin, 1992).
37. D. Zhou, A. V. Rangan, Y. Sun, and D. Cai, “Network-induced chaos in integrate-and-fire neuronal ensembles,” Phys. Rev. E 80, 031918 (2009).
38. M. Monteforte and F. Wolf, “Dynamic flux tubes form reservoirs of stability in neuronal circuits,” Phys. Rev. X 2, 041007 (2012).
39. R. Guttman, S. Lewis, and J. Rinzel, “Control of repetitive firing in squid axon membrane as a model for a neurone oscillator,” J. Physiol. 305, 377–395 (1980).
40. D. Stauffer and A. Aharony, Introduction to Percolation Theory (Taylor & Francis, London, 1994).