There are indications that, for optimizing neural computation, neural networks may operate at criticality. Previous approaches have used distinct fingerprints of criticality, leaving open the question of whether the different notions necessarily reflect different aspects of one and the same instance of criticality, or whether they could refer to distinct instances of criticality. In this work, we choose avalanche criticality and edge-of-chaos criticality and demonstrate for a recurrent spiking neural network that avalanche criticality does not necessarily entrain dynamical edge-of-chaos criticality. This suggests that the different fingerprints may pertain to distinct phenomena.

In biological neural networks, scale-free avalanches of neuronal firing events have suggested that such networks might preferentially operate at criticality, in particular since theoretical studies of artificial neural networks and of cellular automata have highlighted potential computational benefits of such a state. These studies adhered to notions of either edge-of-chaos criticality or avalanche criticality. Here, using a recurrent neural network of more realistic neurons than considered previously, we scrutinize whether these two manifestations of criticality necessarily co-occur. Based on this realistic paradigm, we show that a positive largest Lyapunov exponent—indicating chaotic dynamics of the network—is conserved as we tune the network from subcritical to critical and to supercritical avalanche behavior. This demonstrates that avalanche criticality does not necessarily co-occur with edge-of-chaos criticality.

## I. INTRODUCTION

In the endeavor of understanding the functioning of the brain, the hypothesis has emerged that biological neural networks might operate at criticality.^{1–3} The promise of this hypothesis is that at the critical point, the particular details of the system's individual elements and their interaction laws cease to matter.^{4} In this case, the phase transition itself dominates the behavior of the system, and many of the astounding anatomical and biophysical details of neural circuits would therefore surrender to a few very generic network properties, which would permit describing the fundamentals of the ongoing information processing and computation—at least for this case—in a simple way. Several computational advantages of criticality that render such a state particularly attractive have also been exhibited, such as optimized information transmission and capacity, increased flexibility of responses,^{5,6} and more.

A fingerprint of criticality is power law distributions of the properties exhibited by local descriptors when evaluated across the ensemble. Such distributions were found in the statistics of spontaneous avalanches in cortical tissue recorded with multi-electrode arrays^{1,7,8} and, more recently, in the auditory system.^{9} In addition to this evidence, distinct potential mechanisms for the emergence of power-law-like distributions have been suggested as well,^{10,11} and the emergence of power-law avalanche distributions has even been questioned in some experiments.^{12} As a result, the *avalanche criticality* hypothesis^{1,13} has remained controversial.

In recurrent neural networks, *edge-of-chaos*^{14} criticality has also been studied, mostly in the context of reservoir computing.^{15,16} For best task performance (in the edge-of-chaos sense of “computation”), a network requires properties somewhat analogous to those ascribed to avalanche criticality: flexibility to represent spatiotemporally diverse inputs, while essentially preserving distance relationships (i.e., similar inputs should trigger similar responses). This is indeed the case at the system's transition from stable to chaotic dynamics, which is characterized by a vanishing largest Lyapunov exponent. In the sense of computation as a reduction of the complexity of prediction,^{17} however, such a “reservoir” is not actually computing; it rather serves as a high-dimensional representation space of spatiotemporal input patterns, from which the readout neurons can sample and perform the computation. “Computation” as “the ability to transmit, store and modify information”^{14} (which has been claimed to be optimal at the *edge of chaos*) thus has a different meaning from computation seen as the step of simplifying, i.e., destroying, information.^{17}

Despite the links that have previously been drawn between edge-of-chaos and avalanche criticality,^{18,19} the precise relationship between the two is still not settled. While we are not aware of contradicting evidence, the few studies that have exhibited a simultaneous occurrence of both phase transitions^{6,20} were based on rather simple network models, with nodes having no intrinsic dynamics. In our contribution, we examine whether, in a recurrent spiking neural network model with realistic, non-trivial node dynamics, avalanche criticality in fact needs to co-occur with edge-of-chaos criticality. We will show that this is not the case.

## II. NEURAL NETWORK MODEL

Our recurrent neural network model is based on “Rulkov neurons”^{21} with dynamical synapses as the nodes on a directed graph (see Fig. 1). Its architecture reflects the general features ascribed to cortical networks: it consists of both excitatory (80%) and inhibitory (20%) neurons, its connectivity is sparse (connection probability $p_c \simeq 4\%$), and inhibitory synapses are three times stronger than the excitatory ones. The number of neurons in the network is set to *N* = 128, with $N_{ex} = 102 \simeq 0.8N$ excitatory neurons and $N_{inh} = 26 \simeq 0.2N$ inhibitory neurons (rounding to the nearest integer). This network size is chosen as a trade-off between obtaining sufficient statistics for the avalanche size distributions and a reasonable expense for calculating network Lyapunov exponents. To reflect the origin and target of the chemical signaling between neurons, the edges between our network nodes are directed. Each neuron *i* in the network has a number of “presynaptic” neurons *j* that impinge on it, and each neuron is “postsynaptic” from the perspective of its “presynaptic” neurons; the full relationship is defined by the network's weight matrix $w_{ij}$, where $i, j = 1, \ldots, N$, see Fig. 1(b). Specifically, the number of presynaptic excitatory and inhibitory nodes chosen in this work is set to $N_{ex}^{pre} = 4 \simeq N_{ex} p_c$ and $N_{inh}^{pre} = 1 \simeq N_{inh} p_c$ (rounded to the nearest integer). For every neuron in the network, $N_{ex}^{pre}$ and $N_{inh}^{pre}$ nodes are selected at random as presynaptic neighbors from the pools of excitatory and inhibitory neurons, respectively. By setting the diagonal elements of *w* to zero, self-connections are eliminated. By construction, network nodes have an in-degree $k_{in}$ of 5 or 4 (the latter case due to the elimination of self-connections). Out-degrees vary more substantially, owing to the described selection process.
In this simple and controllable way, heterogeneity is introduced in the network, where neurons with higher out-degree dominate the network activity. The obtained topology could, in a sense, be seen as a simple controllable approximation to *in vitro* dissociated neural cultures,^{22} which so far have provided the strongest evidence for critical avalanches of spike events.
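In outline, this wiring can be generated as follows (a Python sketch; all names are ours, and the inhibitory sign is carried by the reversal potentials of the synapse model rather than by *w*):

```python
import numpy as np

rng = np.random.default_rng(0)

N, N_ex, N_inh = 128, 102, 26          # 80% excitatory, 20% inhibitory
N_ex_pre, N_inh_pre = 4, 1             # presynaptic neighbors per neuron

ex_ids = np.arange(N_ex)               # neurons 0..101: excitatory
inh_ids = np.arange(N_ex, N)           # neurons 102..127: inhibitory

w = np.zeros((N, N))                   # w[i, j]: synapse from j onto i
for i in range(N):
    # draw presynaptic neighbors from the excitatory and inhibitory pools
    pre_ex = rng.choice(ex_ids, size=N_ex_pre, replace=False)
    pre_inh = rng.choice(inh_ids, size=N_inh_pre, replace=False)
    w[i, pre_ex] = 0.6                 # excitatory weight
    w[i, pre_inh] = 1.8                # tripled inhibitory weight

np.fill_diagonal(w, 0.0)               # eliminate self-connections

k_in = (w != 0).sum(axis=1)            # 5, or 4 if a self-connection was removed
k_out = (w != 0).sum(axis=0)           # out-degrees vary more substantially
```

The random draw makes in-degrees nearly uniform while leaving out-degrees broadly distributed, which is the source of the heterogeneity described above.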

To model the dynamics exhibited by a neuron of the network labeled by index $i = 1, \ldots, 128$, we use Rulkov's two-dimensional iterative map,^{21,23} where we denote the iteration step by the index *n*,
$$x_{n+1}^{(i)} = f\big(x_n^{(i)}, x_{n-1}^{(i)}, u_n^{(i)}\big), \qquad y_{n+1}^{(i)} = y_n^{(i)} - \mu\big(x_n^{(i)} + 1\big) + \mu\big(\sigma + I_n^{(i)}\big),$$

with the piecewise function

$$f(x_n, x_{n-1}, u) = \begin{cases} \psi/(1 - x_n) + u, & x_n \le 0,\\ \psi + u, & 0 < x_n < \psi + u \ \text{and} \ x_{n-1} \le 0,\\ -1, & \text{otherwise}, \end{cases}$$

where $u_n^{(i)} = y_n^{(i)} + \beta I_n^{(i)}$. $x_n^{(i)}$ models the neuron's membrane voltage, whereas $y_n^{(i)}$ describes a regulatory subsystem able to turn the firing on and off (Fig. 1(c)). $I_n^{(i)}$ contains the postsynaptic input to neuron *i* (see below). The parameter *σ* controls the state of the map (Fig. 1(d)), with $\sigma = 2 - \sqrt{\psi/(1-\mu)}$ at the bifurcation point. The parameter values for excitatory and inhibitory neurons are identical: $\psi = 3.6$, $\mu = 0.001$, $\sigma = 0.09$, $\beta = 0.133$. For these values, at $\sigma = 0.101684$, Rulkov's map undergoes a subcritical Neimark–Sacker bifurcation from silent to spiking behavior, corresponding to Class II neuron behavior.^{24} Rulkov neurons can reproduce essentially all experimentally observed spike patterns,^{23} and even finer neurobiological details, such as phase response curves of biological neurons.^{25} By this feature, our network model differs substantially from previous efforts of linking avalanche and edge-of-chaos criticality based on probabilistic binary units^{6} or analog rate neurons.^{20}
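In outline, the dynamics of a single, input-free neuron of this type can be sketched as follows (a Python sketch assuming the standard piecewise form of the non-chaotic Rulkov map; the function names and the exact update order are our assumptions, not the published implementation):

```python
# Sketch of a single, input-free Rulkov neuron; parameter values follow the
# text, the piecewise map form is our assumption (cf. Refs. 21 and 23).
psi, mu, beta = 3.6, 0.001, 0.133

def rulkov_step(x, x_prev, y, sigma, I=0.0):
    """One iteration of the two-dimensional Rulkov map (assumed form)."""
    u = y + beta * I
    if x <= 0.0:
        x_new, spike = psi / (1.0 - x) + u, False
    elif x < psi + u and x_prev <= 0.0:
        x_new, spike = psi + u, True       # spike peak ("maximum value")
    else:
        x_new, spike = -1.0, False         # reset after the spike
    y_new = y - mu * (x + 1.0) + mu * (sigma + I)
    return x_new, y_new, spike

def count_spikes(sigma, n_steps=200_000):
    """Simulate one neuron, starting slightly off the map's fixed point."""
    x = sigma - 1.0 + 1e-3                 # fixed point: x* = sigma - 1
    y = (sigma - 1.0) - psi / (2.0 - sigma)
    x_prev, spikes = x, 0
    for _ in range(n_steps):
        x_next, y, spike = rulkov_step(x, x_prev, y, sigma)
        x_prev, x = x, x_next
        spikes += spike
    return spikes
```

Below the bifurcation at $\sigma \approx 0.1017$, such a neuron relaxes to its fixed point and stays silent; slightly above it, as for the leader neuron at $\sigma = 0.103$ introduced below, it starts to emit spikes.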

Rulkov neurons interact by means of synapses that are attached at the message-receiving side of the neuron. Occasionally, synapses receive input from other neurons in the form of spikes. A spike variable $\xi_n^{(i)}$ carries the value 1 if at iteration *n*, neuron *i* has generated a spike, and the value 0 otherwise:

i.e., neuron *i* is firing at iteration *n* if $x_n^{(i)}$ attains the maximum value (red horizontal line in Fig. 1(c)). Synapses have their own dynamics, modeled by an exponential decay and a step-like increase upon presynaptic spike events as
$$I_{n+1}^{(i)} = \eta I_n^{(i)} + W \left[ \sum_{j=1}^{N} w_{ij}\, \xi_n^{(j)} \big(x_{rp} - x_n^{(i)}\big) + w_{ext}\, \xi_n^{ext(i)} \big(x_{rp}^{ex} - x_n^{(i)}\big) \right],$$

where $x_{rp}$ stands for $x_{rp}^{ex}$ or $x_{rp}^{inh}$, according to whether the presynaptic neuron *j* is excitatory or inhibitory.

Here, *η* controls the decay rate of the synaptic current, and $x_{rp}^{ex}$ and $x_{rp}^{inh}$ are the reversal potentials for excitatory and inhibitory synapses, respectively. When there is no connection between the neurons *i* and *j* ($w_{ij} = 0$, where *i* is the index of the postsynaptic neuron and *j* is the index of the presynaptic neuron, see Fig. 1(b)), or if there has not been a presynaptic spike event ($\xi_n^{(j)} = 0$), the corresponding entry in the sum vanishes. The decay parameter is chosen as $\eta = 0.75$, the reversal potentials as $x_{rp}^{ex} = 0$ and $x_{rp}^{inh} = -1.1$, and the external input weight as $w_{ext} = 0.6$. Internal excitatory connections have a weight of $w_{ij} = 0.6$ as well, whereas inhibitory connections have a tripled weight of $w_{ij} = 1.8$. By joining the role of *σ*, $I^{(i)}$ can push intrinsically silent Rulkov neurons to emit spikes (Figs. 1(d) and 1(e)). *W* is a connectivity-scaling factor; increasing it increases the coupling among the neurons without otherwise changing the architecture. This is in contrast to processes like Hebbian learning or mechanisms of synaptic plasticity used in other approaches. By changing *W*, diverse activity states can be accessed.

Being composed of intrinsically non-spiking neurons, our network requires internal or external excitatory sources to become excited. In our approach, we implemented both. The internal one is a localized source of activity representing a so-called “nucleation site”^{26} or a “leader neuron.”^{27} It is implemented by putting one of the network's excitatory neurons above the firing threshold (from $\sigma = 0.09$ to $\sigma = 0.103$), which causes this neuron to fire in a subtly chaotic manner. This firing behavior is then additionally modified by the recurrent connections from the network (cf. Figs. 1(a), 1(d), and 1(f)). The external source of activity captures the influence of noise on neuronal activity (such as spontaneous neurotransmitter vesicle release or quickly changing external stimulation). This aspect is modeled by an individual excitatory Poisson spike train input to each neuron, represented by an external spike variable $\xi_n^{ext(i)}$ that takes the value 1 when the *i*-th neuron receives an external spike and 0 otherwise,
$$\xi_n^{ext(i)} = \begin{cases} 1, & p < p_{ext},\\ 0, & \text{otherwise}, \end{cases}$$

where *p* is a random number drawn at each time step from a uniform distribution on the open interval (0,1). Choosing $p_{ext} = 6 \times 10^{-4}$ renders the external input temporally sparse.

## III. SIMULATIONS

A single simulation covered $5 \times 10^5$ time steps, of which the first 5000 steps were discarded. For each value of *W*, we picked 50 simulations that exhibited a requested level of activity (at the critical point, we occasionally increased this number). The precise requirement was that the average inter-event interval $\langle IEI \rangle$ (i.e., the average time between two subsequent spikes in the network) of a simulation run should fall into the interval $\mu_{\langle IEI \rangle} \pm \sigma_{\langle IEI \rangle}/1.5$, where $\mu_{\langle IEI \rangle}$ and $\sigma_{\langle IEI \rangle}$ are the mean and standard deviation of the $\langle IEI \rangle$ distribution across all network simulations at a particular *W*. Such sampling ensures that results reflect typical network realizations, and it permits pooling the results from different simulations.
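The selection criterion can be sketched as follows (Python; the function names and the toy spike-time arrays are ours):

```python
import numpy as np

def mean_iei(spike_times):
    """Average inter-event interval of one run
    (spike_times: sorted array of network-wide spike times)."""
    return np.mean(np.diff(spike_times))

def select_typical_runs(runs):
    """Keep runs whose <IEI> lies within mean +/- std/1.5 across all runs."""
    ieis = np.array([mean_iei(r) for r in runs])
    lo = ieis.mean() - ieis.std() / 1.5
    hi = ieis.mean() + ieis.std() / 1.5
    return [r for r, iei in zip(runs, ieis) if lo <= iei <= hi]

# toy example: five runs with <IEI> = 1, 2, 2, 2, 10; the outliers are dropped
runs = [np.arange(0, 100, s) for s in (1, 2, 2, 2, 10)]
typical = select_typical_runs(runs)
```

In the toy example, only the three runs with $\langle IEI \rangle = 2$ survive the filter.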

## IV. RESULTS: AVALANCHE CRITICALITY

Following the approach taken in experimental investigations,^{1} we chose a binning of time of width $\Delta t = \langle IEI \rangle$ and defined neuronal avalanches as the maximal extensions of nonempty adjacent bins. Experimental investigations originally focused on avalanches of spikes in population activity inferred from local field potential recordings,^{1,5} but later also individual neuron action potentials were considered.^{7,8,28–30} Avalanche size *S* was measured as the total number of spikes within the avalanche; avalanche lifetime *T* was measured by the number of time bins an avalanche spans. Our exponents characterizing the avalanche size and lifetime distributions are maximum likelihood estimates, and the goodness of fit was evaluated following Ref. 31 (see the Appendix).
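The binning-based avalanche definition can be sketched as follows (Python; the names and the toy spike times are ours):

```python
import numpy as np

def avalanches(spike_times, dt):
    """Bin spike times with width dt; avalanches are maximal runs of
    nonempty adjacent bins. Returns (sizes S, lifetimes T in bins)."""
    n_bins = int(np.max(spike_times) // dt) + 1
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0, n_bins * dt))
    sizes, lifetimes = [], []
    s = t = 0
    for c in counts:
        if c > 0:                      # avalanche continues
            s += c
            t += 1
        elif t > 0:                    # empty bin terminates the avalanche
            sizes.append(s)
            lifetimes.append(t)
            s = t = 0
    if t > 0:                          # flush an avalanche running to the end
        sizes.append(s)
        lifetimes.append(t)
    return np.array(sizes), np.array(lifetimes)

# toy example with dt = 1: bins contain 2, 1, 0, 3 spikes,
# i.e., one avalanche of (S=3, T=2) and one of (S=3, T=1)
sizes, lifetimes = avalanches(np.array([0.1, 0.2, 1.5, 3.2, 3.7, 3.9]), dt=1.0)
```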

A weight scaling $W \in (0.13, 0.139)$ exhibited subcritical, and $W \in (0.139, 0.15)$ supercritical avalanche behavior, respectively (Fig. 2). The increase of *W* led to an overall increase in network activity (Fig. 2(a)), resulting in $\langle IEI \rangle$-values of 110, 48, and 8 Rulkov time steps (rounded) for the subcritical, critical, and supercritical networks, respectively. The avalanche size distribution of the critical network follows a power law, $P(S) \propto S^{-\alpha}$, with exponent $\alpha \approx 2.45$ (Fig. 2(b)). Because of the network's finite size of *N* = 128 elements, a noisy cut-off after $S \approx 100$ must be expected.

In the subcritical case, the avalanches are generally small and their size distribution decays exponentially, while in the supercritical case, the increased number of large avalanches results in a characteristic hump toward the end of the distribution.^{32} A similar metamorphosis of the distribution shape is observed for avalanche lifetimes. At critical avalanche behavior, the lifetime distribution also follows a power law (exponent $\tau \approx 3.0$, Fig. 2(c)). The fit is, however, somewhat less convincing than the one obtained for the size distribution, which is a commonly observed phenomenon in electrophysiological experiments.^{1,28,29}

Power law distributions can, in principle, be caused by several mechanisms. To confirm that the network is at (approximate) criticality, we performed the following tests. For genuine scale-free behavior, the choice of the temporal bin size, $\tilde{\Delta}t$, should not affect the avalanche size distribution (cf. Ref. 33). Fig. 3(a) shows that in the critical case, choosing different binnings $\tilde{\Delta}t = m\Delta t$, $m \in \{0.25, 0.5, 1.0, 1.5, 2.0\}$, only mildly affects the distribution, whereas the effects are markedly stronger in the subcritical and the supercritical cases.

We moreover checked whether all avalanches of the critical state would collapse to one characteristic shape upon a corresponding rescaling of time.^{30,34} To this end, the shape *V*(*T*, *t*) of an avalanche of lifetime *T* is calculated; *V*(*T*, *t*) expresses the temporal evolution (time variable *t*) of the avalanche's shape, measured by the number of spikes emitted in the temporal bin around time *t*. For each *T*, the average avalanche shape $\langle V \rangle(T, t/T)$ is calculated. From the scaling ansatz between the mean size of avalanches $\langle S \rangle(T)$ and their lifetime *T*, $\langle S \rangle(T) \propto T^{\gamma}$, the critical exponent *γ* is obtained. From this, the universal scaling function representing the characteristic shape of all avalanches emerges as $V(t/T) = T^{1-\gamma} \langle S \rangle... $ more precisely, $V(t/T) = T^{1-\gamma} \langle V \rangle(T, t/T)$. For our “critical” network, $\langle S \rangle(T)$ indeed follows a power law, with exponent $\gamma \approx 1.37$ (Fig. 3(b)). In the case of our supercritical network, a smaller range of the function also follows a power law, which permits a comparison between the self-similarities of the avalanche shapes of the critical and the supercritical state. For the critical network, we observe a noisy collapse of the avalanche shapes of duration $T \ge 25$ (Fig. 3(c)), which is not the case for the supercritical network (Fig. 3(d)). Because of the strong influence of the intrinsically spiking neuron (for further evidence, see the Appendix), avalanche shapes of short lifetimes (*T* < 25) fail to exhibit a nice collapse. Generally, universal scaling at the shortest scales should not be expected, as the behavior of individual system parts can dominate the collective behavior there.^{34} As a final test, we examined whether the crackling noise relationship between critical exponents, $(\tau - 1)/(\alpha - 1) = \gamma$, would hold^{30,34} for our critical network.
The critical exponents of the avalanche lifetime distribution ($\tau = 3.0$), the avalanche size distribution ($\alpha = 2.45$), and the dependence of the mean avalanche size on the lifetime ($\gamma = 1.37$) fulfill the required relation remarkably well. Taken together, power law distributions, self-similarity of avalanche shapes, and an excellent fulfillment of the fundamental relation between critical exponents strongly suggest that our “critical” network is indeed from the close vicinity of a critical network state.
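The quality of this fulfillment can be checked directly from the quoted exponents:

```python
# numeric check of the crackling noise relation (tau - 1)/(alpha - 1) = gamma,
# using the exponents measured for the critical network
tau, alpha, gamma = 3.0, 2.45, 1.37
gamma_predicted = (tau - 1) / (alpha - 1)
print(round(gamma_predicted, 3))  # -> 1.379, close to the fitted gamma = 1.37
```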

## V. RESULTS: LYAPUNOV SPECTRA

To determine whether avalanche criticality is confined to the edge of chaos, we calculated the Lyapunov spectrum for the subcritical, the critical, and the supercritical cases, using the Jacobian matrix evaluated at points along the trajectory of the network's state vector^{35} (see the Appendix for further details). For the two notions of criticality to co-occur, we would expect the largest Lyapunov exponent $\lambda_1$ to be negative, vanishing, and positive for the three cases, respectively. $\lambda_1$, however, is positive in all three cases (Fig. 4). Across the full parameter neighborhood of avalanche criticality considered ($W \in (0.13, 0.15)$), its numerical value is essentially unchanged ($\lambda_1 = 0.0089$ for the subcritical and critical networks and $\lambda_1 = 0.0082$ for the supercritical network). If one Rulkov iteration is identified with a duration of $0.5\,\mathrm{ms}$, which is sometimes done to facilitate biological interpretation,^{23} values around $17\,\mathrm{s}^{-1}$ would be obtained.

Lyapunov spectra can, moreover, provide deeper insight into the observed phenomenon. Upon increasing the synaptic strength, the total number of positive Lyapunov exponents increases as well. Every positive Lyapunov exponent amplifies perturbations of the microstate to an observable macrostate change. The sum of all positive $\lambda_d$ gives the total average rate of this amplification, $H = \sum_{\lambda_d > 0} \lambda_d$. This sum is also known as the upper bound of the Kolmogorov–Sinai entropy^{36} and can be interpreted as an entropy production rate. *H* increases with stronger synaptic coupling, from 0.014 ± 0.003 (mean ± standard deviation) for the subcritical case, via 0.023 ± 0.006 for the critical case, to 0.044 ± 0.027 for the supercritical case. Therefore, although the supercritical case has a slightly smaller largest Lyapunov exponent, it loses information about a past state at a faster rate.

The majority of studies on dynamical stability in neural networks have used the perturbation method; i.e., every simulation run of the network activity was repeated after adding a random perturbation $\delta_0$ to the state vector, so that the largest Lyapunov exponent could be assessed from the evolution of the distance between the network's unperturbed and perturbed trajectories.^{6,15,16,37} This approach yields only an estimate of $\lambda_1$ and provides no information about the rest of the Lyapunov exponents. Moreover, such perturbations may be far from the limit $\delta_0 \to 0$ inherent to the definition of Lyapunov exponents, so it cannot be excluded that the largest Lyapunov exponent obtained in this way depends on the size of the perturbation; then, instead of a positive value for the first Lyapunov exponent, a negative value could emerge.^{38} The method employed here does not suffer from such potential shortcomings.

Chaotic dynamics could be a collective effect of the network interactions, or could arise simply because the nodes themselves have chaotic dynamics. To scrutinize this, we measured the largest Lyapunov exponent of the intrinsically spiking neuron in the absence of network input and found it to be $\lambda_1 \simeq 0.01$. In the presence of external input, the neuron is occasionally silenced (Fig. 1(f)). This behavior is well known from Class II neurons^{24} in the vicinity of an Andronov–Hopf bifurcation.^{39} Because the intrinsically spiking Rulkov neuron used in our simulations is close to the Neimark–Sacker bifurcation, some perturbations are able to push the neuron's state variable close to the unstable fixed point. During the time it takes to escape from the fixed point, the neuron does not fire. As a result of this occasional silencing, the long-time average of the neuron's largest Lyapunov exponent drops, embedded in the network, to $\lambda_1 \simeq 0.009$, in close agreement with the value of $\lambda_1$ found in our network. This suggests that the largest Lyapunov exponent of the network essentially captures the dynamics of the intrinsically spiking neuron. In the subcritical and critical cases, we generally find 3 to 4 additional positive Lyapunov exponents, which is close to the number of neurons that receive direct input from the intrinsically spiking neuron. Therefore, the source of chaos in our networks may originate from this single neuron's dynamics; an increase of coupling transmits its behavior more efficiently into the network.

## VI. DISCUSSION

In the analysis of local field potential avalanches in Ref. 1 and, more recently, also for cochlear activation networks,^{9} exponents $\alpha \simeq 1.5$ were measured. Mostly, experimental settings have yielded avalanche size distribution exponents $\alpha \in (1.5, 2.1)$.^{7,8} Against this background, the value of $\alpha = 2.45$ exhibited by our biologically plausible network may seem high; but similar values ($\alpha = 2.5$) were observed in simulations of bursting recurrent networks (for “background” avalanches^{26} that have been linked to critical percolation on a Cayley tree-like network, which yields the same value^{40}). Even though strong qualitative changes of the network's avalanche state were enforced, our networks exhibited chaotic dynamics throughout. This demonstrates that avalanche criticality does not necessarily co-occur with edge-of-chaos criticality. Rather, it suggests that in neural networks with non-trivial node dynamics, two separate phase transitions may occur. The high variability of the exponent *α* may express the different network and dynamical conditions under which avalanche criticality is possible, and may point to a link between avalanche and edge-of-chaos criticality, albeit of a much weaker form than is usually assumed (generally, we expect that higher values of this exponent will be related to stronger computational performance).

Our findings suggest that a full analysis of artificial and simulated neuronal networks should include, in addition to the analysis of avalanche behavior, an analysis of the dynamical state: results regarding avalanche criticality obtained for a non-chaotic network might not be relevant for a chaotic network with unpredictable patterns of activity. In addition, our study highlights a “paradox” that may be of importance for understanding biological network behavior: upon an increase of the synaptic coupling, chaos may intensify in the sense of a larger entropy production rate, while losing coherence, as indicated by a decrease of the largest Lyapunov exponent. As the overall coupling *W* is increased beyond avalanche criticality in our network, a distinguished substantial maximum of the entropy production emerges, at a close but clearly distinguishable distance from the avalanche-critical point. For what class of networks this observation holds more generally will be an investigation of interest in its own right.

## ACKNOWLEDGMENTS

This work was supported by the Swiss National Science Foundation Grant (No. 200021 153542/1), an internal grant of ETHZ (ETH-37 152), and a Swiss-Korea collaboration grant (IZKS2_162190).

### APPENDIX: METHOD DETAILS

##### 1. Distribution parameter estimation

The theoretical fits to the avalanche size and lifetime distributions were found following the guidelines in Ref. 31. We assume that the observations **o** (avalanche size or lifetime) were sampled independently from a distribution $p(o|\alpha)$ parametrized by *α*. The likelihood of the parameter *α* is given by the probability of the observations **o**, given *α*,

$$\mathcal{L}(\alpha|\mathbf{o}) = \prod_{m=1}^{M} p(o_m|\alpha),$$

where *M* is the number of samples. In practice, we used the logarithm of the likelihood, $\ell(\alpha|\mathbf{o}) = \ln \mathcal{L}(\alpha|\mathbf{o})$, which allows the product to be replaced by a sum. The log-likelihood has its maximum at the same *α* as the likelihood, owing to the monotonicity of the logarithm. Thus, the maximum likelihood estimator of the parameter *α* is

$$\hat{\alpha} = \operatorname*{arg\,max}_{\alpha}\, \ell(\alpha|\mathbf{o}).$$

In the case of a discrete, truncated power law distribution of *o* with scaling exponent *α* within the bounds $o_{min} = a$ and $o_{max} = b$, the probability of $o_m$ is

$$p(o_m|\alpha) = \frac{o_m^{-\alpha}}{\sum_{o=a}^{b} o^{-\alpha}},$$

which gives the log-likelihood that needs to be maximized as

$$\ell(\alpha|\mathbf{o}) = -\alpha \sum_{m=1}^{M} \ln o_m - M \ln \sum_{o=a}^{b} o^{-\alpha}.$$

Similarly, we can derive the log-likelihood of the decay constant, *ε*, of a discrete, truncated exponential distribution of *o* as

$$\ell(\varepsilon|\mathbf{o}) = -\varepsilon \sum_{m=1}^{M} o_m - M \ln \sum_{o=a}^{b} e^{-\varepsilon o}.$$

To determine the range of our fits, we fitted each distribution over a range $[a, b]$, where *b* was a conveniently chosen cutoff, and followed the procedure outlined in Ref. 31. The maximum likelihood fit was used to generate 1000 surrogate data sets. The surrogates were then fit, in turn, using maximum likelihood, and for each fit, the Kolmogorov–Smirnov distance was calculated. As a measure of plausibility of a fit for a given distribution type, p-values were calculated as the fraction of surrogate data sets with a larger Kolmogorov–Smirnov distance (a worse fit) than that of the corresponding experimental data fit. The value of *a* was chosen as the lowest value for which the p-value exceeded 0.05.
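The estimator can be sketched as follows (Python; a grid search stands in for the numerical maximization, the surrogate-based determination of *a* is omitted, and all names as well as the synthetic test data are ours):

```python
import numpy as np

def fit_powerlaw_mle(obs, a, b, grid=np.arange(1.5, 3.5, 0.01)):
    """Grid-search maximum likelihood estimate of the exponent of a
    discrete power law truncated to [a, b]."""
    o = np.arange(a, b + 1)
    sum_log, M = np.log(obs).sum(), len(obs)
    # log-likelihood: -alpha * sum(ln o_m) - M * ln(normalization)
    ll = [-al * sum_log - M * np.log(np.sum(o ** (-al))) for al in grid]
    return grid[int(np.argmax(ll))]

# deterministic check: a sample whose empirical distribution matches a
# truncated power law with exponent 2.5 should yield alpha_hat close to 2.5
a, b = 1, 50
o = np.arange(a, b + 1)
counts = np.round(1e6 * o ** (-2.5)).astype(int)
obs = np.repeat(o, counts)
alpha_hat = fit_powerlaw_mle(obs, a, b)
```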

##### 2. Avalanche shapes

The obtained individual avalanche shapes *V*(*T*, *t*) are highly variable. To improve the statistics for the calculation of the mean avalanche shape $\langle V \rangle(T, t)$, avalanches of lifetimes $T \pm 2$ were also included (Fig. 5). The avalanche statistics for each lifetime *T* thus covered 100–7500 samples, with the smaller sample numbers corresponding to the larger avalanches.

In the case of the critical network, the periodicity of the intrinsically spiking neuron is roughly 240 Rulkov time steps (jittering around 235–247 time steps, with the mode of the distribution at 237), which is equivalent to about 5 temporal bins. The peaks in the average avalanche shapes are at $t = 3, 8, 13, 18, 23$ and thus are each separated by one mean interspike interval of the intrinsically firing neuron. For larger *t*, the peaks become less prominent, their spacing varies more, and this relationship is no longer apparent.

##### 3. Calculation of Lyapunov spectra

The largest Lyapunov exponent $\lambda_1$, describing the time-averaged rate of the strongest exponential separation of system trajectories in the tangent bundle, is used to determine whether a dynamical system is stable or chaotic. For $\lambda_1 < 0$, nearby trajectories converge, while $\lambda_1 > 0$ implies divergence of nearby trajectories and is the hallmark of chaos. At the critical point, $\lambda_1 = 0$; in its neighborhood, the system experiences a critical slowing down of the dynamics, where small perturbations can have long-lasting effects. We numerically determined $\lambda_1$ using the local linearization along the system's trajectory, i.e., the Jacobian matrix of the neural network.^{35,36} This powerful method not only yields $\lambda_1$ but provides the whole Lyapunov spectrum, i.e., all Lyapunov exponents of the system.

Every neuron lives in a three-dimensional state space: the two state variables $x_n^{(i)}$ and $y_n^{(i)}$, and the synaptic input variable $I_n^{(i)}$, into which the temporally sparse external inputs are also incorporated. The Jacobian matrix for a single neuron has the form

where $\Theta_n^{(i)} = -W\big(\sum_{j=1}^{N} w_{ij}\, \xi_n^{(j)} + w_{ext}\, \xi_n^{ext(i)}\big)$. The extension of the single-neuron Jacobian matrix to the full network is straightforward: the state variables of a neuron do not directly depend on the state variables of other neurons, because the interaction is only through spike events. We can therefore write the Jacobian of the full network, $J_n^{net}$, as a $3N \times 3N$ block-diagonal matrix with the Jacobians of the individual neurons on the diagonal and all other elements equal to 0.

Lyapunov exponents are obtained by following how a unit sphere $O_n$ is transformed by the Rulkov Jacobian into an ellipsoid $J_n^{net} O_n$. A one-step growth rate of a unit-length base vector $y_n^{(d)}$, $d = 1, \ldots, 3N$, is thus given by the length of the mapped vector $\Vert y_{n+1}^{(d)} \Vert$. By applying a Gram–Schmidt orthonormalization procedure, a new unit sphere is reconstructed (with generally rotated base vectors), and the growth rates in their directions are determined anew. Maintaining the initial indexing of the unit base vectors and repeating this procedure, after *n* iterations the separation of trajectories in the direction described by index *d* is $r_n^{(d)} = \Vert y_1^{(d)} \Vert\, \Vert y_2^{(d)} \Vert \cdots \Vert y_n^{(d)} \Vert$. Owing to the Gram–Schmidt procedure, for *n* large, index *d* = 1 describes the largest, and index $d = 3N$ the smallest, separation in the tangent bundle. If we are interested in the exponential separation $r_n^{(d)} = e^{\lambda_d n}$, the long-time ($n \to \infty$) behavior of the system is described by the Lyapunov exponents

$$\lambda_d = \lim_{n \to \infty} \frac{1}{n} \ln r_n^{(d)},$$

where the sign of the first exponent indicates whether the system is “chaotic” $(\lambda_1 > 0)$ or not $(\lambda_1 \le 0)$. The orthonormalization procedure is computationally expensive, which makes the calculation of Lyapunov exponents for large networks slow. We calculated the Lyapunov exponents of the subcritical, critical, and supercritical networks for 10 random configurations out of the 50 that we used to obtain the avalanche size and lifetime distributions. The simulation length for these calculations was kept at $7.5 \times 10^4$ time steps. The Lyapunov exponents converged well but, to account for potential fluctuations, the final value of $\lambda_d$ was obtained by averaging over the last 5000 steps.
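The procedure can be sketched generically as follows (Python; a QR decomposition implements the Gram–Schmidt step, and the Hénon map stands in for the network as a small system with known exponents — all names are ours):

```python
import numpy as np

def lyapunov_spectrum(jacobian, x0, n_steps, step):
    """Lyapunov spectrum via Jacobians and repeated QR (Gram-Schmidt)
    re-orthonormalization along the trajectory."""
    x = np.asarray(x0, dtype=float)
    dim = x.size
    Q = np.eye(dim)                            # current orthonormal frame
    log_r = np.zeros(dim)                      # accumulated log growth rates
    for _ in range(n_steps):
        J = jacobian(x)
        Q, R = np.linalg.qr(J @ Q)             # re-orthonormalize the frame
        log_r += np.log(np.abs(np.diag(R)))    # one-step growth rates
        x = step(x)
    return np.sort(log_r / n_steps)[::-1]      # descending: lambda_1 first

# demo on the Henon map, whose exponents are well known
a, b = 1.4, 0.3
henon = lambda v: np.array([1.0 - a * v[0] ** 2 + v[1], b * v[0]])
henon_jac = lambda v: np.array([[-2.0 * a * v[0], 1.0], [b, 0.0]])

v = np.array([0.1, 0.1])
for _ in range(1000):                          # discard a transient
    v = henon(v)

lams = lyapunov_spectrum(henon_jac, v, 50_000, henon)
# lams[0] should be near 0.42, and lams must sum to ln(0.3),
# the constant value of log|det J| for the Henon map
```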