Networks of spiking neurons constitute analog systems capable of effective and resilient computing. Recent work has shown that networks of symmetrically connected inhibitory neurons may implement basic computations in a way that is resilient to system disruption. For instance, if the functionality of one neuron is lost (e.g., the neuron, along with its connections, is removed), the system may be robustly reconfigured by adapting only one global system parameter. How to effectively adapt network parameters to robustly perform a given computation, however, is still unclear. Here, we present an analytical approach to derive such parameters. Specifically, we analyze k-winners-take-all (k-WTA) computations, basic computational tasks of identifying the k largest of N input signals, from which any computation can be constructed. We identify and characterize different dynamical regimes and provide analytical expressions for the transitions between different numbers k of winners as a function of both input and network parameters. Our results thereby provide analytical insights into the dynamics underlying k-winners-take-all functionality as well as an effective way of designing spiking neural network computing systems implementing disruption-resilient dynamics.

Robustness against disruptions constitutes a key requirement for engineered information processing systems. We here discuss how networks of spiking neurons can perform effective and resilient computing. Specifically, we explore networks of symmetrically connected inhibitory neurons that can implement basic computations in a way that is resilient to system disruptions such as the complete loss of a neuron. We focus on a specific type of computation called "k-winners-take-all" (k-WTA), which identifies the k largest of a total of N input signals. Several k-WTA circuits may be combined to realize arbitrary computations. We provide an analytical approach to derive parameters that define regions of robust computation for given k and enable the system to effectively adapt after a disruption. Our results provide analytical insights into the dynamics underlying this type of computation and offer one way of designing spiking neural network computing systems that are disruption-resilient.

How can spiking neural networks compute in ways that are robust against external disruptions, such as the loss of individual components, and how can they be reconfigured to compensate for such disruptions? We here study these questions for basic computational tasks known as k-winners-take-all computations, which generalize rank ordering and may be combined to yield universal forms of computation.

Rank ordering of signals by their (average) strength is a fundamental computational operation, particularly useful in attention-related tasks in both natural and artificial systems.1–3 It provides an effective way of extracting the most important information from high-dimensional input spaces by simply ordering in terms of relevance (or strength). However, such operations are computationally costly for high-dimensional inputs and may retain, at times, a large amount of irrelevant information (depending on the application). A less costly, but closely related, operation is partial rank ordering, in which a subset of the k strongest out of N > k signals is identified. Such operations are often referred to as k-winners-take-all (k-WTA) computations. Still, even for fixed k, any system performing k-WTA computations must account for combinatorially many computational outputs (results of the task), and the number k of "winners" may vary in addition.

Besides the mathematical analysis of k-WTA computations, a variety of systems have been proposed for their implementation. In particular, partial rank ordering can be performed by neuronal networks.4,5 For example, to date, there exist multiple bio-inspired implementations of one-winner-take-all (1-WTA) functionality.6–8 More general k-WTA operations can be performed by exploiting complex periodic orbits in symmetric oscillator networks, similar to heteroclinic computing.5,9–15 Furthermore, a simple hardware implementation of a neural circuit performing WTA computations, exhibiting a mixture of excitatory and inhibitory couplings, has recently been suggested.16 However, in all these approaches, either the number of winners k is fixed or not easily reconfigurable, or computations typically take a long time.5,10–12,14,15,17,18

Recently, a fast and reconfigurable k-WTA implementation via a symmetric neural network with inhibitory pulse coupling was proposed.19 By adjusting a single parameter, the global coupling strength, the number of winners k can be chosen freely and hence adapted to different scenarios. In contrast to most existing k-WTA implementations, the system is built from very simple, identical units and typically converges within a few spikes. The network presented in Ref. 19 is a computational application of a specific multiplicative coupling scheme in which the effect of an incoming pulse depends linearly on the state of the receiving oscillator. Although closely related to more common coupling types in which the strength of the inhibition does not depend on the oscillator voltage,20 this multiplicative coupling is the most natural way to achieve the specific type of phase compression that is necessary for the k-WTA functionality.

In this article, we analytically characterize the mechanisms of k-WTA computations based on proportional inhibition. We adapt the original Mirollo–Strogatz formalism21 to multiplicative pulse coupling to analytically describe the dynamics of the k-WTA network proposed in Ref. 19 and explore under which conditions it exhibits the desired computational features. In particular, we identify parameter regions in which the system may perform specific computations and provide analytic expressions for the appropriate coupling strength as a function of external variables. In contrast to the seminal work,19 which relies on "brute-force" parameter scans and numerical integration, the current work addresses the question of how to set the global coupling strength to obtain collective dynamics with a specified number k of firing neurons with a more analytical approach, hence potentially allowing for a more direct (re-)calibration and thus reconfiguration of the computational network.

We begin with a brief review of k-WTA computations via mutual inhibition.19 Consider a system of N oscillators, neuron-like units working in an oscillatory regime. Each neuron is described by a voltage-like state variable $x_i(t)$, where $i \in \{1,\dots,N\}$ is the neuron index. The free (uncoupled) dynamics satisfies

\[ \frac{dx_i}{dt} = f_i(x_i), \quad (1) \]

where $f_i: [0,1) \to \mathbb{R}^+$ is a continuous and positive function. Moreover, the state variables $x_i$ are reset to 0 every time they cross a threshold value $x_i^{\mathrm{thr}}$ from below. Without loss of generality, take $x_i^{\mathrm{thr}} \equiv 1$. Specifically, the original work19 uses leaky integrate-and-fire neurons as the neuron dynamics, defined by

\[ f_i^{\mathrm{IF}}(x_i) = I_0 + I_i - \gamma x_i, \quad (2) \]

with positive constants $I_0$ and $\gamma < I_0$, and neuron-specific inputs $I_i > 0$, giving the periodic free time evolution

\[ x_i(t) = \frac{I_0 + I_i}{\gamma}\left(1 - e^{-\gamma t}\right), \quad (3) \]

with periodic continuation $x_i(t + n T_i) = x_i(t)$ for $n \in \mathbb{Z}$, and free period

\[ T_i = \frac{1}{\gamma}\,\ln\!\left(\frac{I_0 + I_i}{I_0 + I_i - \gamma}\right). \quad (4) \]

Moreover, a particular form of global inhibitory coupling is considered, which depends linearly on the voltage of the receiving neuron. If at time $t_j$ the state variable $x_j(t)$ of neuron $j$ reaches the threshold value $x^{\mathrm{thr}} \equiv 1$, the state variables $x_i(t)$ of all other neurons $i \neq j$ are instantaneously updated via the rule

\[ x_i(t_j^+) = (1-\epsilon)\, x_i(t_j^-), \quad (5) \]

with a global coupling constant $\epsilon \in (0,1)$. Illustratively speaking, all neurons receiving a pulse lose the fraction $\epsilon$ of their respective voltage $x_i(t_j^-)$ just before the event.
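The update rule (5) can be sketched in a few lines; the function name and the use of NumPy are our own illustrative choices, not taken from Ref. 19:

```python
import numpy as np

def apply_reset_pulse(x, j, eps):
    """Update rule of Eq. (5): when neuron j reaches threshold, every other
    neuron loses the fraction eps of its current voltage, and neuron j is
    reset to 0."""
    x = x.copy()
    mask = np.arange(len(x)) != j
    x[mask] *= (1.0 - eps)   # x_i -> (1 - eps) * x_i for i != j
    x[j] = 0.0               # the spiking neuron restarts from 0
    return x

x = np.array([0.4, 0.9, 1.0])            # neuron 2 has just hit the threshold
print(apply_reset_pulse(x, 2, eps=0.3))  # the others keep 70% of their voltage
```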

It has been shown that symmetric networks of such neuronal units may be used to perform k-WTA computations.19 An idealized diagram of a complete computing unit is shown in Fig. 1(a). In this context, the variables $I_i$ [see Eq. (2)] are taken as input signals, which determine the free dynamics and period lengths $T_i$ of the corresponding neurons, see Fig. 1(b). The reset pulses of the system not only mediate the global coupling but also provide the output of the computational system: after setting the input signals $I_i$, the network converges to a collective dynamics in which only the k fastest neurons reset, leaving the other outputs silent. The underlying mechanism can be summarized as follows. Whenever some neuron $j$ resets, the downward concavity of the free time evolution (3), together with the proportional coupling described by Eq. (5), leads to a relative compression of the voltages $x_i$ of all neurons $i \neq j$, see Fig. 1(c) (for details, see Ref. 19). If the differences in input signals $I_i$ and the global coupling strength $\epsilon$ are both sufficiently large, neurons with shorter free periods $T_i$ may overtake other neurons repeatedly, keeping them from reaching the threshold value altogether by recurrently inhibiting them, see Fig. 1(d). After a (typically short) transient, the system enters a periodic orbit in which only the k fastest of the N neurons reset and send pulses, while the neurons corresponding to the $N-k$ smallest input signals stay silent. Within any periodic orbit, the spike patterns of the individual neurons are interpreted as the output vector by discriminating only between spiking and non-spiking neurons, so that the N-dimensional, real-valued input vector $(I_1,\dots,I_N)^T$ is mapped to a binary N-dimensional vector consisting of k ones and $N-k$ zeros, identifying the k highest and $N-k$ lowest input signals. The exact number k of spiking neurons depends on the particular choice of input signals $I_i$ as well as on the global coupling strength $\epsilon$.

FIG. 1.

k-WTA dynamics in a spiking neural network with proportional inhibitory coupling. (a) Model sketch: symmetric network of N = 5 neurons, interacting via global multiplicative inhibitory coupling. The N-dimensional input vector $(I_1,\dots,I_N)^T$ sets the intrinsic frequencies of the units. The external input dynamically shapes the evolving periodic spike patterns such that only the k neurons receiving the strongest inputs cross the threshold and send pulses (here k = 2). (b) Free (uncoupled) dynamics of a single neuron, here for a standard leaky integrate-and-fire model, see Eq. (3). The current $I_i$ corresponds to the external input signal of the respective neuron i. (c) Reset of neuron i = 4 (blue): global inhibitory coupling relative to the phase of the receiving neuron decreases the difference of the voltage-like state variables $x_i(t)$ of the other neurons (only two shown): $d_2 = d_1(1-\epsilon) < d_1$. (d) Periodic orbit: if the periods of the neurons are sufficiently different due to the external driving, only some k neurons with the highest inputs ever reach threshold.


As outlined in Ref. 19, the considered spiking neural networks can be reconfigured to different required numbers of winners k and input signal differences by adapting one global parameter (the coupling strength) after, for example, the loss of a neuron (and its connections). Under some conditions on the input signal vector and the system parameters, the disrupted and reconfigured network is then capable of performing the same ( k-WTA) computation as the original non-disrupted network.

In this section, we build on numerical simulations to characterize the dependence of the number of winners k on the input signals $I_i$, $i \in \{1,\dots,N\}$, as well as on the global coupling strength $\epsilon$. For illustrative purposes, we use equally spaced input signals, as in Ref. 19,

\[ I_i = (i-1)\,\Delta I, \quad (6) \]

where a constant offset $I_0 \equiv 1$ is added to all incoming signals [see Eq. (2)] and the spacing parameter $\Delta I$ is varied to showcase the system's functionality for different ranges of input signal differences. We remark that the specific input configuration (6) is chosen only for the sake of simplicity. In particular, indexing the neurons in order of increasing frequency is arbitrary and represents no restriction. Whereas the results will depend on parameters such as the signal spacing, the method presented below applies identically across parameter settings.

Figure 2(a) shows the number of winners k as a function of the input spacing parameter $\Delta I$ and the coupling strength $\epsilon$. For each choice of $\epsilon$ and $\Delta I$, the collective dynamics is evaluated starting from 100 different random initial conditions. Throughout, we find the number of winners k to be independent of the specific initial conditions, so that the parameter space consists of cohesive regions with unique k. In particular, we find that a wide range of parameter configurations leads to period-one dynamics, in which every neuron spikes at most once, so that the sequence length p of the resulting periodic orbit (i.e., the number of reset events per orbit) equals the number of spiking neurons, p = k. However, period-one dynamics with different k are usually separated by transitional regions in which neurons spike more than once per period, p > k, see the close-up in Fig. 2(b).
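Such parameter scans can be reproduced with an event-driven simulation of the model, Eqs. (2) and (5); the sketch below is our own minimal implementation (function and parameter names as well as the transient heuristic of discarding the first half of all events are our choices, not taken from Ref. 19):

```python
import numpy as np

def simulate_kwta(I, eps, I0=1.0, gamma=0.95, n_events=400, seed=0):
    """Event-driven simulation of the pulse-coupled LIF network, Eqs. (2)
    and (5). Returns the sorted set of neurons still spiking after a
    transient (here: the first half of all events)."""
    rng = np.random.default_rng(seed)
    A = (I0 + np.asarray(I, float)) / gamma       # asymptotic voltages, all > 1
    x = rng.uniform(0.0, 1.0, size=len(A))        # random initial voltages
    spikes = []
    for _ in range(n_events):
        dt = np.log((A - x) / (A - 1.0)) / gamma  # time for each neuron to reach 1
        j = int(np.argmin(dt))                    # next neuron to hit the threshold
        x = A - (A - x) * np.exp(-gamma * dt[j])  # evolve all voltages to that event
        x[np.arange(len(x)) != j] *= 1.0 - eps    # proportional inhibition, Eq. (5)
        x[j] = 0.0                                # reset of the spiking neuron
        spikes.append(j)
    return sorted(set(spikes[n_events // 2:]))

# Equally spaced inputs as in Eq. (6): for sufficiently strong coupling,
# only the fastest neurons keep spiking.
print(simulate_kwta(I=[0.0, 0.1, 0.2, 0.3, 0.4], eps=0.5))
```

Sweeping `eps` and the spacing of `I` over a grid reproduces scans of the type shown in Fig. 2.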

FIG. 2.

Different dynamical regions of the parameter space as obtained by numerical simulations. (a) Sketches of different types of common periodic orbits. While for period-one orbits the sequence length p is equal to the number of spiking neurons, more complex orbits with p > k also occur. (b) Example network of N = 8 integrate-and-fire neurons [Eq. (3), $I_0 = 1$, $\gamma = 0.95$]: number of winners k as a function of the coupling strength $\epsilon$ and the input spacing $\Delta I$ for equally spaced external input signals $I_i = (i-1)\,\Delta I$. Opaque colors represent period-one dynamics with sequence length p = k while semi-transparent colors represent dynamics with p > k. (c) Focus on the transition between k = 3 and k = 4. While wide parts of the parameter space exhibit period-one dynamics, the transition toward larger k typically happens upon entering a transitional region with p > k reset events per period. Throughout, 100 simulations per choice of $(\Delta I, \epsilon)$, starting from different random initial conditions. For the considered system, we find the number of winners k to be unique, while the sequence length p might depend on the initial condition in the transitional regions with p > k.


In general, the exact structure of the dynamical space might depend strongly on system parameters and on the range of input signals I i. Meanwhile, “brute-force” parameter scans and direct numerical integration as done for Fig. 2 and in Ref. 19 can be computationally expensive.

Hence, in the following, we develop an analytical framework which allows us to find the system parameters that exhibit specific types of orbits in a more direct manner, without simulating the underlying system. For instance, given fixed input signals, e.g., as in Eq. (6), which coupling strengths enable computations with which number of winners k?

As the transitions between different k are closely connected to period-one orbits (or, more precisely, their absence), we put a particular emphasis on the description of period-one orbits, before generalizing our description to arbitrary dynamics.

For mathematical treatment, it is often advantageous to describe neuronal oscillator networks via a phase formalism introduced by Mirollo and Strogatz.21 It maps a system of nonlinear oscillators with linear interactions to an equivalent system of linear oscillators with nonlinear interactions. In this approach, the state of oscillator i is described by a periodic phase variable $\phi_i(t): \mathbb{R} \to [0,1]$, which increases with a constant phase velocity $d\phi_i/dt = \omega_i$ until it reaches the threshold value $\phi_i^{\mathrm{thr}} \equiv 1$ and is reset to 0. The neuron dynamics is contained in the rise function (or neuron potential) $U_i: [0,1) \to [0,1)$, $\phi_i \mapsto U_i(\phi_i)$, which is monotonically increasing, twice continuously differentiable, and usually normalized to $U_i(0) = 0$ and $U_i(1) = 1$. The interaction between neurons is mediated in terms of the transfer function

\[ H_{\epsilon,i}(\phi) = U_i^{-1}\big( (1-\epsilon)\, U_i(\phi) \big), \quad (7) \]

which describes the instantaneous reaction of oscillator i to another oscillator $j \neq i$ reaching the threshold value at time $t_j$, $\phi_j(t_j) = 1$,

\[ \phi_i(t_j^+) = H_{\epsilon,i}\big(\phi_i(t_j^-)\big). \quad (8) \]

Note that definition (7) of the transfer function implements proportional coupling [see Eq. (5)] and deviates slightly from the more commonly used original formulation, which describes systems with constant coupling.21 We also remark that for proportional coupling, negative phases $\phi_i < 0$ are not necessary, in contrast to many forms of constant inhibitory pulse coupling, see, for example, Refs. 22 and 23.
For free neuron dynamics provided in terms of a differential equation such as (1), one can set

\[ U_i(\phi) = x_i(\phi\, T_i), \quad (9) \]

with the free period $T_i$ of oscillator i. Then, the free time evolution is determined by

\[ \phi_i(t) = \phi_i(0) + \omega_i\, t \quad (10) \]

and

\[ \omega_i = 1/T_i, \quad (11) \]

where $\phi_i(t)$ is reset to 0 after reaching $\phi^{\mathrm{thr}} = 1$.
For example, the integrate-and-fire model introduced in Eq. (2) is described by the rise function

\[ U_i^{\mathrm{IF}}(\phi) = \frac{I_0 + I_i}{\gamma}\left(1 - e^{-\gamma T_i \phi}\right) \quad (12) \]

and the transfer function

\[ H_{\epsilon,i}^{\mathrm{IF}}(\phi) = -\frac{1}{\gamma T_i}\,\ln\!\Big(1 - (1-\epsilon)\big(1 - e^{-\gamma T_i \phi}\big)\Big), \quad (13) \]

with

\[ \omega_i = T_i^{-1} = \gamma\left[\ln\!\left(\frac{I_0 + I_i}{I_0 + I_i - \gamma}\right)\right]^{-1}. \quad (14) \]

As a second example, for neurons with $dx_i/dt = I_0 + I_i$, we get a linear rise function

\[ U_i(\phi) = \phi \quad (15) \]

and also a linear transfer function

\[ H_{\epsilon,i}(\phi) = (1-\epsilon)\,\phi. \quad (16) \]
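For the LIF model, the rise function (12), its inverse, and the resulting transfer function (7) can be written out explicitly; the following sketch (with our own helper names) checks that the transfer function indeed implements the proportional voltage loss of Eq. (5):

```python
import numpy as np

def U_lif(phi, I, I0=1.0, gamma=0.95):
    """LIF rise function, cf. Eq. (12): voltage as a function of phase."""
    A = (I0 + I) / gamma                       # asymptotic voltage (I0 + I)/gamma
    T = np.log(A / (A - 1.0)) / gamma          # free period, Eq. (4)
    return A * (1.0 - np.exp(-gamma * T * phi))

def U_lif_inv(x, I, I0=1.0, gamma=0.95):
    """Inverse rise function, mapping voltage back to phase."""
    A = (I0 + I) / gamma
    T = np.log(A / (A - 1.0)) / gamma
    return -np.log(1.0 - x / A) / (gamma * T)

def H_lif(phi, eps, I):
    """Transfer function of Eq. (7): H(phi) = U^{-1}((1 - eps) U(phi))."""
    return U_lif_inv((1.0 - eps) * U_lif(phi, I), I)

# The phase after a pulse corresponds to the voltage reduced by the
# factor (1 - eps), i.e., the proportional inhibition of Eq. (5):
phi = 0.6
print(np.isclose(U_lif(H_lif(phi, 0.3, I=0.2), I=0.2), 0.7 * U_lif(phi, I=0.2)))
```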

In the following sections, we derive sets of analytical conditions to determine whether a specific periodic orbit is consistent, given specific neuron models, neuron frequencies $\{\omega_i\}$, and coupling strength $\epsilon$. In doing so, we follow the general approach proposed in Ref. 24. Solving these conditions for the global coupling strength $\epsilon$ yields the adequate choices for the desired dynamics. In particular, we identify coupling strengths which lead to collective dynamics with a specific number of winners k, and hence calibrate the k-WTA system more directly, with no need for numerical simulations of the collective dynamics.

Every periodic orbit can be characterized by its sequence of neuron resets $s_i$, $i \in \{1,\dots,p\}$, where the sequence length p is the total number of reset events in the given orbit. For mathematical analysis, it is convenient to encode the reset order of neurons in terms of a mapping $\sigma: \{1,\dots,p\} \to \{1,\dots,N\}$, $i \mapsto \sigma(i) \equiv \sigma_i$, such that $(s_1,\dots,s_p) = (\sigma_1,\dots,\sigma_p)$. We also use $\sigma$ to refer to the orbit itself. The subset of spiking neurons within a specific orbit $\sigma$ is $S \equiv S_\sigma = \{\sigma_i \mid i \in \{1,\dots,p\}\}$, which contains k neurons. The complement of $S_\sigma$, $\bar{S} \equiv \bar{S}_\sigma = \{1,\dots,N\} \setminus S_\sigma$, contains the $N-k$ neurons which do not reset in the considered orbit and consequently remain silent.

Given a full description of the system parameters, the event sequence $s_1,\dots,s_p$, or equivalently the mapping $\sigma$, uniquely defines a physical orbit; that is, it determines the phases $\phi_m$, $m \in \{1,\dots,N\}$, of all neurons at all times, as well as the event times $t_i$, $i \in \{1,\dots,p\}$, at which the ith neuron in the considered orbit, neuron $\sigma_i$, spikes, relative to the time $t_1 = 0$ of the first reset event. We denote the time step between two successive reset events at times $t_i$ and $t_{i+1}$ of neurons $\sigma_i$ and $\sigma_{i+1}$, respectively, as $\Delta t_i = t_{i+1} - t_i$. As the orbits are periodic by definition, we treat $\sigma$ as cyclic, so that $\sigma_{i+p} \equiv \sigma_i$ and $\Delta t_{i+p} \equiv \Delta t_i$. Also note that the description of a physical periodic orbit in terms of a mapping $\sigma$ is unique only up to cyclic permutations of $\sigma$.

The approach for finding system parameters which allow for a specific orbit given by $\sigma$ is as follows. First, we derive a set of p periodicity conditions, each one guaranteeing the periodicity of a single reset event $s_i$, $i \in \{1,\dots,p\}$, at time $t_i$. Second, we have to ensure that the assumed spike ordering is consistent insofar as all time steps $\Delta t_i$ are positive. Third, we have to guarantee that no neuron $l \in \bar{S}$ reaches threshold when the system is in the orbit $\sigma$, leading to another set of inequalities. Finally, we have to ensure that neurons $j \in S$ reach the threshold only at reset times $t_i$ with $\sigma_i = j$, leading to additional inequalities.

In Secs. V A–V C, we demonstrate the approach by example for period-one dynamics, that is, orbits where each of the k spiking neurons resets exactly once, so that the number of reset events p is equal to the number of winners k. In doing so, we illustrate that violations of the inequalities typically correspond to transitions between different dynamical regions or numbers of winners k. For a discussion of how to treat arbitrary orbits, we refer to Appendix A.

In the following, we provide an analytical description of period-one orbits with fixed k and derive expressions for the transitions to other dynamical regimes. In doing so, we explore the microscopic mechanisms mediating these transitions and demonstrate how to characterize parameter choices that lead to a desired computational functionality.

1. Periodicity of reset events

Consider an arbitrary period-one orbit, that is, a periodic orbit with a total of p = k reset events, defined by a mapping $\sigma$, so that neuron $\sigma_i$, $i \in \{1,\dots,k\}$, is the neuron that resets at time $t_i$: $\phi_{\sigma_i}(t_i^-) = 1$ and $\phi_{\sigma_i}(t_i^+) = 0$. Because of the periodicity of the orbit, neuron $\sigma_i$ must reset again at time $t_{i+k}$,

\[ \phi_{\sigma_i}(t_{i+k}) = 1. \quad (17) \]

For fixed i, starting from $\phi_{\sigma_i}(t_i^+) = 0$, we successively find expressions for the phase $\phi_{\sigma_i}$ at later times,

\[ 1 = \omega_{\sigma_i}\,\Delta t_{i+k-1} + H_{\epsilon,\sigma_i}\Big( \omega_{\sigma_i}\,\Delta t_{i+k-2} + H_{\epsilon,\sigma_i}\big( \cdots + H_{\epsilon,\sigma_i}( \omega_{\sigma_i}\,\Delta t_i ) \cdots \big) \Big). \quad (18) \]

For each reset event $s_i$ of a neuron $\sigma_i$, $i \in \{1,\dots,k\}$, Eq. (18) expresses the periodicity condition $\phi_{\sigma_i}(t_{i+k}) = 1$ as a function of the time steps $\Delta t_i$, $i \in \{1,\dots,k\}$. Given fixed coupling strength, input signals, and system parameters, we thus have a system of k nonlinear equations of the form (18) with p = k unknowns $\Delta t_i$, one for each reset event $s_i$.

2. Constraint (1): Positive time steps

For a fixed reset sequence $\sigma$, the equation system (18) generally has a unique solution. However, if the reset order is not compatible with system parameters such as the coupling strength $\epsilon$ and the free neuron frequencies $\{\omega_{\sigma_i}\}$, negative time intervals $\Delta t_i < 0$ may occur, which are not sensible from a physical perspective. Hence, we explicitly require

\[ \Delta t_i > 0 \quad \text{for all } i \in \{1,\dots,k\} \quad (19) \]

as a set of additional constraints on the solution of the periodicity conditions (18).

3. Constraint (2): Only k neurons reach threshold

If within a specific orbit encoded by a mapping $\sigma$ fewer than N different neurons are spiking, k < N, we have to explicitly ensure that the $N - k \geq 1$ silent neurons $l \in \bar{S}$ indeed always stay below the threshold 1. Due to the monotonicity of the rise functions $U_l(\phi_l)$, the phase variables $\phi_l$ always have a local maximum just before another oscillator $\sigma_i$, $i \in \{1,\dots,k\}$, reaches threshold at time $t_i$, so that we require

\[ \phi_l(t_i) < 1 \quad \text{for all } l \in \bar{S},\; i \in \{1,\dots,k\}. \quad (20) \]

As $\phi_l(t_i)$ is a monotonically increasing function of the frequency $\omega_l$, we have to evaluate this condition only for the neuron $l \in \bar{S}$ with the largest frequency, which we denote as $\ell = \operatorname{argmax}_{l \in \bar{S}}\, \omega_l$ in the following, and require

\[ \phi_\ell(t_i) < 1 \quad \text{for all } i \in \{1,\dots,k\}. \quad (21) \]

For a fixed choice of $i \in \{1,\dots,k\}$, we use the periodicity of neuron $\ell$, $\phi_\ell(t_{i+k}) = \phi_\ell(t_i)$, to find

\[ \phi_\ell(t_i) = \omega_\ell\,\Delta t_{i+k-1} + H_{\epsilon,\ell}\Big( \cdots + H_{\epsilon,\ell}\big( \omega_\ell\,\Delta t_i + H_{\epsilon,\ell}(\phi_\ell(t_i)) \big) \cdots \Big), \quad (22) \]

which gives the phase $\phi_\ell(t_i)$ as a function of itself and the time steps $\Delta t_j$, $j \in \{1,\dots,k\}$, that are provided as the solution of the periodicity conditions (18). Solving Eq. (22) either analytically or numerically for $\phi_\ell(t_i)$ allows us to evaluate for which values of the inhibition strength $\epsilon$ indeed all phases $\phi_\ell(t_i)$, $i \in \{1,\dots,k\}$, are smaller than 1. If (20) is not satisfied, the equation system (18) is not consistent, because it neglects reset signals from at least one neuron $l \in \bar{S}$.

Note that Eq. (22) closely resembles the periodicity equation (18), but without a reset before the first (or after the last) event. Also note that because $\phi_\ell(t_i) > 0$, the phases at later times are always larger than they would be with a reset at time $t_i$, given fixed $\{\Delta t_j\}$ and free frequency $\omega_\ell$. This implies that if a neuron m with free frequency $\omega_m$ resets in a given orbit, $m \in S$, then all other neurons n with $\omega_n \geq \omega_m$ must also reset in the given orbit, as otherwise condition (20) is violated for $l = n$ at the time when neuron m spikes. Indeed, this is a central property of all self-consistent periodic orbits for the considered system type: the k spiking neurons in a given orbit are always the k neurons with the largest input signals and therefore the largest free frequencies. This also implies that if two neurons m and n have the same frequency, $\omega_m = \omega_n$, it is not possible for one neuron to spike while the other remains silent, regardless of the coupling strength, restricting the possible options for the number of winners k.

4. Constraint (3): Neurons reach threshold only at correct time

Furthermore, the equation system (18) assumes that each neuron $\sigma_i \in S$ reaches the threshold 1 only once per period, exactly at time $t_i$. Again, as the phase of every neuron has a local maximum just before it or any other neuron spikes, we consider the time points $t = t_j$, $j \in \{1,\dots,k\}$, and require as an additional set of constraints

\[ \phi_{\sigma_i}(t_j) < 1 \quad \text{for all } j \neq i. \quad (23) \]

We, therefore, require for each spiking neuron $\sigma_i$, $i \in \{1,\dots,k\}$, that the $k-1$ inequalities

\[ \omega_{\sigma_i}\,\Delta t_{j-1} + H_{\epsilon,\sigma_i}\Big( \cdots + H_{\epsilon,\sigma_i}\big( \omega_{\sigma_i}\,\Delta t_i \big) \cdots \Big) < 1, \quad j \neq i, \quad (24) \]

are satisfied. If any of the in total $k(k-1)$ conditions (24) is violated, the solution $\{\Delta t_i\}$ of the periodicity conditions (18) is inconsistent, because at least one of the k fastest neurons reaches the threshold 1 without a reset. Again, solving Eq. (24) analytically or numerically allows us to identify values of the inhibition strength $\epsilon$ or other system parameters for which the constraints (23) are satisfied.
For linear transfer functions $H_{\epsilon,i}(\phi_i) = (1-\epsilon)\,\phi_i$, the periodicity conditions (17) and the additional constraints (19), (20), and (23) can be evaluated mostly in closed form. Hence, in the following we consider linear neurons (i.e., neurons with a linear free time evolution),

\[ U_i(\phi_i) = \phi_i, \qquad H_{\epsilon,i}(\phi_i) = (1-\epsilon)\,\phi_i, \quad (25) \]

to illustrate our method and illuminate the microscopic mechanisms which lead to transitions between different numbers of winners k.

1. Periodicity of reset events

For linear neuron models, the equation system (18) can be written as

\[ \omega_{\sigma_i} \sum_{j=0}^{k-1} (1-\epsilon)^{k-1-j}\, \Delta t_{i+j} = 1 \quad (26) \]

for $i \in \{1,\dots,k\}$, or equivalently as the matrix equation

\[ A\, \Delta t = T, \quad (27) \]

with the $k \times k$ matrix

\[ A_{im} = (1-\epsilon)^{\,k-1-\left[(m-i)\, \mathrm{mod}\, k\right]} \quad (28) \]

and k-component vectors $\Delta t = (\Delta t_1,\dots,\Delta t_k)^\top$ and $T = (1/\omega_{\sigma_1},\dots,1/\omega_{\sigma_k})^\top$.

Combining Eq. (26) for two successive neurons $\sigma_i$ and $\sigma_{i+1}$, we find

\[ \left[(1-\epsilon)^k - 1\right]\Delta t_i = \frac{1-\epsilon}{\omega_{\sigma_i}} - \frac{1}{\omega_{\sigma_{i+1}}}, \quad (29) \]

which can be solved for the time step

\[ \Delta t_i = \frac{1}{1-(1-\epsilon)^k}\left(\frac{1}{\omega_{\sigma_{i+1}}} - \frac{1-\epsilon}{\omega_{\sigma_i}}\right). \quad (30) \]

This expression depends only on the free frequencies of two neurons: neuron $\sigma_i$, which resets at time $t_i$, and neuron $\sigma_{i+1}$, which resets at time $t_{i+1} = t_i + \Delta t_i$. From the $\Delta t_i$, one can reconstruct the phases $\phi_m(t)$ of all neurons $m \in \{1,\dots,N\}$ at arbitrary times (relative to an initial time $t_1$). Note that the total period

\[ T = \sum_{i=1}^{k} \Delta t_i = \frac{\epsilon}{1-(1-\epsilon)^k} \sum_{i=1}^{k} T_{\sigma_i} \quad (31) \]

is the sum over the free periods $T_{\sigma_i}$ of the k spiking neurons, $i \in \{1,\dots,k\}$, with a prefactor depending on $\epsilon$ and k. In particular, the order in which the neurons reset does not affect the total period T, as long as the considered reset order $\sigma$ is consistent with the additional constraints that we evaluate in the following.
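The closed-form time steps (30) and the total period (31) can be checked numerically; the helper below is our own sketch, assuming cyclic indexing of the reset sequence:

```python
import numpy as np

def period_one_timesteps(omega, eps):
    """Closed-form inter-spike intervals of a period-one orbit of linear
    neurons, Eq. (30); `omega` lists the free frequencies of the k spiking
    neurons in their (cyclic) reset order."""
    omega = np.asarray(omega, float)
    T_free = 1.0 / omega                              # free periods T_i
    denom = 1.0 - (1.0 - eps) ** len(omega)
    # dt_i = (1/omega_{i+1} - (1 - eps)/omega_i) / (1 - (1 - eps)^k)
    return (np.roll(T_free, -1) - (1.0 - eps) * T_free) / denom

omega = np.array([1.0, 1.1, 1.25])                    # standard ordering
dt = period_one_timesteps(omega, eps=0.3)
# The total period matches the prefactor form of Eq. (31):
print(np.isclose(dt.sum(), 0.3 * (1.0 / omega).sum() / (1.0 - 0.7 ** 3)))
```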

2. Constraint (1): Positive time steps

Physically sensible orbits require all time intervals to be positive,

\[ \Delta t_i > 0 \quad \text{for all } i \in \{1,\dots,k\}. \quad (32) \]

Consulting formula (30) for the time steps $\Delta t_i$, we write this as

\[ \frac{1}{\omega_{\sigma_{i+1}}} - \frac{1-\epsilon}{\omega_{\sigma_i}} > 0 \quad (33) \]

or, equivalently,

\[ \epsilon > 1 - \frac{\omega_{\sigma_i}}{\omega_{\sigma_{i+1}}} \quad \text{for all } i \in \{1,\dots,k\}. \quad (34) \]

Which permutations of period-one orbits minimize the right-hand side of Eq. (34) and hence are consistent for the lowest choices of the coupling strength? We can rewrite condition (34) as

\[ \epsilon > \epsilon_{\min}^{(1)} \equiv d, \quad (35) \]

where

\[ d = \max_{i \in \{1,\dots,k\}} \left(1 - \frac{\omega_{\sigma_i}}{\omega_{\sigma_{i+1}}}\right) \quad (36) \]

is the maximum relative frequency difference between successively spiking neurons within the given orbit. Hence, the most stable period-one orbits regarding condition (32) are those which minimize the maximal frequency ratio d. These are orbits in which the k fastest neurons reset in ascending order of their free frequencies $\omega_i$, which we refer to in the following as standard ordering. Of course, cyclic permutations of $\sigma$ lead to the same critical value of $\epsilon$, as they represent the same physical dynamics. Furthermore, there can exist other permutations which have the same (but not lower) critical coupling strength $\epsilon_{\min}^{(1)}$, depending on the specific frequency configuration.
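The critical value (35)/(36) is straightforward to evaluate; the short sketch below (our own naming) also illustrates that standard ordering admits lower coupling strengths than a permuted reset order:

```python
import numpy as np

def eps_min_positive_dt(omega_sequence):
    """Critical coupling from Eqs. (34)-(36): for positive time steps,
    eps must exceed d = max_i (1 - omega_{sigma_i}/omega_{sigma_{i+1}}),
    taken over successive (cyclic) pairs of the reset sequence."""
    w = np.asarray(omega_sequence, float)
    return float(np.max(1.0 - w / np.roll(w, -1)))

# Standard ordering (ascending free frequencies) minimizes d and is
# therefore consistent down to the lowest coupling strengths:
ascending = [1.0, 1.1, 1.25]
swapped = [1.1, 1.0, 1.25]
print(eps_min_positive_dt(ascending), eps_min_positive_dt(swapped))
```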

3. Constraint (2): Only k neurons reach threshold

For k < N, we explicitly require that the $(k+1)$-fastest neuron (neuron $\ell$) does not reach the threshold at any time,

\[ \phi_\ell(t) < 1 \quad \text{for all } t. \quad (37) \]

For linear neuron dynamics, Eq. (22) for the phases $\phi_\ell$ yields

\[ \phi_\ell(t_i) = (1-\epsilon)^k\, \phi_\ell(t_i) + \omega_\ell \sum_{j=0}^{k-1} (1-\epsilon)^{k-1-j}\, \Delta t_{i+j} \quad (38) \]

for each $i \in \{1,\dots,k\}$.

Subtracting $\phi_\ell(t_i)\,(1-\epsilon)^k$ and inserting the periodicity equation (26) for neuron $\sigma_i$ yields the condition

\[ \phi_\ell(t_i) = \frac{\omega_\ell}{\omega_{\sigma_i}\left[1-(1-\epsilon)^k\right]} < 1, \quad (39) \]

which can be resolved for $\epsilon$,

\[ \epsilon > 1 - \left(1 - \frac{\omega_\ell}{\omega_{\sigma_i}}\right)^{1/k}. \quad (40) \]

The right-hand side takes on its maximum for the neuron $\sigma_i = s$ with the lowest input signal, $\omega_s = \min_{j \in S}\{\omega_j\}$, of all spiking neurons, finally giving

\[ \epsilon > \epsilon_{\min}^{(2)} \equiv 1 - \left(1 - \frac{\omega_\ell}{\omega_s}\right)^{1/k}. \quad (41) \]

Note that the critical value $\epsilon_{\min}^{(2)}$ does not depend on the order in which the neurons spike in the considered orbit, so that for $\epsilon \leq \epsilon_{\min}^{(2)}$, no period-one dynamics with the given k are consistent, regardless of the order of spiking neurons. In fact, non-period-one orbits with sequence length p > k and the same k are also impossible below the critical value $\epsilon_{\min}^{(2)}$, as these require even higher coupling strengths than the period-one dynamics with the same k, compare Fig. 2. As typically $\epsilon_{\min}^{(2)} \geq \epsilon_{\min}^{(1)}$ (for k < N) and $\epsilon_{\min}^{(2)} \geq \epsilon_{\min}^{(3)}$ (see the next section), the critical value $\epsilon_{\min}^{(2)}$ usually is the lower boundary of the coupling strength for a fixed number of winners k, regardless of the orbit type, see Fig. 3.
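The bound (41) and the phase expression (39) behind it can be evaluated directly; the following sketch (our own helper names and example numbers) confirms that the silent neuron formally crosses threshold exactly at the critical coupling:

```python
def eps_min_silent(omega_l, omega_s, k):
    """Lower bound on the coupling strength from Eq. (41): the fastest
    silent neuron (frequency omega_l) must stay below threshold, where
    omega_s is the smallest frequency among the k spiking neurons."""
    return 1.0 - (1.0 - omega_l / omega_s) ** (1.0 / k)

def phi_silent_max(eps, omega_l, omega_s, k):
    """Maximal phase of the silent neuron within the orbit, Eq. (39)."""
    return omega_l / (omega_s * (1.0 - (1.0 - eps) ** k))

eps_c = eps_min_silent(0.9, 1.0, k=3)
# Just above the critical coupling the silent neuron stays below threshold;
# just below, it formally exceeds threshold and the orbit becomes inconsistent:
print(phi_silent_max(eps_c + 1e-3, 0.9, 1.0, 3) < 1.0 < phi_silent_max(eps_c - 1e-3, 0.9, 1.0, 3))
```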

FIG. 3.

Discontinuous change of dynamics at critical coupling strengths. Transition between k = 5 and k = 6 for a network of N = 8 linear neurons, with equally spaced inputs $I_i = I_0 + \Delta I\,(i-1)$. The upper boundary $\epsilon_{\max}$ of the coupling strength for period-one dynamics with k = 6 is shown as a dashed black line and the lower boundary $\epsilon_{\min}$ for period-one dynamics with k = 5 as a solid black line. The latter also corresponds to the transition from k = 5 to k = 6. Passing the critical values of $\epsilon$, the period-one orbits (left reset sequences) become inconsistent and more complex orbits with k = 6 and p > k (right reset sequences) emerge.


What happens on a mechanistic level if one starts with a consistent orbit satisfying condition (37) and continuously decreases the coupling strength, ϵ → ϵ_min^(2)+? According to Eq. (39), the maximum value of the phase ϕ_ℓ(t) within the orbit σ continuously increases and formally, at coupling strength ϵ = ϵ_min^(2), takes on the value 1 just as the k-fastest neuron (neuron s) spikes. However, as the reset pattern σ does not account for a reset of neuron ℓ, at ϵ = ϵ_min^(2) the periodic orbit σ becomes inconsistent and a discontinuous change occurs to an orbit with k + 1 spiking neurons, in which neuron ℓ also spikes. Typically, the coupling strength ϵ = ϵ_min^(2) is still too high for period-one dynamics with the new number of winners k + 1, so that we observe a more complex orbit with p > k, in which faster neurons spike multiple times while two or more slower neurons take turns. For a typical example, see Fig. 3.

4. Constraint (3): Neurons reach threshold only at correct time

Finally, we evaluate the third condition
(42)
which ensures that each of the k fastest neurons σ_i ∈ S reaches the threshold only once per period, exactly at the relative event time t_i defined by Eq. (30). Analogous to Eq. (24), for the phases of neuron σ_i, i ∈ {1, …, k}, at times t_j, j ≠ i, we find
(43)
where μ = (j − i) mod k denotes the number of reset events between t_i and t_j. For clarity of presentation, we suppress the indices of μ although it depends on j, i, and k.
Inserting the explicit expression (30) for the time steps yields
(44)
which we substitute in condition (42) to get
(45)
which must be satisfied for all i, j ∈ {1, …, k}, i ≠ j.

As we are interested in values of the coupling strength ϵ for which condition (45) is satisfied for all tuples i, j ∈ {1, …, k}, j ≠ i, for given k, {ω_i}, and spiking order σ, we have to check condition (45) only for the tuple (i, j) that maximizes ϕ_j(t_i).

Which ordering of neurons represents the most stable period-one orbit fulfilling condition (45), that is, gives the smallest maximum phase ϕ_j(t_i)? Considering Eq. (44) again, we note that ϕ_j(t_i) is larger for larger μ as well as for larger ratios ω_{σ_i}/ω_{σ_j}. We find the most stable reset pattern permutation to be the one in which all neurons reset in ascending order of their free frequencies and then start with the slowest neuron again (standard ordering); see Fig. 3. This is because for the ascending patterns, the smaller the number of reset events μ from i to j, the larger the ratio ω_{σ_i}/ω_{σ_j} becomes, such that both contributions compete. Hence, in order to identify values of ϵ where any type of period-one dynamics with given k is possible, we consider only orbits with ascending free frequencies.

In Appendix C, we show that, for any period-one orbit with standard ascending ordering, if Eq. (45) is satisfied for μ = k − 1, it also holds for any other μ ∈ {1, …, k − 2}, so that we have to consider only μ = k − 1 when evaluating condition (45). The interpretation is that the first violation of condition (45) happens in terms of some neuron i reaching threshold exactly one event “too early.”

Taken together, we end up with the condition
(46)
with d = max_i (ω_{σ_{i+1}}/ω_{σ_i}). This can be solved for the range of acceptable coupling strengths ϵ via standard root-finding procedures. We generally find a simple interval (ϵ_min^(3), ϵ_max^(3)) ⊆ [0, 1]; however, it might be empty in case no period-one dynamics with a given k are possible for a specific input configuration. For k < N, we typically find ϵ_min^(3) < ϵ_min^(2), so that condition (46) is relevant only for defining an upper boundary ϵ_max^(3) for the coupling strength.

What happens at the point where condition (46) is violated? For ϵ ≥ ϵ_max^(3), the phase of at least one neuron σ_i ∈ S starts to reach the threshold 1 at some time t_j which does not correspond to the proper reset event s_i as defined by the mapping σ. Depending on the input configuration and system details, at ϵ = ϵ_max^(3) either a more complicated orbit with p > k but the same number of winners k occurs, or the system's dynamics changes directly to period-one dynamics with a new number of winners k + 1. However, for k > 2, we usually observe the first case. For a typical example, consider Fig. 3.

5. Combining all constraints

To identify the choices of the coupling strength ϵ which give rise to period-one dynamics with a given number of winners k, one calculates, for a given input configuration ω_i := I_0 + I_i, i ∈ {1, …, N}, the ranges in which all sets of constraints (1), (2), and (3) are satisfied. As the order of reset events has no effect on constraint (2), and constraints (1) and (3) favor ascending ordering of the free frequencies, in order to find the parameter choices where any period-one dynamics with given k exist, we have to consider only the permutations in which the k fastest neurons spike in ascending order with respect to their free frequencies. For these orbits, constraints (1) and (2) each give a single minimum value for ϵ, while constraint (3) generally gives both a minimum and a maximum value. Also, for these orbits, constraint (3) automatically guarantees constraint (1) (see Appendix D for a derivation), so that for a given k, we set as the range of admissible coupling strengths
(47)
Note that this interval might be empty; in this case, for the given input configuration, there are no period-one dynamics with the given k.
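Assuming the critical values ϵ_min^(2), ϵ_min^(3), and ϵ_max^(3) have already been computed, combining the constraints amounts to a simple interval intersection. A sketch with hypothetical numerical values:

```python
def combined_range(eps_min_2, eps_min_3, eps_max_3):
    """Intersect the constraint intervals for a given k: constraints (2)
    and (3) each give a lower bound, constraint (3) an upper bound
    (constraint (1) is implied by (3) for standard ordering).
    Returns None if the intersection is empty."""
    lo = max(eps_min_2, eps_min_3)
    hi = eps_max_3
    return (lo, hi) if lo < hi else None

# hypothetical critical values for some input configuration
print(combined_range(0.20, 0.15, 0.45))  # -> (0.2, 0.45)
print(combined_range(0.50, 0.15, 0.45))  # -> None (no period-one dynamics)
```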

Figure 4 illustrates the identification of parameter regions with period-one dynamics for linear neurons with equally spaced input signals I_i = (i − 1)ΔI, for different choices of the input spacing parameter ΔI. Figure 4(a) shows both simulation results and the derived boundaries for period-one dynamics for different numbers of winners k. Indeed, the analytical boundaries are consistent with the simulation results. Figure 4(b) focuses on a smaller parameter region with k ∈ {5, 6, 7}, illustrating the transition between different k. While the lower boundary of the coupling strength ϵ for period-one dynamics with given k (solid black line) also corresponds to a transition to a larger value of k, the upper boundary of the coupling strength (dashed black line) leads to more complicated orbits with p > k but the same number of winners. For k < N, we find ϵ_min = ϵ_min^(2) throughout.

FIG. 4.

Analytical parameter identification for robust k-WTA computations with the linear neuron model. (a) Exemplary orbits for different types of dynamics. The first orbit, with neurons resetting in ascending order of their free frequencies (standard ordering), is found to exist in wider regions of the parameter space than other permutations (second orbit) with p = k. The third orbit represents a typical reset sequence with p > k. (b) The input configuration of the network consisting of N = 8 linear neurons [Eq. (15)] is given by I_i = y_i ΔI with y_i = i − 1 and varied input spacing parameter ΔI. The constant offset current is chosen as I_0 = 1. (c) and (d) Upper and lower boundaries of the coupling strength ϵ for period-one dynamics (p = k) with given number of winners k, as a function of the input difference ΔI. Generally, the lower bound (solid line) is given by ϵ_min^(2) and the upper bound by ϵ_max^(3). Colors denote the number of winners as found by numerical simulations of the system starting from 100 different random initial conditions, with opaque colors representing period-one dynamics and semi-transparent colors representing dynamics with p > k reset events per orbit with the same k.


We point out that for linear neuron dynamics, a closed-form solution for the time steps Δt_i, i ∈ {1, …, p}, is possible not only for period-one dynamics but also for arbitrarily complex dynamics with p > k; see Appendix B.

For nonlinear free (uncoupled) neuron dynamics, or more specifically, free neuron models which lead to nonlinear transfer functions, the constraints generally cannot be solved in closed form. However, the corresponding equations can easily be solved for the critical coupling strengths ϵ using standard root-finding methods. As such nonlinear neuron dynamics are commonly used for modeling biological systems as well as in engineering applications, we briefly describe the procedure, which treats each set of constraints (1), (2), and (3) [Eqs. (19), (20), and (23)] separately.

For constraint (1) (no negative time steps), the critical coupling strength is identified by solving the equation system consisting of the k periodicity conditions (17) and
(48)
for the free variables {Δt_i}, i ∈ {1, …, k}, and ϵ ∈ (0, 1).
For constraint (2) (no neuron ℓ ∈ S̄ reaches threshold), for a fixed i, we require ϕ_ℓ(t_i) = 1 in Eq. (22),
and solve the system consisting of this equation and the periodicity conditions (18) of the spiking neurons for the independent variables {Δt_j}, j ∈ {1, …, k}, and ϵ ∈ (0, 1). In general, this has to be done for each i and the maximum value of ϵ must be identified; however, for period-one dynamics, we find that the relevant value of ϵ is always obtained for i = m, where σ_m = argmin_{i∈S} ω_i is the spiking neuron with the lowest frequency.
For constraint (3) (neurons reach threshold only at the reset position defined by σ), we consider the phases ϕ_{σ_i}(t_j) of all spiking neurons σ_i ∈ S at times t_j, j ∈ {1, …, k}, j ≠ i. As we are interested in the choices of ϵ where at least one of these phases reaches 1, we require
(49)
which together with the k periodicity Eqs. (18) forms an equation system with k + 1 unknowns, {Δt_m}, m ∈ {1, …, k}, and ϵ. Generally, solutions exist for multiple choices of ϵ, so that special care has to be taken to identify all solutions in (0, 1).
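One simple way to exercise this care is to scan (0, 1) for sign changes of a scalar residual F(ϵ) and refine each bracket by bisection. The sketch below uses a toy polynomial residual as a stand-in for the actual constraint system (which would solve the periodicity conditions internally):

```python
def all_roots(f, a=0.0, b=1.0, n_scan=1000, tol=1e-10):
    """Locate all roots of a scalar residual f in (a, b) by scanning a grid
    for sign changes and refining each bracket with bisection."""
    roots = []
    xs = [a + (b - a) * i / n_scan for i in range(n_scan + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        f0, f1 = f(x0), f(x1)
        if f0 == 0.0:                  # grid point hits a root exactly
            roots.append(x0)
            continue
        if f0 * f1 < 0:                # sign change: refine by bisection
            lo, hi = x0, x1
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

# toy residual with two roots in (0, 1)
f = lambda eps: (eps - 0.25) * (eps - 0.75)
print(all_roots(f))  # -> [0.25, 0.75]
```

Note that a grid scan can miss pairs of roots closer together than the scan resolution, so n_scan should be chosen generously for narrow intervals.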
We find the microscopic mechanisms for the transitions between different orbit types to be qualitatively the same as for linear neurons (cf. Fig. 3). In general, the overall structure of the dynamical space is similar to the results for linear dynamics, with regions exhibiting period-one dynamics separated by intermediary regions with more complex dynamics. Figure 5 confirms the analytically obtained boundaries for period-one dynamics with different k against simulation results, for different input signal distributions,
(50)
While for equally spaced signals, y_i = i − 1, the transition regions with p > k are comparably small (as a function of the coupling strength ϵ and the input spacing parameter ΔI), see Figs. 5(a)–5(c), for only approximately equally spaced input signals, the transitional regions (lighter colors) take up a larger portion of the parameter space, see Figs. 5(d)–5(f). In Figs. 5(g)–5(i), we further investigate the widening of transition stripes induced by non-equally spaced input signals by deliberately choosing two inputs very close together. The main effect is that the k = 4 region extends significantly towards higher values of the coupling strength ϵ, with a noticeably larger region with p > k = 4 and a diminished period-one region for k = 3. As the inputs of neurons 5 and 6 are similar in amplitude (relative to the others), it is indeed intuitive that the choices of ϵ which allow only neuron 5, but not neuron 6, to spike are comparatively limited. However, regions with other k ∉ {3, 4} are also slightly affected, because the maximum coupling strength ϵ_max^(3) for which constraint (3) is satisfied is lowered. In Sec. V D, we illustrate how, in such scenarios of unevenly distributed input signals, dynamical regions with p > k may also be effectively exploited for computational uses.
FIG. 5.

Analytical parameter identification for robust k-WTA computations with nonlinear neuron models. Upper and lower boundaries of the coupling strength ϵ for period-one dynamics with given number of winners k, as a function of the input difference ΔI. Colors denote the number of winners as found by numerical simulations of the system. Opaque colors represent period-one dynamics and lighter colors represent dynamics with p > k reset events per orbit. (a), (d), and (g) Relative input signal strengths y_i of the input signals I_i = y_i ΔI. (b), (e), and (h) Dynamical regions as a function of ΔI and ϵ. (c), (f), and (i) Zoom-in on a specific part of the parameter space. (a)–(c) System of N = 8 leaky integrate-and-fire neurons [Eq. (3), with I_0 = 1, γ = 0.95] with equally spaced input signals (y_i = i − 1). In comparison to the linear case, the period-one regions are spread out more evenly across different coupling strengths ϵ and the regions with p > k are significantly thinner. (d)–(f) The same system with only approximately equally spaced input signals, resulting in larger regions with p > k. (g)–(i) The same system with two inputs deliberately chosen close together. While this shifts the boundaries of most period-one regions, the main effect is a significant extension of the p > k region with k = 4 towards higher coupling strengths, diminishing the period-one region with k = 3.


Depending on the distribution of the input signals I_i, the range of coupling strengths ϵ which lead to period-one dynamics might be significantly smaller than for equally spaced inputs. Hence, it can be advantageous to also consider more complicated orbits with reset sequence lengths p > k for computational applications. The analytical description of period-one orbits and the derivation of consistency constraints as demonstrated in Sec. V A can be generalized in a straightforward way to arbitrary reset patterns. Furthermore, for linear neuron models, a closed-form solution for the time intervals Δt_i is possible also for p > k. While for the mathematical description we refer to the appendix, here we briefly illustrate results for a specific class of more complex orbits with p = 2(k − 1), in which the (k − 2) fastest neurons reset twice per period while the (k − 1)- and k-fastest neurons reset only once; see Fig. 6(a). In the considered system class, these represent one of the most common more complex orbit types.

FIG. 6.

Analytical identification of parameter regions for more complex orbits. (a) Two typical orbits for more complex dynamics, with sequence length p = 2(k − 1). We find the first permutation, in which the (k − 2) fastest neurons spike in ascending order of their free frequencies (standard ordering), in wider regions of the parameter space, in particular directly at transitions. (b) and (e) Relative input signal strengths y_i define the input configuration I_i = y_i ΔI for a network consisting of N = 8 nonlinear neurons [Eq. (3) with I_0 = 1, γ = 0.95] with constant offset I_0 = 1 and varied input spacing parameter ΔI. (c), (d), (f), and (g) Upper boundaries (dashed lines) and lower boundaries (solid lines) of the coupling strength ϵ for dynamics with p = 2(k − 1) for different numbers of winners k, as a function of the input difference ΔI. We confirm our analytical predictions by showing the number of winners k as found by direct simulation of the dynamical system, with opaque colors representing regions with p = 2(k − 1) and transparent colors regions with p ≠ 2(k − 1). (b)–(d) System of N = 8 nonlinear neurons with approximately equally spaced input signals, inducing large regions with sequence length p = 2(k − 1), which are exactly identified by our approach. (e)–(g) The same system with two inputs chosen very close together. Note that while some regions with p = 2(k − 1) are very small, in particular for k = 3, considering also non-period-one dynamics for computation might be advantageous. [Throughout, we excluded p = 2(k − 1) for k = 2, as these orbits are more naturally described as period-one dynamics.]


In order to find all choices of ϵ which lead to dynamics with sequence length p = 2(k − 1) for given k, in principle we have to check all permutations of the pattern shown in Fig. 6(a) for consistency. However, similar to the period-one case, we find that patterns in which the (k − 2) fastest neurons spike in ascending order of their frequencies are the most stable ones, so that only their consistency has to be checked explicitly. Figure 6 illustrates the analytical prediction of p = 2(k − 1) dynamics for the same input configurations as in Figs. 5(d)–5(i). Apparently, while unevenly or closely spaced input signals may widen the transition stripes with p > k and reduce the range of coupling strengths ϵ corresponding to period-one dynamics, an effective re-calibration is still possible by also considering dynamics with p = 2(k − 1). In the same manner, even more complicated dynamics can be explicitly identified, potentially opening up the whole parameter space for computational uses.
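Whether a simulated orbit has a given sequence length p can be checked by extracting the minimal period of the recorded reset sequence. A sketch with a hypothetical recorded sequence for k = 3, p = 2(k − 1) = 4, where the fastest neuron resets twice per period while the two slower winners alternate:

```python
def reset_sequence_period(spike_order):
    """Smallest p such that the recorded reset sequence is p-periodic
    (the sequence length of the underlying orbit); None if no period
    is detected within the recorded window."""
    n = len(spike_order)
    for p in range(1, n // 2 + 1):
        if all(spike_order[i] == spike_order[i + p] for i in range(n - p)):
            return p
    return None

# hypothetical recorded tail of resets: neuron 2 spikes twice per period,
# neurons 0 and 1 once each -> p = 4
tail = [2, 0, 2, 1] * 5
print(reset_sequence_period(tail))  # -> 4
```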

The question of how spiking neural networks can compute in resilient and reconfigurable ways constitutes a general open problem. Here, we have explored a recent implementation of k-WTA computations, basic computational tasks that can be assembled to perform universal forms of computation, and studied their capabilities for performing resilient and reconfigurable computations. Transferring the standard phase description for oscillatory neurons to multiplicative interactions, we derived analytical conditions for the parameter regions where specific basic periodic spike sequences exist. Our results identify regions in the parameter space of coupling strengths and signal strength differences where the system can perform k-WTA tasks for different given k. Furthermore, we illuminate the mechanisms underlying the transitions between regions with different dynamics and thus different computational options. Going beyond basic periodic spike sequences, we also provide a general description for arbitrary periodic spike sequences and demonstrate how exploiting more complicated orbits may allow for a wider choice of coupling strengths, potentially making the computation more resilient. While in this work we illustrate our results for (approximately) equally spaced input signals with known input differences, the proposed method works for any type of input configuration. In particular, generalization to input vectors drawn from specific random distributions should be explored in future work to address the open question of how to ensure robust computations also when there is only moderate knowledge about the range of expected input signals.

This research was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Grant No. 419424741 and co-financed with tax funds on the basis of the budget adopted by the Saxon State Parliament through a TG70 grant (No. 10040011) (TransparNet).

The authors have no conflicts to disclose.

Georg Börner: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Resources (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Fabio Schittler Neves: Conceptualization (equal); Resources (equal); Writing – original draft (equal); Writing – review & editing (equal). Marc Timme: Conceptualization (lead); Funding acquisition (lead); Resources (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

While in the main manuscript we focus on period-one dynamics, in which each neuron resets at most once per orbit period, in the following we extend our description to arbitrary reset patterns. Again we start by stating periodicity conditions and afterwards introduce three different types of additional constraints given in terms of inequalities.

1. Periodicity

Consider an arbitrary orbit with reset sequence length p, defined by a mapping σ : {1, …, p} → {1, …, N}, such that neuron σ_i, i ∈ {1, …, p}, is the neuron that resets at time t_i, ϕ_{σ_i}(t_i^−) = 1 and ϕ_{σ_i}(t_i^+) = 0. Now let l ≤ p be the number of reset events until neuron σ_i resets again in the given orbit after resetting at time t_i: σ_{i+l} = σ_i, ϕ_{σ_i}(t_{i+l}) = 1, which leads to the periodicity conditions
(A1)
Note that although we do not use an extra index, l in general depends on i. Starting from ϕ_{σ_i}(t_i) = 0, we successively find expressions for the phase ϕ_{σ_i} at later times, up to ϕ_{σ_i}(t_{i+l}) = 1,
(A2)
Taken together, we have a system of p nonlinear equations of the form (A2), one for each reset event s_i of neuron σ_i, with p unknowns Δt_i.

2. Constraint (1): Positive time steps

As for period-one orbits, we also have to ensure for arbitrary orbits with p ≥ k that the time steps Δt_i are positive,
(A3)
which gives a set of p additional constraints on the periodicity conditions, or equivalently, a single additional condition
(A4)
which can be evaluated analogously to the case with period-one dynamics.

3. Constraint (2): Only k neurons reach threshold

If fewer than N different neurons spike within a specific orbit as encoded by σ, k < N, we have to explicitly ensure that the N − k ≥ 1 silent neurons l ∈ S̄ indeed always stay below the threshold 1. Due to the monotonicity of the rise functions U_l(ϕ_l), the phase variables ϕ_l always have a local maximum just before another neuron σ_i, i ∈ {1, …, p}, reaches threshold at time t_i, so that we require
(A5)
where we have to consider only the phase ϕ_ℓ of the fastest neuron ℓ ∈ S̄ which is not supposed to spike. For each choice of i ∈ {1, …, p}, we use the periodicity of neuron ℓ, ϕ_ℓ(t_{i+k}) = ϕ_ℓ(t_i), to find
(A6)
which gives the phase ϕ_ℓ(t_i) as a function of itself and the time steps Δt_j, j ∈ {1, …, p}, which are provided as the solution of the periodicity conditions (A2). Note that formally, the resulting condition is exactly the same as for period-one dynamics; compare Eq. (22). Again, solving Eq. (A6) analytically or numerically, one can identify values of the inhibition strength ϵ for which constraint (A5) is satisfied.

4. Constraint (3): Neurons reach threshold only at correct time

Equation system (A1) assumes that each neuron m ∈ S reaches the threshold 1 only at times t_j with σ_j = m. Again, as the phase of every neuron has a local maximum just before it or any of the other neurons spikes, we consider only time points t = t_j, with j ∈ {1, …, p}, and state as an additional set of constraints
(A7)
We, therefore, require for each i ∈ {1, …, p} that
(A8)
where l is the number of resets it takes for the specific neuron σ_i to reset again after time t_i, at time t_{i+l} (again, l generally depends on i). For all choices of i ∈ {1, …, p} together, we get a total of p(k − 1) equations. Very similarly to period-one dynamics, by examining Eq. (A8) analytically or numerically, one can identify values of the inhibition strength ϵ for which the constraints are satisfied for all i, j.

1. Periodicity

Again, for linear neurons, we can solve the periodicity conditions (A2) in closed form for the time steps Δt_i, i ∈ {1, …, p},
(B1)
where we use α = 1 − ϵ ∈ (0, 1), and h_m(i) is a polynomial in α whose exponents correspond to the distances (in numbers of reset events) to former reset events of neuron m, as seen from event s_i at time t_i,
(B2)
where δ_{σ_{i−n}, m} denotes the Kronecker delta.
For example, for the reset sequence ( 1 , 3 , 2 , 3 ), we get
(B3)
for neuron m = 1 and
(B4)
(B5)
for neuron m = 3. For the same orbit, we get for the time steps Δt_1 and Δt_2
(B6)
(B7)
(B8)
(B9)
(B10)
(B11)

2. Constraint (1): Positive time steps

Constraint (1)
(B12)
can be evaluated directly by using the solutions (B1) for the time steps.

3. Constraint (2): Only k neurons reach threshold

Employing the expression (B1) for the time steps, for constraint (2) [Eq. (A6)] we get
(B13)
where ℓ is the index of the neuron with the (k + 1)-largest frequency. In the case of period-one dynamics with sequence length p = k, this can be solved directly for ϵ,
(B14)
compare Eq. (41) in the main text.

4. Constraint (3): Neurons reach threshold only at correct time

For constraint (3), we get for the phases ϕ_{σ_i}(t_j)
(B15)
where μ = (j − i) mod p. For period-one orbits with p = k, this simplifies to
(B16)
Requiring ϕ_{σ_i}(t_j) < 1, we get
(B17)
for arbitrary orbits and
(B18)
for period-one dynamics.
Introducing α = 1 − ϵ ∈ (0, 1), we want to show that from
(C1)
follows
(C2)
for all μ ∈ {1, …, k − 2}, or equivalently,
(C3)
where δ = k − μ ∈ {2, …, k − 1} and k > 2. The interpretation is that in period-one orbits in which the k fastest neurons spike in standard (ascending) order of their intrinsic frequencies, constraint (3) [Eqs. (C2) and (C3)] is always violated first for μ = k − 1, or equivalently δ = 1, which means that some neuron i reaches threshold exactly one event too early, at time t_{i−1} instead of time t_i.
First, we define
(C4)
For period-one orbits with ascending ordering of the neuron frequencies, ω_{σ_0} < ω_{σ_1} < ⋯ < ω_{σ_k}, we have
(C5)
(C6)
Together with
(C7)
[which follows directly from the assumption (C1)] we find
(C8)
Hence,
(C9)
so that to conclude (C3) we can show
(C10)
or equivalently,
(C11)
for all δ ∈ {2, …, k − 1}. We employ natural induction, first showing inequality (C11) for δ = 2:
(C12)
(C13)
We rearrange the terms to get
(C14)
and
(C15)
Division by 1 − α^k > 0 gives
(C16)
(C17)
(C18)
(C19)
(C20)
(C21)
(C22)
Hence, Eq. (C11) is satisfied for δ = 2.
Next we show that from
(C23)
follows
(C24)
for all δ ∈ {3, …, k − 1}.
We multiply (C23) by (1 − α^k + α^{k−1}) > 0 to get
(C25)
To prove Eq. (C24), we now have to show that the right-hand side of Eq. (C25) is smaller than or equal to the right-hand side of Eq. (C24),
(C26)
which we arrange as
(C27)
and
(C28)
Division by 1 − α > 0 gives
(C29)
(C30)
(C31)
(C32)
(C33)
(C34)
Hence, we have shown that Eq. (C24) follows from Eq. (C23) (induction step), which together with Eq. (C12) (induction start) establishes inequality (C11) and thus our claim.
We show that, for linear neurons, constraint (3) for period-one orbits with ascending ordering,
(D1)
with d = max_i (ω_{σ_{i+1}}/ω_{σ_i}) > 0, compare Eq. (46), implies constraint (1) for the same type of orbits,
(D2)
compare Eq. (35). First, we rewrite Eqs. (D1) and (D2) with α = 1 − ϵ ∈ (0, 1) and claim
(D3)
(D4)
To show this, from Eq. (D3) we find
(D5)
using that (1 − α^k + α^{k−1}) > 1. We now conclude (D4) by showing that the left-hand side of (D5) is larger than α,
(D6)
(D7)
(D8)
(D9)
(D10)
where we used that α ∈ (0, 1).
Published open access through an agreement with TU Dresden