Networks of spiking neurons constitute analog systems capable of effective and resilient computing. Recent work has shown that networks of symmetrically connected inhibitory neurons may implement basic computations such that they are resilient to system disruption. For instance, if the functionality of one neuron is lost (e.g., the neuron, along with its connections, is removed), the system may be robustly reconfigured by adapting only one global system parameter. How to effectively adapt network parameters to robustly perform a given computation is still unclear. Here, we present an analytical approach to derive such parameters. Specifically, we analyze k-winners-take-all (k-WTA) computations, basic computational tasks of identifying the k largest among N input signals, from which one can construct any computation. We identify and characterize different dynamical regimes and provide analytical expressions for the transitions between different numbers of winners as a function of both input and network parameters. Our results thereby provide analytical insights about the dynamics underlying k-winner-take-all functionality as well as an effective way of designing spiking neural network computing systems implementing disruption-resilient dynamics.
Robustness against disruptions constitutes a key requirement for engineered information processing systems. We here discuss how networks of spiking neurons can perform effective and resilient computing. Specifically, we explore networks of symmetrically connected inhibitory neurons that can implement basic computations in a way that is resilient to system disruption such as the complete loss of a neuron. We focus on a specific type of computation called “k-winners-take-all” (k-WTA) that identifies the k largest signals out of a total of N input signals. Several k-WTA circuits may be combined to realize arbitrary computations. We provide an analytical approach to derive parameters that define regions of robust computation for given k and enable the system to effectively adapt after a disruption. Our results provide analytical insights about the dynamics underlying this type of computation and offer one way of designing spiking neural network computing systems that are disruption-resilient.
I. BACKGROUND
How can spiking neural networks compute in ways that are robust against external disruptions such as the loss of individual components, and how can they be reconfigured to compensate for disruptions? We here study these questions for basic computational tasks known as k-winner-take-all computations, which are generalizations of rank ordering and which may be combined to yield universal forms of computation.
Rank ordering of signals by their (average) strength is a fundamental computational operation, particularly useful in attention-related tasks in both natural and artificial systems.1–3 It provides an effective way of extracting the most important information from high-dimensional input spaces by simply ordering in terms of relevance (or strength). However, such operations are computationally costly for high-dimensional inputs and may, depending on the application, retain a large amount of irrelevant information. A less costly, but closely related, operation is partial rank ordering, in which a subset of the k strongest out of N signals is identified. Such operations are often referred to as k-winners-take-all (k-WTA) computations. Still, already for fixed k, combinatorially many computational outputs (results of the task) need to be accounted for by any system that performs k-WTA computations, and the number of “winners” may in addition be variable.
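As a point of reference independent of any network dynamics, the k-WTA input–output map itself can be stated in a few lines. The following Python sketch (function name and interface are ours, purely for illustration) marks the k largest of N real-valued inputs with ones and the rest with zeros:

import numpy as np

def kwta_reference(signals, k):
    # Reference k-WTA map: 1 for the k largest of the N inputs, 0 otherwise.
    signals = np.asarray(signals, dtype=float)
    winners = np.zeros(signals.size, dtype=int)
    winners[np.argsort(signals)[-k:]] = 1  # indices of the k strongest inputs
    return winners

print(kwta_reference([0.3, 1.2, 0.7, 0.9, 0.1], k=2))  # -> [0 1 0 1 0]

Any system performing k-WTA computations has to reproduce this map, with its combinatorially many possible output patterns, up to ties among equal inputs.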
Besides the mathematical analysis of k-WTA computations, a variety of systems have been proposed for their implementation. In particular, partial rank ordering can be performed by neuronal networks.4,5 For example, there exist to date multiple bio-inspired implementations of one-winner-takes-all (1-WTA) functionality.6–8 More general k-WTA operations can be performed by exploiting complex periodic orbits in symmetrical oscillator networks, similar to heteroclinic computing.5,9–15 Furthermore, a simple hardware implementation of a neural circuit performing WTA computations, exhibiting a mixture of excitatory and inhibitory couplings, has recently been suggested.16 However, with all these approaches, either the number of winners is fixed or not easily reconfigurable, or computations typically take a long time.5,10–12,14,15,17,18
Recently, a fast and re-configurable k-WTA implementation via a symmetric neural network with inhibitory pulse coupling was proposed.19 By adjusting a single parameter, the global coupling strength, the number of winners k can be chosen freely and hence be adapted to different scenarios. In contrast to most existing k-WTA implementations, the system is made up of very simple, identical parts and usually converges within only a few spikes. The network presented in Ref. 19 is a computational application of a specific multiplicative coupling scheme in which the effect of an incoming pulse depends linearly on the state of the receiving oscillator. Although closely related to more common coupling types where the strength of the inhibition does not depend on the oscillator voltage,20 this multiplicative coupling is the most natural way to achieve the specific type of phase compression that is necessary for the k-WTA functionality.
In this article, we analytically characterize the mechanisms of k-WTA computations based on proportional inhibition. We adapt the original Mirollo–Strogatz formalism21 to multiplicative pulse coupling to analytically describe the dynamics of the k-WTA network proposed in Ref. 19 and explore under which conditions it exhibits the desired computational features. In particular, we identify parameter regions in which the system may perform specific computations and provide analytic expressions for the appropriate coupling strength as a function of external variables. In contrast to the seminal work,19 which uses “brute-force” parameter scans and numerical integration, the current work addresses, with a more analytical approach, the question of how to set the global coupling strength in order to obtain collective dynamics with a specified number of firing neurons, hence potentially allowing for a more direct (re-)calibration and thus reconfiguration of the computational network.
II. k-WTA COMPUTATIONS VIA INHIBITORY PULSE-COUPLING
It has been shown that symmetrical networks of such neuronal units may be used to perform k-WTA calculations.19 An idealized diagram of a complete computing unit is shown in Fig. 1(a). In this context, the variables defined in Eq. (2) are taken as input signals, which determine the free dynamics and period lengths of the corresponding neurons, see Fig. 1(b). The reset pulses of the system not only mediate the global coupling, but also provide the output of the computational system: After setting the input signals, the network converges to a collective dynamics in which only the k fastest neurons reset, leaving the other outputs silent. The underlying mechanism can be summarized as follows. Whenever some neuron resets, the downward concavity of the free membrane potential evolution, together with the proportional coupling as described by Eq. (5), leads to a relative compression of the voltages of all other neurons, see Fig. 1(c) (for details, see Ref. 19). If the differences in input signals and the global coupling strength are both sufficiently large, neurons with shorter free period may overtake other neurons repeatedly, keeping them from reaching the threshold value altogether by recurrently inhibiting them, see Fig. 1(d). After a (typically short) transient, the system enters a periodic orbit in which only the k fastest out of all N neurons reset and send pulses, while the neurons corresponding to the N − k smallest input signals stay silent. Within any periodic orbit, the spike patterns of the individual neurons are interpreted as the output vector by discriminating only between spiking and non-spiking neurons, so that the N-dimensional, real-valued input vector is mapped to a binary N-dimensional vector consisting of k ones and N − k zeros, identifying the k highest and N − k lowest input signals. The exact number k of spiking neurons depends on the particular choice of input signals as well as on the global coupling strength.
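To make this mechanism concrete, the following event-driven Python sketch simulates a small network of this type. The single-neuron model (leaky integrate-and-fire with dV_i/dt = I_i − V_i, threshold 1, reset to 0) and the multiplicative inhibition V_j → (1 − eps)·V_j upon each received pulse are our own illustrative assumptions; they reproduce the ingredients named above (downward-concave free dynamics and state-proportional inhibition) but are not meant to match the exact equations of Ref. 19.

import numpy as np

def simulate_kwta(I, eps, n_events=200, seed=0):
    # Event-driven simulation of N inhibitory, pulse-coupled neurons.
    # Assumed model: dV_i/dt = I_i - V_i, threshold 1, reset to 0;
    # on every reset, all other voltages are scaled as V_j -> (1 - eps) * V_j.
    rng = np.random.default_rng(seed)
    I = np.asarray(I, dtype=float)
    V = rng.uniform(0.0, 0.5, size=I.size)          # random initial voltages
    spikers = []
    for _ in range(n_events):
        with np.errstate(divide="ignore", invalid="ignore"):
            t_hit = np.log((I - V) / (I - 1.0))     # time to reach threshold 1
        t_hit[(I <= 1.0) | ~np.isfinite(t_hit)] = np.inf
        j = int(np.argmin(t_hit))                   # next neuron to spike
        if not np.isfinite(t_hit[j]):
            break                                   # no neuron reaches threshold
        V = I + (V - I) * np.exp(-t_hit[j])         # advance all voltages
        V[j] = 0.0                                  # reset the spiking neuron
        V[np.arange(I.size) != j] *= (1.0 - eps)    # multiplicative inhibition
        spikers.append(j)
    # neurons still spiking in the second half of the run are the "winners"
    return sorted(set(spikers[len(spikers) // 2:]))

print(simulate_kwta(I=[1.10, 1.15, 1.20, 1.25, 1.30], eps=0.4))

Running the sketch for different values of eps shows how the set of persistently spiking neurons, and hence the effective k, changes with the coupling strength.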
As outlined in Ref. 19, the considered spiking neural networks can be reconfigured to different required numbers of winners and input signal differences by adapting one global parameter (the coupling strength) after, for example, the loss of a neuron (and its connections). Under some conditions on the input signal vector and the system parameters, the disrupted and reconfigured network is then capable of performing the same (k-WTA) computation as the original non-disrupted network.
III. NUMERICAL CHARACTERIZATION OF DYNAMICAL REGIONS
Figure 2(a) shows the number of winners k as a function of the input spacing parameter and the coupling strength. For each parameter choice, the collective dynamics is evaluated, starting from 100 different random initial conditions. Throughout, we find the number of winners to be independent of the specific initial conditions, so that the parameter space consists of cohesive regions with unique k. In particular, we find that a wide range of parameter configurations leads to period-one dynamics. In these, every neuron spikes at most once, so that the sequence length of the resulting periodic orbit (i.e., the number of reset events per orbit) is equal to the number of spiking neurons k. However, period-one dynamics with different k are usually divided by transitional regions in which some neurons spike more than once per period, i.e., the sequence length exceeds k, see the close-up in Fig. 2(b).
In general, the exact structure of the dynamical space might depend strongly on system parameters and on the range of input signals. At the same time, “brute-force” parameter scans and direct numerical integration, as done for Fig. 2 and in Ref. 19, can be computationally expensive.
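For orientation, such a scan can be written down directly in terms of the simulate_kwta() sketch from Sec. II; the equally spaced inputs I_i = 1.05 + i*delta used below are an assumption for illustration only. The resulting grid of winner counts is the kind of map shown in Fig. 2, obtained here purely by repeated simulation, which is exactly the cost the analytical treatment below avoids.

import numpy as np

N = 5
deltas = np.linspace(0.01, 0.20, 20)        # input spacing values
epsilons = np.linspace(0.05, 0.90, 30)      # coupling strengths
k_map = np.zeros((deltas.size, epsilons.size), dtype=int)
for a, delta in enumerate(deltas):
    I = 1.05 + delta * np.arange(N)         # equally spaced suprathreshold inputs
    for b, eps in enumerate(epsilons):
        k_map[a, b] = len(simulate_kwta(I, eps))   # number of winners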
Hence, in the following, we develop an analytical framework that allows one to find system parameters which exhibit specific types of orbits in a more direct manner, without simulating the underlying system. For instance, given fixed input signals, e.g., as given by Eq. (6), which coupling strengths enable computations with a desired number of winners k?
As the transitions between different k are closely connected to period-one orbits (or, more precisely, their absence), we put a particular emphasis on the description of period-one orbits, before generalizing our description to arbitrary dynamics.
IV. PHASE FORMALISM FOR MULTIPLICATIVE COUPLING
V. ANALYTICAL CHARACTERIZATION OF DYNAMICAL REGIONS
In the following sections, we derive sets of analytical conditions to determine whether a specific periodic orbit is consistent, given specific neuron models, neuron frequencies, and coupling strength. In doing so, we follow the general approach proposed in Ref. 24. Solving these conditions for the global coupling strength yields the adequate choices for the desired dynamics. In particular, we identify coupling strengths which lead to collective dynamics with a specific number of winners k, and hence calibrate the k-WTA system more directly, with no need for numerical simulations of the collective dynamics.
Every periodic orbit can be characterized by its sequence of neuron resets, where the sequence length is the total number of reset events in the given orbit. For mathematical analysis, it is convenient to encode the reset order of the neurons in terms of a mapping from event index to neuron index. We also use this mapping to refer to the orbit itself. The subset of spiking neurons within a specific orbit contains k neurons. Its complement contains the N − k neurons which do not reset in the considered orbit and consequently remain silent.
Given a full description of the system parameters, the event sequence, or equivalently, the reset mapping, uniquely defines a physical orbit, that is, it determines the phases of all neurons at all times, as well as the event times at which the jth neuron in the considered orbit spikes, relative to the time of the first reset event. We denote the time between two successive reset events as the corresponding time step. As the orbits are periodic by definition, we treat the event indices cyclically, so that the event following the last reset event of one period is the first reset event of the next. Also note that the description of a physical periodic orbit in terms of a reset mapping is unique only up to cyclic permutations of the reset sequence.
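The statement that orbits are defined only up to cyclic permutations can be made operational with a few lines of bookkeeping; representing an orbit as a plain tuple of neuron indices is our own, purely illustrative choice.

def canonical_orbit(reset_sequence):
    # Choose the lexicographically smallest cyclic rotation as the unique
    # representative of a periodic reset sequence.
    s = tuple(reset_sequence)
    return min(s[i:] + s[:i] for i in range(len(s)))

# (2, 0, 1) and (0, 1, 2) describe the same physical periodic orbit:
assert canonical_orbit((2, 0, 1)) == canonical_orbit((0, 1, 2)) == (0, 1, 2)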
The approach for finding system parameters which allow for a specific orbit defined by a given reset mapping is as follows. First, we derive a set of periodicity conditions, each one guaranteeing the periodicity of a single reset event. Second, we have to ensure that the assumed spike ordering is consistent insofar as all time steps are positive. Third, we have to guarantee that no silent neuron reaches threshold when the system is in the considered orbit, leading to another set of inequalities. Finally, we have to ensure that the spiking neurons reach the threshold only at their respective reset times, leading to additional inequalities.
In Secs. V A–V C, we demonstrate the approach exemplarily for period-one dynamics, that is, orbits where each of the spiking neurons resets exactly once, so that the number of reset events is equal to the number of winners k. In doing so, we illustrate that violations of the inequalities typically correspond to transitions between different dynamical regions or numbers of winners k. For a discussion of how to treat arbitrary orbits, we refer to Appendix A.
A. Analytical conditions for period-one dynamics
In the following, we provide an analytical description of period-one orbits with fixed k and derive expressions for the transitions to other dynamical regimes. In doing so, we explore the microscopic mechanisms mediating these transitions and demonstrate how to characterize parameter choices that lead to a desired computational functionality.
1. Periodicity of reset events
2. Constraint (1): Positive time steps
3. Constraint (2): Only k neurons reach threshold
Note that Eq. (22) closely resembles the periodicity condition, Eq. (18), but without a reset before the first (or after the last) event. Also note that, because a reset sets the phase back to zero, the phases at later times are always larger than they would be with a reset at the considered time, given a fixed initial phase and free frequency. This implies that if a neuron with a given free frequency is reset in a given orbit, all other neurons with a larger or equal free frequency must also reset in that orbit, as otherwise condition (20) would be violated for the faster neuron at the time where the slower neuron spikes. Indeed, this is a central property of all self-consistent periodic orbits for the considered system type: the spiking neurons in a given orbit are always the k neurons with the largest input signals and therefore the largest free frequencies. This also implies that if two neurons have the same free frequency, it is not possible to have one of them spike and the other one remain silent, regardless of the coupling strength, restricting the possible options for the number of winners k.
4. Constraint (3): Neurons reach threshold only at correct time
B. Period-one dynamics for linear neuron potential
1. Periodicity of reset events
2. Constraint (1): Positive time steps
3. Constraint (2): Only k neurons reach threshold
Note that the critical value of the coupling strength does not depend on the order in which the neurons spike in the considered orbit, so that below this value no period-one dynamics with the given k are consistent, regardless of the order of spiking neurons. In fact, non-period-one orbits with longer reset sequences but the same k are also not possible below the critical value, as these require even higher coupling strengths than period-one dynamics with the same k, compare Fig. 2. As this typically holds (see also the next section), the critical value usually is the lower boundary of the coupling strength for a fixed number of winners k, regardless of the orbit type, see Fig. 3.
What happens on a mechanistic level if one starts with a consistent orbit satisfying condition (37) and continuously decreases the coupling strength? According to Eq. (39), the maximum value of the phase of the fastest silent neuron, the (k+1)-fastest neuron, continuously increases within the orbit and, formally, at the critical coupling strength, reaches the threshold just as the k-fastest neuron spikes. However, as the reset pattern does not account for a reset of the (k+1)-fastest neuron, the periodic orbit becomes inconsistent at this point, and a discontinuous change to an orbit with k+1 spiking neurons, in which this neuron also spikes, occurs. Typically, the coupling strength is then still too high for period-one dynamics with the new number of winners k+1, so that we observe a more complex orbit with a longer reset sequence, in which faster neurons spike multiple times, while two or more slower neurons take turns. For a typical example, see Fig. 3.
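The critical coupling strength described here can also be bracketed numerically, without the analytical expressions, by combining the simulate_kwta() sketch from Sec. II with a bisection. This assumes, as described in the text, that decreasing the coupling strength only ever increases the number of winners over the bracketed interval; the function below is a numerical stand-in for illustration, not the analytical result.

def critical_coupling(I, k_target, eps_lo=0.01, eps_hi=0.95, tol=1e-4):
    # Bisection for the smallest coupling strength that still yields
    # at most k_target winners in the simulated network.
    while eps_hi - eps_lo > tol:
        eps_mid = 0.5 * (eps_lo + eps_hi)
        if len(simulate_kwta(I, eps_mid)) <= k_target:
            eps_hi = eps_mid        # coupling strong enough: try a weaker one
        else:
            eps_lo = eps_mid        # too many winners: need stronger coupling
    return eps_hi

print(critical_coupling(I=[1.10, 1.15, 1.20, 1.25, 1.30], k_target=2))

Comparing such numerical brackets with the analytical boundaries derived in this section provides a simple consistency check.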
4. Constraint (3): Neurons reach threshold only at correct time
As we are interested in values of the coupling strength for which condition (45) is satisfied for all tuples, for a given number of winners k, given input signals, and a given spiking order, we have to check condition (45) only for the tuple that maximizes the phase appearing in the condition.
Which ordering of neurons represents the most stable period-one orbit fulfilling condition (45), that is, gives the smallest maximum phase? Considering Eq. (44) again, we note that the phase in question is larger for a larger number of intermediate reset events as well as for larger ratios of the free frequencies involved. We find the most stable reset pattern to be the one in which the spiking neurons reset in ascending order of their free frequencies and the cycle then starts again with the slowest spiking neuron (standard ordering), see Fig. 3. This is because for the ascending patterns, the smaller the number of intermediate reset events becomes, the larger the frequency ratio becomes, such that both contributions compete. Hence, in order to identify values of the coupling strength where any type of period-one dynamics with given k is possible, we consider only orbits with ascending free frequencies.
In Appendix C, we show that, for any period-one orbit with standard ascending ordering, if Eq. (45) is satisfied for the reset event immediately preceding a neuron's proper reset, it also holds for all earlier events, so that we have to consider only this case when evaluating condition (45). The interpretation is that the first violation of condition (45) happens in terms of some neuron reaching threshold exactly one event “too early.”
What happens at the point where condition (46) is violated? For coupling strengths beyond this point, the phase of at least one neuron reaches the threshold at a time which does not correspond to its proper reset event as defined by the reset mapping. Depending on the input configuration and system details, either a more complicated orbit with a longer reset sequence, but the same number of winners k, occurs, or the system's dynamics changes directly to period-one dynamics with a new number of winners. In the settings considered here, we usually observe the first case. For a typical example, consider Fig. 3.
5. Combining all constraints
Figure 4 illustrates the identification of parameter regions with period-one dynamics for linear neurons with equally spaced input signals, for different choices of the input spacing parameter. Figure 4(a) shows both simulation results and the derived boundaries for period-one dynamics for different numbers of winners k. Indeed, the analytical boundaries are consistent with the simulation results. Figure 4(b) focuses on a smaller parameter region, illustrating the transition between different k. While the lower boundary of the coupling strength for period-one dynamics with given k (solid black line) also corresponds to a transition to a larger number of winners, the upper boundary of the coupling strength (dashed black line) leads to more complicated orbits with a longer reset sequence, but with the same number of winners. For , we find throughout.
We point out that for linear neuron dynamics a closed-form solution for the time steps is possible not only for period-one dynamics, but also for arbitrarily complex dynamics with longer reset sequences, see Appendix B.
C. Period-one dynamics for nonlinear neuron potential
For nonlinear free (uncoupled) neuron dynamics, or, more specifically, free neuron models which lead to nonlinear transfer functions, the constraints can generally not be solved in closed form. However, the corresponding equations can easily be solved for the critical coupling strengths using standard root-finding methods. As such nonlinear neuron dynamics are commonly used for modeling biological systems, as well as in engineering applications, we briefly describe the procedure, which treats each set of constraints (1), (2), and (3) [Eqs. (19), (20), and (23)] separately.
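As a minimal sketch of this procedure, the constraint in question can be wrapped into a scalar function of the coupling strength and handed to a standard bracketing root finder. The placeholder constraint below is purely illustrative, since the concrete expressions depend on the chosen nonlinear neuron model and are given by Eqs. (19), (20), and (23).

from scipy.optimize import brentq

def solve_constraint_for_eps(constraint, eps_lo=0.01, eps_hi=0.95):
    # Find the coupling strength at which the given consistency condition
    # changes sign, e.g., "maximum phase of the fastest silent neuron minus
    # the threshold" as a function of the coupling strength.
    return brentq(constraint, eps_lo, eps_hi)

# Placeholder constraint with a single sign change, to show the call pattern:
print(solve_constraint_for_eps(lambda eps: 0.8 - 2.0 * eps))  # -> 0.4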
D. Analytical characterization of more complex dynamics
Depending on the distribution of the input signals, the range of choices of the coupling strength which lead to period-one dynamics might be significantly smaller than for equally spaced inputs. Hence, it can be advantageous to also consider more complicated orbits with longer reset sequences for computational applications. The analytical description of period-one orbits and the derivation of consistency constraints as demonstrated in Sec. V A can be generalized in a straightforward way to arbitrary reset patterns. Furthermore, for linear neuron models, a closed-form solution for the time intervals is possible also for such orbits. While for the mathematical description we refer to the Appendix, here we briefly illustrate results for a specific class of more complex orbits in which the fastest spiking neurons reset twice per period, while the two slowest spiking neurons reset only once, see Fig. 6(a). In the considered system class, these represent one of the most common types of more complex orbits.
In order to find all choices of the coupling strength which lead to dynamics with this reset sequence length for a given k, in principle we have to check all permutations of the pattern shown in Fig. 6(a) for consistency. However, similar to the period-one case, we find that patterns in which the spiking neurons fire in ascending order of their free frequencies are the most stable ones, so that only their consistency has to be checked explicitly. Figure 6 illustrates the analytical prediction of dynamics for the same input configurations as in Figs. 5(d)–5(i). Apparently, while unevenly or closely spaced input signals might widen the transition stripes and reduce the range of coupling strengths that correspond to period-one dynamics, an effective re-calibration is still possible by also considering dynamics with longer reset sequences. In the same manner, even more complicated dynamics can be explicitly identified, potentially opening up the whole parameter space for computational uses.
VI. CONCLUSION
The question of how spiking neural networks can compute in resilient and reconfigurable ways constitutes a general open problem. Here, we have explored a recent implementation of k-WTA computations, basic computational tasks that can be assembled to perform universal forms of computation, and studied their capabilities for performing resilient and reconfigurable computations. Transferring the standard phase description for oscillatory neurons to multiplicative interactions, we derived analytical conditions for the parameter regions in which specific basic periodic spike sequences exist. Our results identify regions in the parameter space of coupling strengths and signal strength differences where the system can perform k-WTA tasks for different given k. Furthermore, we illuminate the mechanisms underlying the transitions between regions with different dynamics and thus different computational options. Going beyond basic periodic spike sequences, we also provide a general description of arbitrary periodic spike sequences and demonstrate how exploiting more complicated orbits may allow for a wider choice of coupling strengths, potentially making the computation more resilient. While in this work we illustrate our results for (approximately) equally spaced input signals with known input differences, the proposed method works for any type of input configuration. In particular, future work should address the generalization to input vectors drawn from specific random distributions, in order to clarify how robust computations can be ensured even when only moderate knowledge about the range of expected input signals is available.
ACKNOWLEDGMENTS
This research was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Grant No. 419424741 and co-financed with tax funds on the basis of the budget adopted by the Saxon State Parliament through a TG70 grant (No. 10040011) (TransparNet).
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Georg Börner: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Resources (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Fabio Schittler Neves: Conceptualization (equal); Resources (equal); Writing – original draft (equal); Writing – review & editing (equal). Marc Timme: Conceptualization (lead); Funding acquisition (lead); Resources (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal).
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
APPENDIX A: ANALYTICAL DESCRIPTION OF ARBITRARY ORBITS
While in the main manuscript we focus on period-one dynamics, in which each neuron resets at most once per orbit period, in the following we extend our description to arbitrary reset patterns. Again we start by stating periodicity conditions and afterwards introduce three different types of additional constraints given in terms of inequalities.