Nowadays, experimental techniques allow scientists to have access to large amounts of data. In order to obtain reliable information from the complex systems that produce these data, appropriate analysis tools are needed. The Kalman filter is a frequently used technique to infer, assuming a model of the system, the parameters of the model from uncertain observations. A well-known implementation of the Kalman filter, the unscented Kalman filter (UKF), was recently shown to be able to infer the connectivity of a set of coupled chaotic oscillators. In this work, we test whether the UKF can also reconstruct the connectivity of small groups of coupled neurons when their links are either electrical or chemical synapses. In particular, we consider Izhikevich neurons and aim to infer which neurons influence each other, considering simulated spike trains as the experimental observations used by the UKF. First, we verify that the UKF can recover the parameters of a single neuron, even when the parameters vary in time. Second, we analyze small neural ensembles and demonstrate that the UKF allows inferring the connectivity between the neurons, even for heterogeneous, directed, and temporally evolving networks. Our results show that time-dependent parameter and coupling estimation is possible in this nonlinearly coupled system.

The Kalman filter is a popular technique that can be employed to infer the parameters of a model given uncertain observations, and it has found applications in diverse fields. In the field of neuroscience, it has been used, for example, to estimate the parameters of neural models and for real-time decoding of brain signals for brain–machine interfaces. However, the neural models that have been considered contain a large number of parameters, which makes a systematic exploration of the parameter space unfeasible. Here, we study a neural model, the Izhikevich model (IM), which realistically reproduces many neural states, even though it is computationally low-cost. Having a small number of parameters and, at the same time, showing very rich dynamical regimes, the Izhikevich model is an ideal candidate for a systematic exploration of the parameter space and the study of neurons coupled with different topologies. We analyze the suitability of the Kalman filter to estimate the model’s parameters and we discuss its main limitations.

One of the main challenges that neuroscience has faced for a long time is the determination of brain topology, which is morphologically diverse and complex. In addition, the elements that form the brain network, the neurons, are also diverse and complex. Neurons show reproducible nonlinear responses to stochastic stimuli.1 Hence, they can be modeled as stochastic nonlinear dynamical systems.2 

Although much progress has been made on the relationship between topology and dynamics in the brain, scientists are still far from a good understanding.3,4 Mathematical models of the whole brain, or of just a tiny fraction of its billions of neurons, as well as information-based data analysis techniques, are powerful tools for shedding light on this relationship.5 

However, a realistic estimation of the models’ states and parameters is a very difficult challenge, and different approaches based on control theory have been developed.6 A well-known method is the Kalman filter.7–9 

The Kalman filter allows inferring optimal parameters of a model given uncertain observations, balancing the effects of measurement noise, disturbances, and model uncertainties and has found applications in many fields of science and technology.10 In neuroscience, the Kalman filter has been used, for example, for decoding brain signals for brain-machine interfaces.11–13 It has also been used to estimate the parameters of neural models.14–19 However, the models that have been considered, such as the Morris–Lecar or the Hodgkin–Huxley, contain a large number of parameters that make a systematic exploration of the parameter space unfeasible. Here, we study the Izhikevich model20 because it reproduces many important properties of biological neurons and, at the same time, has a small number of parameters and is computationally low-cost.21 Therefore, the Izhikevich model is an ideal candidate for a systematic exploration of the parameter space allowing a study of small ensembles of coupled neurons.

We analyze under which conditions a nonlinear version of the Kalman filter, the unscented Kalman filter (UKF),22,23 provides a good estimation of the IM parameters, and we discuss its main limitations. We show that the UKF is able to recover the parameters of an isolated neuron and the external current driving its activity, even when the input current is time-dependent. Then, we study small networks with different topologies, with both electrical and chemical couplings, and show that the UKF is able to recover the topology of the network from observations of the dynamical variables, assuming that the coupling strengths, electrical or chemical, and all the internal parameters are known.

The Izhikevich model (IM) was introduced by Izhikevich20 as an alternative to more realistic but computationally expensive neuron models.24 Despite its simplicity, it can be used to model a broad variety of neuron types21 and dynamical regimes. Here, we will focus on single Izhikevich neurons in the chaotic regime—that is, neurons for which the spiking dynamics is irregular, aperiodic—and small networks of chaotic neurons linked by electrical or chemical couplings.

The state of an Izhikevich neuron i is fully specified by two state variables: x_i represents the neuron's membrane potential, and y_i represents the membrane recovery variable, which accounts for the activation of the ionic currents.

Let [x_1, y_1, \dots, x_i, y_i, \dots]^T be the state vector of the neurons; the equations governing the system are given by

\dot{x}_i = 0.04 x_i^2 + 5 x_i + 140 - y_i + I + E_i + C_i + \sigma_Z \xi_i^x,
\dot{y}_i = a (b x_i - y_i) + \sigma_Z \xi_i^y,
(1)

with the after-spike reset condition

if x_i > 30, then x_i \leftarrow c and y_i \leftarrow y_i + d.
(2)

Here, a is a small parameter representing the slow time scale of y_i, b is the coupling strength between the two state variables, and I models the external currents. All quantities, including x, y, and time, are dimensionless. The parameters a, b, c, and d can be fitted to obtain a specific firing pattern of the neuron. The last term in Eq. (1) represents random fluctuations, which we refer to as dynamic or process noise: \xi_i^x and \xi_i^y are Gaussian white noises with zero mean and unit variance, and \sigma_Z is the noise strength, taken for simplicity to be the same for x and y. For a system of N neurons, the dynamical noise can be thought of as a random 2N-dimensional vector with zero mean and covariance matrix \bar{Q}_Z = \sigma_Z^2 I, where I is the identity matrix.
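As a concrete illustration, the single-neuron dynamics of Eqs. (1) and (2) can be integrated with a simple Euler–Maruyama scheme. This is only a sketch: the paper uses a fourth-order Runge–Kutta integrator, and the value b = 2 below is an assumption, taken from the standard chaotic-regime parameter set of Ref. 20.

```python
import numpy as np

def simulate_neuron(T=2000.0, dt=0.01, a=0.2, b=2.0, c=-56.0, d=-16.0,
                    I=-99.0, sigma_z=0.025, seed=1):
    """Euler-Maruyama integration of Eq. (1) for an isolated neuron
    (E_i = C_i = 0), with the after-spike reset of Eq. (2).
    b = 2 is assumed here (standard chaotic regime of Ref. 20)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    y = np.empty(n)
    x[0], y[0] = -56.25, -112.5            # near the uncoupled fixed point
    spike_times = []
    for k in range(n - 1):
        xi_x, xi_y = rng.standard_normal(2)
        x[k + 1] = (x[k] + dt * (0.04 * x[k]**2 + 5 * x[k] + 140 - y[k] + I)
                    + np.sqrt(dt) * sigma_z * xi_x)
        y[k + 1] = y[k] + dt * a * (b * x[k] - y[k]) + np.sqrt(dt) * sigma_z * xi_y
        if x[k + 1] > 30:                  # after-spike reset, Eq. (2)
            spike_times.append((k + 1) * dt)
            x[k + 1] = c
            y[k + 1] += d
    return x, y, spike_times
```

The reset keeps the quadratic term from diverging, so the trajectory stays bounded while spiking irregularly, as in Fig. 1(a).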

The electrical coupling between neurons can be described with different levels of detail, representing various degrees of physiological realism.25 Here, we consider the simplest form, linear diffusive coupling, for which E_i is given by

E_i = g_e \sum_{j=1}^{N} A^e_{ij} (x_j - x_i),
(3)

where g_e is the coupling conductance and A^e_{ij} are the coefficients of the adjacency matrix: A^e_{ij} = 1 whenever neuron i is connected to neuron j, and A^e_{ij} = 0 otherwise.
The coupling C_i comprises the inputs delivered through chemical synapses to neuron i from all other neurons in the network. It is given by26

C_i = g_c (x_i - \mu_s) \sum_{j=1}^{N} A^c_{ij} \zeta(x_j).
(4)

Here, g_c is the synaptic coupling strength, \mu_s is the reversal potential, and the sigmoid function \zeta(x) is defined as

\zeta(x_j) = [1 + \exp(-\epsilon (x_j - \theta))]^{-1},
(5)

where \epsilon controls the slope of the sigmoid and \theta is the synaptic firing threshold. This function represents the activation of the postsynaptic current when a presynaptic neuron fires an action potential, that is, when x becomes larger than \theta. Hence, a neuron i receives chemical synaptic input from a neuron j only when x_j exceeds \theta. The prefactor (x_i - \mu_s) in Eq. (4) controls whether the synapses are inhibitory or excitatory. In particular, we choose \mu_s such that (x_i - \mu_s) < 0, that is, inhibitory chemical synapses. The numerical values of all parameters are given in Table I.27 
TABLE I.

Dimensionless parameters20 used in the simulations of the IM.

a b c d I (Is) ge gc μs ϵ θ α ω σZ σν
0.2  −56  −16  −99  (0, 0.1)  (0, 0.05)  35  0.15  0.025  0.15 

Throughout this study, we use symmetric A e matrices and asymmetric A c matrices, because the electrical coupling is symmetric, but the chemical coupling is directional. Here, we only focus on a proof-of-concept demonstration of the UKF’s ability to infer coupling topology; in the future, we plan to study the more challenging (and realistic) scenario of heterogeneous, excitatory, or inhibitory chemical synapses.
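For concreteness, the two coupling terms and the synaptic activation of Eqs. (3)-(5) can be written compactly in vector form. This is a sketch; the values of \mu_s, \epsilon, and \theta passed in the usage below are placeholders, not necessarily those of Table I.

```python
import numpy as np

def zeta(x, eps, theta):
    """Sigmoidal synaptic activation, Eq. (5)."""
    return 1.0 / (1.0 + np.exp(-eps * (x - theta)))

def electrical_coupling(x, A_e, g_e):
    """E_i of Eq. (3): linear diffusive coupling; A_e is symmetric."""
    return g_e * (A_e @ x - A_e.sum(axis=1) * x)

def chemical_coupling(x, A_c, g_c, mu_s, eps, theta):
    """C_i of Eq. (4): inhibitory when (x_i - mu_s) < 0."""
    return g_c * (x - mu_s) * (A_c @ zeta(x, eps, theta))
```

A convenient sanity check: since A^e is symmetric, the diffusive currents cancel pairwise, so the entries of `electrical_coupling` sum to zero.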

The Kalman filter makes a prediction for the future state of a system, obtained by evolving a dynamical model, and then corrects it using information coming from experimental data. Even though it was originally developed for linear systems, it was soon extended to include nonlinearities, and different nonlinear extensions have been created. The UKF is one nonlinear version of the filter that offers good performance at a moderate computational cost.

Following the notation in Forero-Ortiz et al.,28 we consider the extended state \bar{u}, the vector formed by the state variables x_i and y_i of the N neurons (i = 1, \dots, N) and all the parameters we want to retrieve. The process model employed in the UKF is \bar{u}_{k+1} = \bar{a}(\bar{u}_k), where k is the timestep index; \bar{a} is given by the deterministic part of Eq. (1) for the state variables and by the identity operator for the parameters, which are assumed constant. In the UKF algorithm, the estimate of \bar{u}_{k+1} predicted by the stochastic dynamical model is corrected by an experimental data point. These experimental data carry some uncertainty resulting from the measurement process, represented by a measurement function. Our measurement function selects the state variables from the extended vector \bar{u}_k and perturbs them with measurement noise of standard deviation \sigma_\nu: x_i \to x_i + \sigma_\nu \chi_i^x and y_i \to y_i + \sigma_\nu \chi_i^y, where \chi represents Gaussian white noise. Thus, the covariance matrix of the measurement noise is \bar{Q}_\nu = \sigma_\nu^2 I. The covariance of the estimated state, P = \sigma_P I, is evolved by the UKF algorithm from the initial values given in Table II.

TABLE II.

Initial guesses of the parameters used by the UKF.

a: (0.01, 0.9); b: (0.01, 5); c: (−70, −40); d: (−25, −5); I (Is): (−94, −104); ge: (0, 0.1); gc: (0, 0.05); α: (1, 5); ω: (0.1, 1); σP: 0.01

To generate the synthetic data that we use as experimental observations, we numerically solve Eq. (1) with a fourth-order Runge–Kutta method, an integration step dt = 0.01, and the parameters reported in Table I, keeping the measurements at a sampling rate equal to the integration step. With these parameters, single (uncoupled) neurons display chaotic dynamics,27 as depicted in Fig. 1(a). Initial conditions for the simulations were drawn from a normal distribution with standard deviation 1, centered at a fixed point of Eq. (1) in the uncoupled case, (−56.25, −112.5).

FIG. 1.

(a) Time evolution of the variables x ( t ) (upper curve) and y ( t ) (lower curve) of an isolated Izhikevich neuron, simulated with Eq. (1) with parameters given in Table I. (b) Response to an external modulated current I ( t ) = I s + α sin ( ω t ). In each panel, the current input is represented by the dashed line. The values of the parameters are given in Table I.


Throughout the study, we employ the UKF implemented by the Python package FilterPy.29 The confidence in the process model (\bar{Q}_Z) and in the measurements (\bar{Q}_\nu) is kept constant (see Table I). An initial transient of 50 000 timesteps was discarded in all runs.

The UKF requires an initial guess for the parameters that we want to estimate. To test the robustness of the UKF, we consider different initial guesses for each run, which are selected from a uniform distribution in the ranges given in Table II.

To quantify the performance of the UKF in recovering the coupling matrix G_e^{KF} = g_e A^{KF}, we use the Euclidean distance between the original and the recovered matrices,

D(G, G^{KF}) = \sqrt{ \sum_{i,j} \left( G_{ij} - G^{KF}_{ij} \right)^2 }.
(6)

We quantify the performance using the full coupling matrix G because we want to test not only whether the UKF can reconstruct the connectivity, but also whether it can recover the correct coupling strength without being informed that the strength is the same for all links. We chose the Euclidean distance because it is a simple, straightforward measure that compares two graphs of weighted links with a single figure of merit.
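Equation (6) is a one-liner in practice (a sketch):

```python
import numpy as np

def coupling_distance(G, G_kf):
    """Euclidean (Frobenius) distance between the true and the
    estimated weighted coupling matrices, Eq. (6)."""
    return float(np.sqrt(np.sum((G - G_kf) ** 2)))
```

For example, a perfect reconstruction gives D = 0, and two 2x2 matrices differing by the identity give D = sqrt(2).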

First, we illustrate the effectiveness of the UKF in estimating the parameters of a single Izhikevich neuron. The parameters that we attempt to estimate are a, b, c, d, and I. Note that the parameters c and d only appear in the resetting dynamics; therefore, the UKF can only update them when a spike occurs. Moreover, the equation for \dot{y} contains the product ab, which can increase the uncertainty of the estimation: for example, an underestimation of a can compensate for an overestimation of b. To avoid this problem, we estimate ab and a independently.

To recover the unknown parameters, we consider 100 simulated time series as input, each with a different initial parameter guess drawn uniformly from the intervals reported in Table II. These intervals were chosen because within them the spiking of the neuron is chaotic, which is a piece of information that can be inferred from the spike sequence. Results are shown in Fig. 2. In all cases, the real value of the parameter falls within the range given by the standard deviation. Since the estimates of c and d are only updated when the neuron spikes, the duration of the simulated time series required to obtain a reliable estimation is larger for these parameters than for the others.

FIG. 2.

Parameter estimation for a single neuron as a function of the simulation time. The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles, and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table I.


We point out that using the UKF to estimate c and d is actually unnecessary, because these parameters can be estimated directly by checking the values of x and y right after a spike.
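This direct estimate can be sketched as follows: simulate a noise-free neuron, locate the reset events as large downward jumps of x, and read off c and d. The jump threshold of 50, the Euler integration, and b = 2 (standard chaotic regime of Ref. 20) are illustrative assumptions.

```python
import numpy as np

def simulate(T=1500.0, dt=0.01, a=0.2, b=2.0, c=-56.0, d=-16.0, I=-99.0):
    """Noise-free Euler integration of Eqs. (1) and (2)."""
    n = int(T / dt)
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = -56.25, -112.5
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (0.04 * x[k]**2 + 5 * x[k] + 140 - y[k] + I)
        y[k + 1] = y[k] + dt * a * (b * x[k] - y[k])
        if x[k + 1] > 30:                  # after-spike reset
            x[k + 1], y[k + 1] = c, y[k + 1] + d
    return x, y

def estimate_c_d(x, y, jump=50.0):
    """c is the value of x right after a reset; d is the jump in y
    (up to an O(dt) integration bias)."""
    resets = np.where(np.diff(x) < -jump)[0]
    return np.median(x[resets + 1]), np.median(y[resets + 1] - y[resets])

x, y = simulate()
c_hat, d_hat = estimate_c_d(x, y)
```

In the noise-free case c is recovered exactly, while d carries a small bias of order dt from the concurrent continuous evolution of y.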

Now, we test the estimation of time-varying parameters. Specifically, we consider a sinusoidal external current, I(t) = I_s + \alpha \sin(\omega t), and estimate a, b, and I(t) (wrongly assuming that the current is constant). The effect of such a current on the neuron's dynamics is shown in Fig. 1(b), where we see that bursts of spikes are followed by periods of subthreshold oscillations. Since at constant I the Izhikevich model displays a great variety of dynamical behaviors, including bursting,20 inspection of the time series does not provide evidence of the presence of a sinusoidal input current.

The results of the parameter estimation are shown in Fig. 3. The recovered values of a and b are comparable to those obtained in the previous parameter estimation (see Fig. 2). The estimated value of I oscillates with a frequency equal to ω, suggesting that I is not constant.

FIG. 3.

Parameter estimation for a single neuron with a time-dependent external current, which is modeled as a constant input. The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles, and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table I.


Next, we substitute the expression of I(t) into the model, Eq. (1), and separately estimate I_s, \alpha, and \omega. In this case, we also need to include time as an additional dimension of the extended state vector, with dynamic equation \dot{t} = 1. The results of this approach are shown in Fig. 4. The UKF can estimate the correct parameters of the oscillation in the majority of cases. However, large departures from the correct values can be observed, which could be due to the fact that the model with constant I can produce similar output dynamics.

FIG. 4.

As Fig. 3, but explicitly modeling the input current as I = I s + α sin ( ω t ). The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table I.


We now consider small networks of Izhikevich neurons and investigate the capacity of the UKF to recover the adjacency matrix A, assuming that the coupling strengths, g_e and g_c, and all the internal parameters are known. Of course, this is not possible in experiments; we use these assumptions as a first step for testing the neural-network reconstruction problem with the UKF approach. If, even under these assumptions, the network cannot be inferred, we can conclude that the UKF approach is not useful. If, on the other hand, we succeed in reconstructing the network, the next step will be to test the UKF approach with less information, for instance, assuming a different neuron model, unknown internal parameters, or unknown coupling strengths. We run the UKF algorithm considering each element of A as an additional dimension of the extended vector \bar{u}.
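The extended state for the network problem then stacks the 2N dynamical variables and the N x N adjacency coefficients. The sketch below is for electrical coupling only; the state layout, the Euler step, and b = 2 (chaotic regime of Ref. 20) are our illustrative assumptions.

```python
import numpy as np

N, DT = 4, 0.01
A_P, B_P, I_EXT, G_E = 0.2, 2.0, -99.0, 0.05   # b = 2 assumed (chaotic regime)

def fx_network(u, dt=DT):
    """Process model: Euler step of Eq. (1) with electrical coupling for
    the 2N state variables; identity for the N*N adjacency entries."""
    x, y = u[:N], u[N:2 * N]
    A = u[2 * N:].reshape(N, N)
    E = G_E * (A @ x - A.sum(axis=1) * x)      # Eq. (3)
    x_new = x + dt * (0.04 * x**2 + 5 * x + 140 - y + I_EXT + E)
    y_new = y + dt * A_P * (B_P * x - y)
    return np.concatenate([x_new, y_new, A.ravel()])
```

The UKF then updates the adjacency entries only through their correlation with the observed dynamics, since the process model itself leaves them unchanged.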

We consider networks with N = 4 neurons, which can be seen as building blocks of bigger networks. However, we must keep in mind that complex systems usually display emergent collective behavior when the number of elements is large enough. Therefore, while the UKF algorithm may succeed in reconstructing the topology of a small network, the collective behavior that may emerge for a sufficiently large number of neurons (and/or the large number of parameters to be inferred) will probably cause it to fail. The study of the role of the network size is thus important, and additional work is planned, which will be reported elsewhere.

The network topologies are shown in the Appendix (Figs. 8 and 9), together with the simulated time evolution of the membrane potentials of all nodes for each topology. The network dynamics differ in their level of synchronization. First, we consider electrical coupling only (g_c = 0), such that the adjacency matrix is symmetric and we only need to determine the upper triangular elements of G_e = g_e A^e, with g_e = 0.05.

FIG. 5.

Evolution of the distance D, which represents the Euclidean distance between the real coupling matrix G_e and the estimated one, G_e^{KF}. N1, N2, N3, N4, and N5 represent different topologies, as shown in Fig. 8. For all topologies, D decreases sharply at first and then saturates below 10^{-2}. The coupling strength is g_e = 0.05 and the dynamics displayed by the networks are shown in Fig. 8.

FIG. 6.

Evolution of the distance D for (a) electrical coupling and (b) chemical coupling, with coupling strengths g_e = 0.1 and g_c = 0.05. The distance between the original and the estimated adjacency matrices, D(G, G^{KF}), decreases sharply with simulation time, saturating below 10^{-2} for all network topologies. The dynamics displayed by the networks are shown in Fig. 9.

FIG. 7.

Evolution of the distance D between the adjacency matrix and the estimated one in the case of time-dependent coupling. D is depicted as a function of time, for different coupling strengths. The vertical dashed line represents the instant at which the coupling is turned on, and the horizontal one marks zero.

FIG. 8.

The five network topologies used when considering only electrical coupling [Eqs. (1) and (3)]. The links between nodes are represented by black lines. For each case, we show the network’s dynamics with g e = 0.05. The insets show the Kuramoto order parameter R and the synchronization error E r r.

FIG. 9.

The five network topologies used when considering both electrical and chemical coupling [Eqs. (1), (3), and (4)], the undirected links are electrical, and the directed links are chemical. The left column displays the network’s dynamics with g e = 0.10 and g c = 0.05. The insets show the Kuramoto order parameter R and the synchronization error E r r.


The results for the different topologies are displayed in Fig. 5, where we see that the UKF gives an excellent estimation of G e, as the Euclidean distance D ( G e , G e K F ) approaches 0.

To study the effect of chemical synaptic coupling on the UKF estimation, we add directed links between some nodes of the formerly symmetric networks, as shown in the Appendix (Fig. 9). We estimate two adjacency matrices, one encoding the electrical coupling, G_e = g_e A^e with A^e = (A^e)^T, and the other encoding the chemical coupling, G_c = g_c A^c with A^c \neq (A^c)^T. We choose g_e = 0.1 and g_c = 0.05. Note that we increase g_e compared to the previous case to test the robustness of the UKF against synchronized states. Since we use inhibitory synapses, the firing rate decreases slightly. In the limit of total synchronization, the input through the electrical coupling, E_i, goes to zero, while C_i is exactly the same for each element of the network.

We see in Figs. 6(a) and 6(b) that even with two coupling schemes, the chemical one being nonlinear, the UKF can estimate the correct coupling matrices. All networks are heterogeneous in the sense that the number of connections is not the same for the different neurons. As pointed out by Forero-Ortiz et al.,28 the UKF is robust against synchronization, which is confirmed here.

Here, we presented reconstruction results in the case of inhibitory synapses; however, we checked that the UKF provides similar results also for excitatory synapses and a mix of excitatory and inhibitory synapses, provided it knows which synapses are excitatory and which are inhibitory.

We highlight that in all cases the Euclidean distance D(G, G^{KF}) saturates below 10^{-2}. Furthermore, to verify that all the links were correctly estimated, we classified the performance of the UKF using the receiver operating characteristic (ROC) curve.30 If the UKF recovers the right connectivity, the area under the ROC curve (AUC)30 equals 1. For all cases studied, we obtained an AUC > 0.99, implying a practically perfect reconstruction of the underlying topologies; that is, the UKF predicts a link between two neurons i and j only if A_{ij} = 1. We believe that the UKF is robust against noise as long as the noise can be seen as a small perturbation of the system and the dynamics is not driven by it.
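The AUC can be computed directly from the estimated weights with the rank (Mann-Whitney) formula, using the off-diagonal entries of the estimated matrix as scores and the true adjacency as labels (a sketch):

```python
import numpy as np

def auc_links(A_true, G_est):
    """Area under the ROC curve for link detection; diagonal excluded."""
    mask = ~np.eye(A_true.shape[0], dtype=bool)
    labels = A_true[mask] > 0
    scores = G_est[mask]
    pos, neg = scores[labels], scores[~labels]
    # fraction of (existing-link, absent-link) pairs ranked correctly,
    # counting ties as half
    correct = (pos[:, None] > neg[None, :]).mean()
    ties = 0.5 * (pos[:, None] == neg[None, :]).mean()
    return correct + ties
```

An AUC of 1 means every existing link receives a larger estimated weight than every absent link, so any threshold between the two groups separates them perfectly.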

Finally, we consider temporal networks, in which G^e_{ij} = g_e A_{ij} varies with time. This is the case in many applications of network theory,31 and in neuroscience it is especially important since it can be linked to plasticity.32 

We model time-varying networks by considering couplings between neurons that switch on at simulation time t = 200. More precisely, three initially uncoupled neurons connect at t = 200 into a linear chain (1 - 2 - 3),

g_e \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \to g_e \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.
(7)
We assume that all the internal parameters are known and we only estimate the network’s topology.

The results are presented in Fig. 7. Before the coupling is switched on, the UKF quickly infers the absence of coupling (D \approx 0). After the coupling is switched on, D first increases sharply and then decreases steadily for all values of g_e. This means that the UKF can detect the emergence of coupling and estimate the G_e matrix correctly. However, the estimation after the change in the network takes more time than the initial estimation of the null adjacency matrix. This is because the covariance of the matrix coefficients decreases before the coupling is switched on, reflecting high confidence in the inferred matrix. When the matrix changes, the filter has to adjust to the new state, and the low covariance makes the convergence slow. Nevertheless, the filter is eventually able to recover the right network structure. Different initial model or state covariances are expected to affect the convergence time, both before and after the coupling is turned on. Higher covariances result in higher variability of the predictions, which captures parameter changes more efficiently; with smaller covariances, the predictions are less prone to change and adapt more slowly to new values. The right choice of this hyperparameter yields a responsive filter with sufficiently stable inferred parameters.

We studied the capability of the UKF to recover the parameters of a single neuron and of small neural ensembles modeled with the Izhikevich model. We simulated the equations governing the system dynamics and used the simulated time series as experimental observations to feed the UKF algorithm, with confidence regulated by \bar{Q}_\nu; the IM was the process model, with confidence regulated by \bar{Q}_Z.

First, we showed that when the parameters of an isolated neuron are constant in time, the UKF is able to estimate all of them. Second, we studied an isolated neuron with a sinusoidal input current, which displayed bursting spike dynamics. Due to the rich variety of dynamical behaviors of the IM, it is not trivial to identify the cause of the bursting activity. Still, even when modeling the current as a constant, the UKF retrieved the neuron parameters and the average value of the current, and suggested an oscillating current. When the oscillating current was included in the process model, the UKF provided a reasonable estimate of its amplitude (\alpha), mean (I_s), and frequency (\omega).

We have also estimated the connectivity of small networks of Izhikevich neurons with known internal parameters. First, we analyzed the five possible network topologies for four neurons with undirected electrical coupling. Then, we added directed chemical connections to the same networks. The UKF was able to recover the connectivity for all the networks regardless of the synchronization level.

Finally, we addressed the problem of temporal networks by analyzing a network of three electrically coupled neurons, in which the topology changed from no coupling to a chain topology. The UKF was able to identify the change in the network and estimate the connectivity correctly.

The results presented here were obtained considering measurements of both x and y. Beyond that, we conducted a preliminary analysis of the applicability of the UKF when only measurements of the x variable are available. Our results suggest that the UKF is still able to recover the parameters of a single neuron and the network connectivity. However, to obtain good estimates, the UKF hyperparameters had to be carefully tuned, in particular, the standard deviation σ ν and the initial condition for σ P.

Since, in experimental measurements, only short time series with limited temporal resolution can be recorded, further work is needed to clarify the impact of the duration of the time series and of the sampling time. While the results presented here were obtained using every simulated data point (i.e., using the integration step as the sampling time), preliminary studies suggest that the UKF is robust to downsampling up to 1:20, provided the time series is long enough.

Future work should also address larger networks and different types of neurons. In fact, as discussed before, complex systems usually display emergent collective behavior when the number of elements is large enough. The UKF algorithm may therefore succeed in reconstructing the topology of a small network but fail for a large number of neurons, or when there is a large number of unknown parameters. For this reason, further work is planned to test the UKF algorithm when the networks are larger and when the internal and coupling parameters are unknown. Even if the UKF algorithm fails to reconstruct the network exactly, it may still yield information useful for inferring some properties of the real network (e.g., the average degree, the degree distribution, the modularity, etc.).

Finally, it will be interesting to check if the UKF can differentiate between inhibitory and excitatory synapses.

R.P.A. acknowledges financial support from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior–Brasil (CAPES), Finance Code 001. H.A.C. thanks ICTP-SAIFR and FAPESP grant 2021/14335-0. G.T. and C.M. acknowledge the support of the ICREA ACADEMIA program of Generalitat de Catalunya and Ministerio de Ciencia e Innovación, Spain (Project No. PID2021-123994NB-C21).

The authors have no conflicts to disclose.

R. P. Aristides: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Software (equal); Writing – original draft (equal); Writing – review & editing (equal). A. J. Pons: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal). H. A. Cerdeira: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal). C. Masoller: Conceptualization (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal). G. Tirabassi: Conceptualization (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

The network topologies considered when studying electrical coupling between nodes are presented in Fig. 8. The black links represent undirected connections between nodes, so the resulting adjacency matrices are symmetric (A = A^T). The simulated time evolution of the membrane potentials of all nodes for each network topology is also shown in Fig. 8 on the right side, together with two synchronization measures: the Kuramoto order parameter R (Ref. 33) and the synchronization error Err.

To evaluate the Kuramoto order parameter, we assign to each neuron time series a phase ϕ that gains 2π at each spike and grows linearly in between, as defined in Ivanchenko et al. (Ref. 34). The Kuramoto order parameter is given by
R = \frac{\left\langle \left| \sum_{j=1}^{N} e^{i\phi_j(t)} \right| \right\rangle_t}{N},
(A1)
where N is the number of oscillators considered in the measure and the average ⟨·⟩_t is taken over time. For totally synchronized systems, R = 1. For totally unsynchronized systems, R ≈ 0.
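The phase assignment and Eq. (A1) can be sketched as follows (a minimal illustration, assuming linear interpolation of the phase between recorded spike times; the example spike trains are hypothetical).

```python
import numpy as np

def spike_phase(spike_times, t):
    """Phase that gains 2*pi at each spike and grows linearly
    between consecutive spikes."""
    k = np.arange(len(spike_times))          # k-th spike has phase 2*pi*k
    return np.interp(t, spike_times, 2.0 * np.pi * k)

def kuramoto_R(phases):
    """phases: array of shape (N, T).  Eq. (A1): time average of the
    modulus of the mean phasor."""
    z = np.exp(1j * phases).mean(axis=0)     # sum over j, divided by N
    return np.abs(z).mean()                  # average over time

# two identical spike trains -> fully phase synchronized, R = 1
t = np.linspace(0.0, 10.0, 1000)
spikes = np.arange(0.5, 10.0, 1.0)
phases = np.vstack([spike_phase(spikes, t), spike_phase(spikes, t)])
R = kuramoto_R(phases)
```

Note that `np.interp` clamps the phase outside the first and last spike, so very short time series should be trimmed to the interval covered by spikes.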
Likewise, the synchronization error gives us an idea of how synchronized the system is, but it is applied directly to the time series. First, we calculate the average membrane potential x̄ of all oscillators in the network. Then, we compute how much each oscillator deviates from x̄. Thus, the synchronization error is computed as
Err = \frac{\left\langle \sum_{i=1}^{N} \left| x_i(t) - \bar{x}(t) \right| \right\rangle_t}{N}.
(A2)
Hence, Err = 0 in the case of total synchronization, where x_i = x_j ∀ (i, j) ∈ [1, N], while for unsynchronized systems, Err may assume large values.
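Eq. (A2) translates directly into a few lines of array code (a sketch; the example traces are synthetic):

```python
import numpy as np

def sync_error(X):
    """X: membrane potentials, shape (N, T).  Eq. (A2): time average
    of the summed absolute deviations from the mean signal x_bar(t),
    divided by the number of oscillators N."""
    x_bar = X.mean(axis=0)                            # network mean at each t
    return np.abs(X - x_bar).sum(axis=0).mean() / X.shape[0]

# identical traces -> total synchronization, Err ~ 0
X = np.tile(np.sin(np.linspace(0.0, 6.28, 100)), (3, 1))
err = sync_error(X)
```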

When both electrical and chemical coupling between nodes are considered, we use the topologies presented in Fig. 9. The adjacency matrices are not symmetric (A ≠ A^T), and all the networks are heterogeneous, meaning that the nodes have different numbers of connections. The simulated time evolution of the membrane potentials of all nodes for each network topology is also shown in Fig. 9 on the right side.

1. H. L. Bryant and J. P. Segundo, "Spike initiation by transmembrane current: A white-noise analysis," J. Physiol. 260(2), 279–314 (1976).
2. D. Sterratt, B. Graham, A. Gillies, and D. Willshaw, Principles of Computational Modelling in Neuroscience (Cambridge University Press, 2011).
3. D. S. Bassett and O. Sporns, "Network neuroscience," Nat. Neurosci. 20, 353–364 (2017).
4. F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lucas, A. Patania, J.-G. Young, and G. Petri, "Networks beyond pairwise interactions: Structure and dynamics," Phys. Rep. 874, 1–92 (2020).
5. M. Goodfellow, R. G. Andrzejak, C. Masoller, and K. Lehnertz, "What models and tools can contribute to a better understanding of brain activity?," Front. Netw. Physiol. 2, 907995 (2022).
6. J. Bechhoefer, Control Theory for Physicists (Cambridge University Press, 2021).
7. R. E. Kalman, "A new approach to linear filtering and prediction problems," J. Basic Eng. 82, 35–45 (1960).
8. S. Haykin, Neural Networks and Learning Machines (Pearson Education, Inc., 2009), p. 906.
9. C. Bishop, Pattern Recognition and Machine Learning, Information Science and Statistics (Springer, New York, 2016).
10. S. L. Brunton and J. N. Kutz, Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control (Cambridge University Press, 2019).
11. Z. Li, J. E. O'Doherty, T. L. Hanson, M. A. Lebedev, C. S. Henriquez, and M. A. L. Nicolelis, "Unscented Kalman filter for brain-machine interfaces," PLoS One 4, e6243 (2009).
12. J. L. Contreras-Vidal, M. Bortole, F. Zhu, K. Nathan, A. Venkatakrishnan, G. E. Francisco, R. Soto, and J. L. Pons, "Neural decoding of robot-assisted gait during rehabilitation after stroke," Am. J. Phys. Med. Rehabil. 97, 541–550 (2018).
13. S. R. Nason, M. J. Mender, A. K. Vaskov, M. S. Willsey, N. G. Kumar, T. A. Kung, P. G. Patil, and C. A. Chestek, "Real-time linear prediction of simultaneous and independent movements of two finger groups using an intracortical brain-machine interface," Neuron 109, 3164–3177 (2021).
14. M. J. Moye and C. O. Diekman, "Data assimilation methods for neuronal state and parameter estimation," J. Math. Neurosci. 8, 11 (2018).
15. O. J. Walch and M. C. Eisenberg, "Parameter identifiability and identifiable combinations in generalized Hodgkin-Huxley models," Neurocomputing 199, 137–143 (2016).
16. M. Lankarany, W. P. Zhu, and M. N. S. Swamy, "Joint estimation of states and parameters of Hodgkin-Huxley neuronal model using Kalman filtering," Neurocomputing 136, 289–299 (2014).
17. K. Campbell, L. Staugler, and A. Arnold, "Estimating time-varying applied current in the Hodgkin-Huxley model," Appl. Sci. 10, 2 (2020).
18. J. A. M. Valle and A. L. Madureira, "Parameter identification problem in the Hodgkin-Huxley model," Neural Comput. 34, 939–970 (2022).
19. S. J. Schiff, Neural Control Engineering: The Emerging Intersection Between Control Theory and Neuroscience, Computational Neuroscience (MIT Press, Cambridge, MA, 2012), p. 361.
20. E. M. Izhikevich, "Simple model of spiking neurons," IEEE Trans. Neural Netw. 14, 1569 (2003).
21. E. M. Izhikevich, Dynamical Systems in Neuroscience (MIT Press, 2007).
22. S. J. Julier and J. K. Uhlmann, "New extension of the Kalman filter to nonlinear systems," in Signal Processing, Sensor Fusion, and Target Recognition VI, International Society for Optics and Photonics Vol. 3068, edited by I. Kadar (SPIE, 1997), pp. 182–193.
23. E. Wan and R. Van Der Merwe, "The unscented Kalman filter for nonlinear estimation," in Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No. 00EX373) (IEEE, 2000), p. 153.
24. E. M. Izhikevich, "Which model to use for cortical spiking neurons?," IEEE Trans. Neural Netw. 15, 1063–1070 (2004).
25. J. Collens, K. Pusuluri, A. Kelly, D. Knapper, T. Xing, S. Basodi, D. Alacam, and A. L. Shilnikov, "Dynamics and bifurcations in multistable 3-cell neural networks," Chaos 30, 072101 (2020).
26. D. Somers and N. Kopell, "Rapid synchronization through fast threshold modulation," Biol. Cybern. 68, 393 (1993).
27. S. Nobukawa, H. Nishimura, T. Yamanishi, and J.-Q. Liu, "Analysis of chaotic resonance in Izhikevich neuron model," PLoS One 10, e0138919 (2015).
28. E. Forero-Ortiz, G. Tirabassi, C. Masoller, and A. J. Pons, "Inferring the connectivity of coupled chaotic oscillators using Kalman filtering," Sci. Rep. 11, 22376 (2021).
29.
30. T. Fawcett, "An introduction to ROC analysis," Pattern Recognit. Lett. 27, 861–874 (2006).
31. P. Holme and J. Saramäki, "Temporal networks," Phys. Rep. 519, 97 (2012).
32. A. Citri and R. Malenka, "Synaptic plasticity: Multiple forms, functions, and mechanisms," Neuropsychopharmacology 33, 18 (2008).
33. Y. Kuramoto, Chemical Oscillations, Waves and Turbulence (Springer, New York, 1984).
34. M. V. Ivanchenko, G. V. Osipov, V. D. Shalfeev, and J. Kurths, "Phase synchronization in ensembles of bursting oscillators," Phys. Rev. Lett. 93, 134101 (2004).
Published open access through an agreement with Universitat Politecnica de Catalunya Departament de Ciencia dels Materials i Enginyeria Metallurgica