Computational modeling of neuroactivity plays a central role in our effort to understand brain dynamics in the advancements of neural engineering such as deep brain stimulation, neuroprosthetics, and magnetic resonance electrical impedance tomography. However, analytic solutions do not capture the fundamental nonlinear behavior of an action potential. What is needed is a method that is not constrained to only linearized models of neural tissue. Therefore, the objective of this study is to establish a robust, straightforward process for modeling neurodynamic phenomena, which preserves their nonlinear features. To address this, we turn to decomposition methods from homotopy analysis, which have emerged in recent decades as powerful tools for solving nonlinear differential equations. We solve the nonlinear ordinary differential equations of three landmark models of neural conduction—the Ermentrout–Kopell, FitzHugh–Nagumo, and Hindmarsh–Rose models—using George Adomian’s decomposition method. For each variable, we construct a power series solution equivalent to a generalized Taylor series expanded about a function. The first term of the decomposition series comes from the models’ initial conditions. All subsequent terms are recursively determined from the first. We show rapid convergence, achieving a maximal error of $<10^{-12}$ with only eight terms. We extend the region of convergence with one-step analytic continuation so that our complete solutions are decomposition splines. We show that this process can yield solutions for single- and multi-variable models and can characterize a single action potential or complex bursting patterns. Finally, we show that the accuracy of this decomposition approach compares favorably to an established polynomial method, B-spline collocation.
The strength of this method, besides its stability and ease of computation, is that, unlike perturbation, we make no changes to the models’ equations; thus, our solutions are to the problems at hand, not simplified versions. This work validates decomposition as a viable technique for advanced neural engineering studies.

## I. INTRODUCTION

New advanced MRI techniques allow for *in vivo* quantitative inquiries of properties of tissue by perturbing it in synchrony with the imaging pulse sequence such that its response is encoded in the phase component of the complex magnetic resonance (MR) signal.^{1} From the phase map, we can then calculate the key parameters that can serve as biomarkers for tissue health. For example, in brain MR elastography, the tissue is vibrated, and the mechanical shear waves are encoded in the phase maps, from which we can determine the tissue’s complex shear modulus.^{2} In MR electrical impedance tomography (MREIT), the perturbation is an injected electric current whose density **J**(**r**) throughout the tissue is determined by the distributed electrical conductivity *σ*(**r**). The phase contrast encodes the magnetic field **B**(**r**) arising from **J**(**r**), whence we can determine *σ*(**r**).^{3} Sadleir *et al.* proposed^{4} and experimentally demonstrated^{5} using MREIT to detect neural activity directly, exploring the conductivity changes that result from active membrane dynamics. It is therefore appropriate to focus our analytic power on models of the active membrane, which are highly nonlinear.

Most biologic—indeed, most engineering—phenomena are characterized by nonlinear differential equations, which most analytic methods cannot solve directly; instead, the equations must first be simplified through, e.g., linearization or a dramatic restriction of the parameter space. That is, the solutions are to problems different from those originally modeled, which curtails their applicability. Sometimes such changes are perfectly sensible because the nonlinear behavior is not the salient detail under scrutiny, such as when Rall and Agmon-Snir used the cable equation to model neurons to consider impulse transmission along large distances^{6} or when, in the first study of this series, Schwartz *et al.* held the membrane as purely resistive to consider a snapshot of the electromagnetic field distribution throughout a bidomain.^{7} In this study, we will look at three increasingly complex models of membrane conduction. The first is the theta model by Ermentrout and Kopell.^{8,9} Using only one variable, this quadratic integrate-and-fire model describes bursting in *Aplysia* neurons. Next we consider the model by FitzHugh^{10} and Nagumo *et al.*,^{11} which shows the all-or-nothing phenomenon of an action potential once the transmembrane voltage has surpassed a certain threshold. The final model is by Hindmarsh and Rose^{12} and adds a third variable that allows for the description of oscillatory burst discharge. The nonlinear nature of these models is key to the frontier problems of neural engineering, e.g., MREIT; hence, our solution method must include it.

Over the last 30 or so years, a rich scholarship comprising the broad field of homotopy analysis has been developed^{13} with the goal of a unified method for solving all sorts of mathematic models of the natural world—linear, nonlinear, deterministic, and stochastic—and which remains largely unexplored in the neural engineering community. Broadly, these techniques are recursive, starting with the initial or boundary conditions and creating an analytic series solution. Specifically in this work, we will use the decomposition method developed by Adomian^{14} but with modifications added later, which we will note. For convenience, we will simply refer to it as the Adomian decomposition method (ADM). The ADM creates a power series solution, called a decomposition series, which is equivalent to a Taylor series, by decomposing the unknown function into an infinite series, defining the first term as the initial condition. The nonlinear components are decomposed into their own infinite series called Adomian polynomials, which also start with the initial condition. Each term in the decomposition series depends on the one before it; hence, starting from the initial conditions, all terms are recursively calculated. The reviews by Adomian^{15} and Rach^{16} are both excellent primers on this method.

The ADM has gained some traction in the realms of biomedical phenomena, having been applied to nonlinear models of cellular population growth,^{17} cellular systems and aging,^{18} and infectious diseases and immune response.^{19} Adomian^{20} and Adomian *et al.*^{21} discussed a limited time-and-space-dependent version of the FitzHugh–Nagumo model, which had only one state variable, as well as the Hodgkin–Huxley model,^{21} to describe impulse propagation down an axon. These studies are largely theoretic, serving more as proposals, without worked concrete examples, error analysis, or analytic continuation, all of which we address here. Furthermore, we will only consider models described by ordinary differential equations, neglecting spatial dependence.

## II. MATHEMATICAL PRELIMINARIES

### A. Operator notation

Consider a heterogeneous nonlinear ordinary differential equation $F\{u(t)\}=g(t)$ whose operator $F$ is the sum of linear ($L$ and $R$) and nonlinear ($N$) terms,

$$Lu + Ru + N\{u(t)\} = g(t), \tag{1}$$

where $L=\frac{d^n}{dt^n}$ is the highest order differential operator, which we assume to be easily invertible, $R$ is the remaining part(s) of the linear operation, $N$ is the nonlinear operator, and *g* is a given function. Hereafter, we will neglect the curly brackets around arguments with only one variable. We assume that $L^{-1}Lu=u-\Phi$, where $L^{-1}$ is the *n*-fold definite integral operator from 0 to *t*, whence we determine Φ from the initial value(s). Thus, if $L=\frac{d}{dt}$, then $L^{-1}=\int_0^t dt$ and Φ = *u*(0). Applying $L^{-1}$ through Eq. (1) and solving for *u* yield

$$u = \Phi + L^{-1}g - L^{-1}Ru - L^{-1}Nu. \tag{2}$$
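To make the inversion concrete, consider the second-order case (an illustrative instance, not one of the models below): with $L=\frac{d^2}{dt^2}$, the two-fold definite integral gives

$$L^{-1}Lu=\int_0^t\!\!\int_0^t u''\,dt\,dt = u(t) - u(0) - u'(0)\,t,$$

so $\Phi = u(0) + u'(0)\,t$ collects the initial values, exactly as $\Phi = u(0)$ does in the first-order case.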

### B. Decomposition series

The ADM assumes that we can decompose the solution into an infinite series,^{14}

$$u(t)=\sum_{n=0}^{\infty}u_n(t), \tag{3}$$

with the first term coming from the initial condition. The nonlinear terms $Nu$ are themselves decomposed into the Adomian polynomials,

$$Nu=\sum_{n=0}^{\infty}A_n, \tag{4}$$

which are defined as^{22}

$$A_n=\frac{1}{n!}\,\frac{d^n}{d\lambda^n}\,N\!\left(\sum_{k=0}^{\infty}\lambda^k u_k\right)\Bigg|_{\lambda=0}. \tag{5}$$

See Appendix A for an example of the Mathematica code for generating *A*_{n}. The components of *u* are determined from the recursion scheme,^{23}

$$u_0=\Phi,\qquad u_1=L^{-1}g-L^{-1}Ru_0-L^{-1}A_0,\qquad u_{n+1}=-L^{-1}Ru_n-L^{-1}A_n,\quad n\ge 1. \tag{6}$$

From Eq. (5), we see that *A*_{0} comes from the initial condition *u*_{0}, *A*_{1} from *u*_{0} and *u*_{1}, *A*_{2} from *u*_{0}, *u*_{1}, and *u*_{2}, and so on. Thus, all components in Eq. (3) are determined. The solution converges quickly,^{24} so in practice, the *m*-term partial sum will suffice, given here as

$$\varphi_m(t)=\sum_{n=0}^{m-1}u_n(t). \tag{7}$$
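To make the polynomials tangible, here is a small Python sketch of our own (the paper’s code is in Mathematica; see Appendix A) for the cubic nonlinearity $Nv=v^3$. For a power nonlinearity, the definition in Eq. (5) reduces to the index sum $A_n=\sum_{i+j+k=n}v_iv_jv_k$, which the sketch checks against the closed forms obtained by differentiating Eq. (5) by hand.

```python
# Adomian polynomials for the cubic nonlinearity N(v) = v^3.
# For a power nonlinearity, Eq. (5) reduces to an index sum:
#   A_n = sum over i + j + k = n of v_i * v_j * v_k.
# Illustrative sketch only, not the paper's Mathematica code.

def adomian_cubic(u, n):
    """A_n for N(u) = u^3 given scalar components u[0..n]."""
    return sum(u[i] * u[j] * u[n - i - j]
               for i in range(n + 1)
               for j in range(n + 1 - i))

u = [0.7, -0.3, 0.11, -0.05]     # arbitrary sample components

# closed forms from hand-differentiating Eq. (5)
refs = [u[0]**3,
        3*u[0]**2*u[1],
        3*u[0]**2*u[2] + 3*u[0]*u[1]**2,
        3*u[0]**2*u[3] + 6*u[0]*u[1]*u[2] + u[1]**3]
for n, ref in enumerate(refs):
    assert abs(adomian_cubic(u, n) - ref) < 1e-12
```

Note how each $A_n$ involves only $u_0,\dots,u_n$, which is exactly what makes the recursion in Eq. (6) self-starting.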

### C. One-step analytic continuation

The decomposition series $\varphi_m(t)$ is equivalent to a generalized Taylor series^{25} about the function $u_0(t)$,^{26} whose radius of convergence *r* is insufficient to characterize, e.g., rapidly varying neural dynamics.^{27} The solution^{28,29} to this shortcoming—built into the ADM itself—is a self-starting, simple to use, and accurate one-step method of analytic continuation of $\varphi_m(t)$ in terms of its family of elemental functions $\varphi_m^{(k)}(t)$ with overlapping *r*^{(k)}.

The domain of interest is partitioned into (variable) time intervals [0, *t*_{1}), [*t*_{1}, *t*_{2}), …, [*t*_{k−1}, *t*_{k}), each with its analytic continuant $\varphi_m^{(k)}(t)$ of the primitive $\varphi_m^{(0)}(t)$. From the condition of continuity $\varphi_m^{(k+1)}(t_{k+1})=\varphi_m^{(k)}(t_{k+1})$, the values at the end of each element are the initial conditions for the following element.^{27} The resultant spline of *N* + 1 elements is conveniently expressed as^{30}

$$\varphi_m(t)=\sum_{k=0}^{N}\varphi_m^{(k)}(t)\,\Pi\!\left(\frac{t-t_k}{h_k}-\frac{1}{2}\right), \tag{9}$$

where Π is the boxcar function.^{31} Appendix B shows our code for generating the spline elements. Neither the intervals’ durations *h*_{k} = *t*_{k+1} − *t*_{k} nor the orders *m* of their partial sums need be the same.^{28,29} Indeed, the spline can be optimized by (1) fixing each *m* while varying the intervals’ durations as $h_k=\lambda r^{(k)}$, where the dilation constant *λ* < 1 and the radii of convergence are estimated from the *m*th coefficient of each $\varphi_m^{(k)}(t)$ as $r_m^{(k)}=|a_m^{(k)}|^{-1/m}$,^{32} or by (2) fixing *h*_{k} = *h* while varying *m* to be the smallest required to bring the interval’s maximum truncation error $\epsilon^{(k)}=\max_{t_k\le t\le t_{k+1}}|\varphi_m^{(k)}(t)-\varphi_{m+1}^{(k)}(t)|$ below a set tolerance *ɛ*_{tol}, or both.^{33}
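The continuation itself is easy to demonstrate. The Python sketch below (a toy of our own choosing, not from the paper) marches decomposition elements of the test problem $u'=u^2$, $u(0)=0.5$, for which the terms restarted from a value $c$ work out to $u_n(\tau)=c^{n+1}\tau^n$; the exact solution $u_0/(1-u_0t)$ blows up at $t=1/u_0=2$, yet the spline tracks it accurately to $t=1.9$.

```python
# One-step analytic continuation (a decomposition spline) on the toy
# problem u' = u^2, u(0) = 0.5, exact solution u0 / (1 - u0 * t).
# For this nonlinearity the decomposition terms restarted at u = c are
# u_n(tau) = c**(n+1) * tau**n, so each element is a geometric partial sum.
# Illustrative only; not taken from the paper.

def element(c, tau, m):
    """m-term partial sum of the decomposition series started at u = c."""
    return sum(c**(n + 1) * tau**n for n in range(m))

def a_spline(u0, t_end, h, m):
    """March interval by interval; each element's end value seeds the next."""
    c = u0
    for _ in range(round(t_end / h)):   # fixed interval length h
        c = element(c, h, m)            # continuity at the interval end
    return c

u0, t_end = 0.5, 1.9                    # the solution is singular at t = 2
approx = a_spline(u0, t_end, h=0.05, m=12)
exact = u0 / (1 - u0 * t_end)           # = 10 at t = 1.9
assert abs(approx - exact) < 1e-2
```

The per-element truncation error scales like $(ch)^m$, which is why shrinking $h$ (strategy (1)) or raising $m$ (strategy (2)) both drive the spline toward the exact solution.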

## III. ACTIVE MEMBRANE MODELS

### A. Ermentrout–Kopell model

Let us begin with a single variable model from Ermentrout and Kopell^{8,9} that can describe parabolic bursting behavior in *Aplysia* neurons. The theta model, as it is also known, is canonic for type I membrane dynamics—characterized by a wide range of firing frequencies^{34}—because all other type I models can be reduced thereto^{35,36} and is given as

$$q\,\frac{d\theta}{dt} = 1-\cos\theta+(1+\cos\theta)\,\eta, \tag{10}$$

where *θ* is the voltage, the constant *η* represents the input current, and *q* is the membrane time constant, which we can set to unity without loss of generality. When *η* > 0, via quadrature, the analytic solution in closed form is found to be^{34}

$$\theta(t)=2\arctan\!\left[\sqrt{\eta}\,\tan\!\left(\sqrt{\eta}\,t\right)\right], \tag{11}$$

where we take $\theta(0)=0$.

To approach the *θ* model with the ADM, let us re-arrange Eq. (10), conveniently isolating the nonlinear component,

$$\frac{d\theta}{dt} = (1+\eta)+(\eta-1)\cos\theta. \tag{12}$$

We see that our operators are $L\theta=\frac{d\theta}{dt}$, $N\theta=\cos(\theta)$, $g=1+\eta$, and $R=0$, so Eq. (12) becomes

$$L\theta = g + (\eta-1)\,N\theta. \tag{13}$$

We write $\theta=\sum_{n=0}^{\infty}\theta_n$ and $N\theta=\sum_{n=0}^{\infty}A_n$, where the $A_n$ are the Adomian polynomials of $\cos(\theta)$, and, after applying $L^{-1}=\int_0^t dt$ throughout, Eq. (13) becomes

$$\theta = \theta(0) + (1+\eta)\,t + (\eta-1)\int_0^t\sum_{n=0}^{\infty}A_n\,dt. \tag{14}$$

From Eq. (6), each *θ*_{n} is found from our recursion scheme,

$$\theta_0=\theta(0),\qquad \theta_1=(1+\eta)\,t+(\eta-1)\int_0^t A_0\,dt,\qquad \theta_{n+1}=(\eta-1)\int_0^t A_n\,dt,\quad n\ge 1. \tag{15}$$

Thus, our decomposed *θ*(*t*) is fully determined, completing our analysis. Because this model has a known analytic solution in closed form, we can easily verify our solution by hand. The first few $A_n$ of $\cos(\theta)$ are

$$A_0=\cos\theta_0,\qquad A_1=-\theta_1\,\sin\theta_0,\qquad A_2=-\theta_2\,\sin\theta_0-\tfrac{1}{2}\theta_1^2\,\cos\theta_0, \tag{16}$$

so, with $\theta(0)=0$, the first few terms of our decomposition series are

$$\theta_0=0,\qquad \theta_1=2\eta\,t,\qquad \theta_2=0,\qquad \theta_3=\tfrac{2}{3}\eta^2(1-\eta)\,t^3; \tag{17}$$

thus, we can see emerging the Taylor series representation of the *a priori* known solution $2\arctan\!\left[\sqrt{\eta}\,\tan\!\left(\sqrt{\eta}\,t\right)\right]$, which is expressed as^{37}

where *B* are the Bernoulli numbers.^{38}
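As a quick numeric sanity check (ours, not the authors’; it assumes the standard theta-model form $d\theta/dt=1-\cos\theta+(1+\cos\theta)\eta$ with $q=1$ and $\theta(0)=0$), the closed-form solution can be differentiated numerically and compared against the right-hand side of the model:

```python
# Verify numerically that theta(t) = 2 arctan(sqrt(eta) tan(sqrt(eta) t))
# satisfies d(theta)/dt = 1 - cos(theta) + (1 + cos(theta)) * eta  (q = 1).
# A sanity-check sketch, not part of the ADM solution itself.
import math

eta = 0.5
theta = lambda t: 2 * math.atan(math.sqrt(eta) * math.tan(math.sqrt(eta) * t))

for t in [0.1, 0.5, 1.0, 1.5]:          # sqrt(eta)*t stays below pi/2 here
    d = 1e-6
    lhs = (theta(t + d) - theta(t - d)) / (2 * d)      # central difference
    rhs = 1 - math.cos(theta(t)) + (1 + math.cos(theta(t))) * eta
    assert abs(lhs - rhs) < 1e-6
```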

### B. FitzHugh–Nagumo model

First devised as an oscillator^{10} and then as an equivalent circuit,^{11} the FitzHugh–Nagumo model consists of two coupled nonlinear differential equations given as^{39}

$$\frac{dV}{dt} = V - \frac{V^3}{3} - W + \sigma, \tag{19}$$

$$\frac{dW}{dt} = \varphi\,(V + \alpha - \beta W), \tag{20}$$

where *V* is the transmembrane potential, *W* is the recovery variable, *σ* is the stimulating current, and *ϕ*, *β*, and *α* are positive constants. The operators of the *V* Eq. (19) are $LV=\frac{dV}{dt}$, $R\{V,W\}=V-W$, $NV=V^3/3$, and *g*(*t*) = *σ*, while the *W* Eq. (20) has $LW=\frac{dW}{dt}$, $R\{V,W\}=\varphi(V-\beta W)$, *g*(*t*) = *ϕα*, and no $N$ term. Let $V(t)=\sum_{n=0}^{\infty}V_n$, $W(t)=\sum_{n=0}^{\infty}W_n$, and $NV=\frac{1}{3}\sum_{n=0}^{\infty}A_n$, where the $A_n$ are the Adomian polynomials of $V^3$. When we apply $L^{-1}$ to Eqs. (19) and (20), we respectively get

$$V = V(0) + \sigma t + \int_0^t (V-W)\,dt - \frac{1}{3}\int_0^t\sum_{n=0}^{\infty}A_n\,dt, \tag{21}$$

$$W = W(0) + \varphi\alpha t + \varphi\int_0^t (V-\beta W)\,dt. \tag{22}$$

The recursion schemes are

$$V_0=V(0),\qquad V_1=\sigma t+\int_0^t\left(V_0-\tfrac{1}{3}A_0-W_0\right)dt,\qquad V_{n+1}=\int_0^t\left(V_n-\tfrac{1}{3}A_n-W_n\right)dt,\quad n\ge 1,$$

and

$$W_0=W(0),\qquad W_1=\varphi\alpha t+\varphi\int_0^t\left(V_0-\beta W_0\right)dt,\qquad W_{n+1}=\varphi\int_0^t\left(V_n-\beta W_n\right)dt,\quad n\ge 1.$$

Except for a few highly restricted cases, e.g., Demina and Kudryashov’s meromorphic solutions,^{40} we have no *a priori* solution, so we must verify our results with the error $e=F\{\varphi_m(t)\}-g(t)$, which comes from Eq. (1). If we write our *m*-term partial sums as $v_m(t)=\sum_{n=0}^{m-1}V_n(t)$ and $w_m(t)=\sum_{n=0}^{m-1}W_n(t)$, then the infinity norms of the error ‖*e*‖_{∞} from the *V* Eq. (19) and the *W* Eq. (20) (denoted by subscript) are, respectively, given as

$$\|e_v\|_\infty=\max_t\left|\frac{dv_m}{dt}-v_m+\frac{v_m^3}{3}+w_m-\sigma\right|,\qquad \|e_w\|_\infty=\max_t\left|\frac{dw_m}{dt}-\varphi\,(v_m-\beta w_m)-\varphi\alpha\right|.$$

Figure 1 shows ‖*e*_{v}‖_{∞} for the primitive element *k* = 0, whose *r*^{(0)} = 1.8, for *λ* = 0.05, 0.1, and 0.2, and, as expected, our solution converges more quickly as *λ* is decreased. Note that the error decreases linearly on this logarithmic scale, which means that the rates of convergence are exponential. The ‖*e*_{w}‖_{∞} (not shown) looks the same as ‖*e*_{v}‖_{∞} for all intervals, which is expected since each interval length *h*_{k} corresponds to the local *r*^{(k)}.
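The same machinery can be exercised numerically. The Python sketch below is our own illustration (the paper’s implementation is the Mathematica code of Appendix B): it builds one spline element for Eqs. (19) and (20) with series terms stored as polynomial coefficient lists, the Adomian polynomials of $V^3$ computed as index sums $A_n=\sum_{i+j+k=n}V_iV_jV_k$, the Table I constants, and the initial conditions as given in Appendix B; it then checks that both the truncation estimate and the residual of the *V* equation are small over an interval of length 0.2.

```python
# One A-spline element for the FitzHugh-Nagumo system, Eqs. (19)-(20),
# with series terms kept as polynomial coefficient lists in t.
# Illustrative re-implementation; the forcing enters the first recursion
# step, matching the scheme used in Appendix B.

def pmul(a, b):                      # polynomial product
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(*ps):                       # polynomial sum
    out = [0.0] * max(len(p) for p in ps)
    for p in ps:
        for i, x in enumerate(p):
            out[i] += x
    return out

def pscale(a, s): return [s * x for x in a]
def pint(a): return [0.0] + [x / (n + 1) for n, x in enumerate(a)]
def pderiv(a): return [n * x for n, x in enumerate(a)][1:] or [0.0]
def peval(a, t):
    v = 0.0
    for x in reversed(a):
        v = v * t + x
    return v

sigma, alpha, beta, phi = 0.35, 0.7, 0.8, 0.08   # Table I constants
V = [[-1.19941]]                     # V_0, W_0 from the initial conditions
W = [[-0.6243]]
m = 10
for n in range(m):
    # Adomian polynomials of V^3: A_n = sum_{i+j+k=n} V_i V_j V_k
    A = padd(*[pmul(pmul(V[i], V[j]), V[n - i - j])
               for i in range(n + 1) for j in range(n + 1 - i)])
    gv = [sigma] if n == 0 else [0.0]            # forcing in the first step
    gw = [phi * alpha] if n == 0 else [0.0]
    V.append(pint(padd(gv, V[n], pscale(A, -1.0 / 3.0), pscale(W[n], -1.0))))
    W.append(pint(padd(gw, pscale(V[n], phi), pscale(W[n], -phi * beta))))

vm, wm = padd(*V[:m]), padd(*W[:m])  # 10-term partial sums
h = 0.2                              # one interval of the spline
trunc = max(abs(peval(V[m], h * i / 20)) for i in range(21))

# residual of Eq. (19): dv/dt - v + v^3/3 + w - sigma
res = padd(pderiv(vm), pscale(vm, -1.0),
           pscale(pmul(pmul(vm, vm), vm), 1.0 / 3.0), wm, [-sigma])
maxres = max(abs(peval(res, h * i / 20)) for i in range(21))
assert trunc < 1e-4 and maxres < 1e-3
```

Both checks come in orders of magnitude under their tolerances, consistent with the exponential convergence seen in Fig. 1.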

Let us now use our solution to construct splines (shown in Fig. 2) depicting an action potential using *v*_{10}(*t*) and *w*_{10}(*t*) and varying the interval length. We take the inputs (listed in Table I) from FitzHugh’s work.^{39} The bottom panel shows the full dynamic behavior of the two state variables. Right above it, we zoom into a sample of the *v*_{10}(*t*) spline to show the varying interval lengths. Above that, we zoom in even further to focus on just one element $v_{10}^{(10)}(t)$ to show how quickly the curve converges as we add terms.

**TABLE I.** Inputs for the FitzHugh–Nagumo model, from FitzHugh.^{39}

| Parameter | Value |
| --- | --- |
| Stimulating current, *σ* | 0.35 |
| Interval dilate, *λ* | 0.25 |
| Solution order, *m* | 10 |
| *Tunable constants* | |
| *α* | 0.7 |
| *β* | 0.8 |
| *ϕ* | 0.08 |
| *Initial conditions* | |
| $V(t)\vert_{t=0}$ | −1.1994 |
| $W(t)\vert_{t=0}$ | −0.6243 |

### C. Hindmarsh–Rose model

Hindmarsh and Rose first modified the two-state-variable FitzHugh–Nagumo model of action potentials to allow for long interspike intervals, to resemble the behavior of real neurons more closely.^{41} Later, they added a third state variable, which allows for a qualitative description of bursting behavior,^{12} which we now consider. The model is given as

$$\frac{dX}{dt}=Y-aX^3+bX^2-Z+I,\qquad \frac{dY}{dt}=c-dX^2-Y,\qquad \frac{dZ}{dt}=r\left[\,s\,(X-X_R)-Z\,\right], \tag{24}$$

where *X* is the transmembrane potential, *Y* reflects the spiking Na^{+} and K^{+} currents, *Z* is an adaptive current that terminates them, thus creating an isolated burst of spikes, *I* is the stimulating current, and *a*, *b*, *c*, *d*, *r*, *s*, and *X*_{R} are constants. In all three equations of Eq. (24), we recognize $L=\frac{d}{dt}$, and thus, $L^{-1}=\int_0^t dt$. We decompose the variables $X(t)=\sum_{n=0}^{\infty}X_n$, $Y(t)=\sum_{n=0}^{\infty}Y_n$, and $Z(t)=\sum_{n=0}^{\infty}Z_n$ as well as the nonlinear terms in Eq. (27), $NX=\sum_{n=0}^{\infty}\left(A_n\{X^3\}+A_n\{X^2\}\right)$, and Eq. (28), $NX=\sum_{n=0}^{\infty}A_n\{X^2\}$, and we apply $L^{-1}$ throughout Eq. (24) to obtain

$$X = X(0) + It + \int_0^t\left(Y - a\sum_{n=0}^{\infty}A_n\{X^3\} + b\sum_{n=0}^{\infty}A_n\{X^2\} - Z\right)dt,$$

$$Y = Y(0) + ct - \int_0^t\left(d\sum_{n=0}^{\infty}A_n\{X^2\} + Y\right)dt,\qquad Z = Z(0) + r\int_0^t\left[\,s\,(X-X_R)-Z\,\right]dt.$$

The recursion schemes are

$$X_0=X(0),\qquad X_1=It+\int_0^t\left(Y_0-a\,A_0\{X^3\}+b\,A_0\{X^2\}-Z_0\right)dt,\qquad X_{n+1}=\int_0^t\left(Y_n-a\,A_n\{X^3\}+b\,A_n\{X^2\}-Z_n\right)dt,\quad n\ge 1,$$

$$Y_0=Y(0),\qquad Y_1=ct-\int_0^t\left(d\,A_0\{X^2\}+Y_0\right)dt,\qquad Y_{n+1}=-\int_0^t\left(d\,A_n\{X^2\}+Y_n\right)dt,\quad n\ge 1,$$

and

$$Z_0=Z(0),\qquad Z_1=r\int_0^t\left[\,s\,(X_0-X_R)-Z_0\,\right]dt,\qquad Z_{n+1}=r\int_0^t\left(s\,X_n-Z_n\right)dt,\quad n\ge 1.$$

Writing our *m*-term partial sums as $x_m(t)=\sum_{n=0}^{m-1}X_n(t)$, $y_m(t)=\sum_{n=0}^{m-1}Y_n(t)$, and $z_m(t)=\sum_{n=0}^{m-1}Z_n(t)$ means our ‖*e*‖_{∞} values are written as

$$\|e_x\|_\infty=\max_t\left|\frac{dx_m}{dt}-y_m+ax_m^3-bx_m^2+z_m-I\right|,\qquad \|e_y\|_\infty=\max_t\left|\frac{dy_m}{dt}-c+dx_m^2+y_m\right|,$$

$$\|e_z\|_\infty=\max_t\left|\frac{dz_m}{dt}-r\left[\,s\,(x_m-X_R)-z_m\,\right]\right|.$$

Figure 3 shows ‖*e*_{x}‖_{∞} at intervals *k* = 266 and 342, whose radii are *r*^{(k)} = 3.5 and 0.57, respectively. Once again, we can recognize exponential convergence rates from the (approximately) linear decrease on the log scale. Furthermore, we see a faster convergence in the interval with the smaller radius. The ‖*e*_{y}‖_{∞} and ‖*e*_{z}‖_{∞} are similar (not shown).
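As an independent numeric reference for this picture, the sketch below integrates the standard three-variable Hindmarsh–Rose equations (the form assumed throughout this section) with a classical fourth-order Runge–Kutta scheme and the Table II inputs. It is an illustrative cross-check of our own, not the decomposition solution; the assertions only confirm that the trajectory is bounded yet not at rest.

```python
# RK4 reference integration of the Hindmarsh-Rose system with the
# Table II inputs (a = 1, b = 3, c = 1, d = 5, r = 0.0021, s = 4,
# X_R = -8/5, I = 1.5).  Cross-check sketch, not the ADM solution.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, XR, I = 0.0021, 4.0, -8/5, 1.5

def f(u):
    X, Y, Z = u
    return (Y - a*X**3 + b*X**2 - Z + I,   # fast voltage dynamics
            c - d*X**2 - Y,                # spiking currents
            r*(s*(X - XR) - Z))            # slow adaptation

def rk4(u, h, steps):
    traj = [u]
    for _ in range(steps):
        k1 = f(u)
        k2 = f(tuple(ui + 0.5*h*ki for ui, ki in zip(u, k1)))
        k3 = f(tuple(ui + 0.5*h*ki for ui, ki in zip(u, k2)))
        k4 = f(tuple(ui + h*ki for ui, ki in zip(u, k3)))
        u = tuple(ui + h/6*(a1 + 2*a2 + 2*a3 + a4)
                  for ui, a1, a2, a3, a4 in zip(u, k1, k2, k3, k4))
        traj.append(u)
    return traj

traj = rk4((-1.20049, -6.27014, 1.27797), h=0.01, steps=40000)  # t in [0, 400]
xs = [u[0] for u in traj]
assert all(abs(x) < 4 for x in xs)   # trajectory stays bounded
assert max(xs) - min(xs) > 0.1       # ...but is not at rest
```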

Using inputs from the work by Hindmarsh and Rose^{12} (listed in Table II), we now construct splines (plotted in Fig. 4) that show a burst of spikes. This time, however, we hold the interval length fixed at *h* = 0.1 and allow the order of each element to vary, maintaining a truncation error at or below a set tolerance of *ɛ*_{tol}. In the top panel of Fig. 4, we zoom in to a section of the *y*_{m}(*t*) spline to see individual elements labeled with their respective orders.

**TABLE II.** Inputs for the Hindmarsh–Rose model, from Hindmarsh and Rose.^{12}

| Parameter | Value |
| --- | --- |
| Stimulating current, *I* | 1.5 |
| Interval length, *h* | 0.1 |
| Tolerance, *ɛ*_{tol} | 0.001 |
| *Tunable constants* | |
| *a* | 1 |
| *b* | 3 |
| *c* | 1 |
| *d* | 5 |
| *r* | 0.0021 |
| *s* | 4 |
| *X*_{R} | $-\frac{8}{5}$ |
| *Initial conditions* | |
| $X(t)\vert_{t=0}$ | −1.20049 |
| $Y(t)\vert_{t=0}$ | −6.27014 |
| $Z(t)\vert_{t=0}$ | 1.27797 |

## IV. COMPARISON WITH CARDINAL BASIS SPLINES

### A. Meromorphic FitzHugh–Nagumo model

Having shown how to construct spline solutions with the ADM (A-splines), we now validate this method through comparison to the method of collocation and cardinal basis splines (B-splines). For convenience, we will solve a version of the FitzHugh–Nagumo model (inputs in Table III),

for which we have the analytic solutions in closed form,^{40}

**TABLE III.** Inputs for the meromorphic FitzHugh–Nagumo example.

| Parameter | Value |
| --- | --- |
| Stimulating current, *σ* | 0.35 |
| Solution order, *m* | 3 |
| Interval, *T* | 1 |
| Knot spacing, *δ* | 3*h* |
| *Tunable constants* | |
| *α* | *βσ* |
| *β* | 5 |
| *ϕ* | $\frac{3}{25}$ |
| *Initial conditions* | |
| $V(t)\vert_{t=0}$ | $\frac{3}{10}$ |
| $W(t)\vert_{t=0}$ | $\frac{7}{20}+\frac{15}{25}$ |

### B. B-spline collocation

We adopt the approach given by Pitolli;^{42} however, for a thorough treatment of B-spline theory and applications, please see the works by Schumaker^{43} and Prenter.^{44} Briefly, we partition the time interval [0, *T*] with uniformly spaced knots, △_{δ} ≔ {*lδ*, 0 ≤ *l* ≤ *N*}, where $N=T/\delta$ and *δ* is the knot spacing. We use spline curves for our approximating functions,

$$v_\delta(t)=\sum_{l}\nu_l\,B_l^m(t),\qquad w_\delta(t)=\sum_{l}\omega_l\,B_l^m(t), \tag{42}$$

where *ν*_{l} and *ω*_{l} are unknown coefficients and $B_l^m(t)$ are the B-spline functions, piecewise polynomials of order *m* interleaved at the knots, recursively defined as^{45}

$$B_l^0(t)=\begin{cases}1, & t_l\le t<t_{l+1},\\ 0, & \text{otherwise},\end{cases}\qquad B_l^m(t)=\frac{t-t_l}{t_{l+m}-t_l}\,B_l^{m-1}(t)+\frac{t_{l+m+1}-t}{t_{l+m+1}-t_{l+1}}\,B_{l+1}^{m-1}(t).$$

Our knot vector is equally spaced with *t* = 0 having multiplicity *m* + 1.^{46} The solution then involves solving for *ν*_{l} and *ω*_{l} through the method of collocation. We do this by substituting Eq. (42) into Eq. (39) to get

(summation over *l* ∈ [0, *N*] is understood) and evaluating our B-splines at the collocation points △_{τ} ≔ {*t*_{p} = *pτ*, 0 ≤ *p* ≤ *P*}, where $P=T/\tau\ge N$. We solve this nonlinear system of 2(*P* + 1) equations in 2(*m* + *N*) unknowns by the Gauss–Newton method in Mathematica 13.1 (Wolfram Research, Inc., Champaign, IL). A detailed description of the method can be found in Ref. 42.
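As a sketch of the basis just described (our own minimal Python, assuming the clamped uniform knot vector with *t* = 0 at multiplicity *m* + 1), the Cox–de Boor recursion below evaluates $B_l^m(t)$ and verifies the standard partition-of-unity property; note that the number of basis functions works out to *m* + *N*, matching the unknowns per variable.

```python
# Cox-de Boor evaluation of the uniform clamped B-spline basis.
# Zero-length knot spans are skipped (the usual 0/0 := 0 convention).
# Illustrative sketch, not Pitolli's implementation.

def bspline(l, m, t, knots):
    """B_l^m(t) by the Cox-de Boor recursion."""
    if m == 0:
        return 1.0 if knots[l] <= t < knots[l + 1] else 0.0
    out = 0.0
    left = knots[l + m] - knots[l]
    if left > 0:
        out += (t - knots[l]) / left * bspline(l, m - 1, t, knots)
    right = knots[l + m + 1] - knots[l + 1]
    if right > 0:
        out += (knots[l + m + 1] - t) / right * bspline(l + 1, m - 1, t, knots)
    return out

T, m, N = 1.0, 3, 6
delta = T / N
# clamped uniform knot vector: endpoints repeated to multiplicity m + 1
knots = [0.0] * m + [l * delta for l in range(N + 1)] + [T] * m
nbasis = len(knots) - m - 1          # = m + N basis functions
assert nbasis == m + N

for t in [0.0, 0.21, 0.5, 0.83]:     # partition of unity on [0, T)
    total = sum(bspline(l, m, t, knots) for l in range(nbasis))
    assert abs(total - 1.0) < 1e-12
```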

### C. Example results

For our numeric example, we look at a small interval [0, 1] and use cubic splines, i.e., *m* = 3. All inputs are summarized in Table III. In this example, we want to show that the ADM is at least as accurate as the B-spline collocation method by comparing the infinity norms of the errors, which we define as *e*_{B} = *v* − *v*_{δ} and *e*_{A} = *v* − *v*_{m}, at various discretizations $h=\delta/3$. We list in Table IV the errors for an increasingly fine mesh. As expected, the errors decrease as the time steps shorten. At every level, the A-spline is more accurate than the B-spline approximation, by nearly an order of magnitude at the coarser discretizations.

**TABLE IV.** Infinity norms of the B-spline (*e*_{B}) and A-spline (*e*_{A}) errors at various discretizations *h*.

| *h* | ‖*e*_{B}‖_{∞} | ‖*e*_{A}‖_{∞} |
| --- | --- | --- |
| $\frac{1}{6}$ | 3.7751 × 10^{−6} | 5.0039 × 10^{−7} |
| $\frac{1}{12}$ | 3.3928 × 10^{−7} | 6.8181 × 10^{−8} |
| $\frac{1}{24}$ | 2.3888 × 10^{−8} | 8.8124 × 10^{−9} |
| $\frac{1}{48}$ | 1.5699 × 10^{−9} | 1.1175 × 10^{−9} |

Finally, as a further check of the ADM, following the same steps as in Sec. III B but this time with the meromorphic formulation and inputs, the first few terms of our decomposition series are

revealing the Taylor series expansions for our known solutions.

## V. CONCLUSION

The solutions we presented here represent a new way for neural engineers to approach key biophysical models. They are just a tiny fraction of the analytic possibilities with the ADM. For instance, besides analytic continuation, the solutions’ validity can also be expanded through, e.g., diagonal Padé approximants^{47} or iterated Shanks transforms.^{48} For the purposes of MREIT, generalizations worth considering include partial differential equations^{49} for impulse propagation, state variables describing multi-physics phenomena like the neural bilayer sonophore,^{50} and arbitrary derivative order^{51} to model non-ohmic conductivities in tissue.^{52}

## ACKNOWLEDGMENTS

This work was supported in part by a grant from the NIH (Grant No. R01NS077004) to R.J.S.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**B. L. Schwartz**: Conceptualization (lead); Data curation (lead); Formal analysis (lead); Investigation (equal); Software (equal); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (lead). **S. M. Brown**: Conceptualization (supporting); Formal analysis (supporting); Software (supporting); Writing – original draft (supporting). **J. Muthuswamy**: Conceptualization (supporting); Investigation (supporting); Methodology (supporting); Project administration (supporting); Validation (supporting). **R. J. Sadleir**: Conceptualization (supporting); Formal analysis (supporting); Resources (lead); Software (equal); Supervision (lead); Validation (supporting); Writing – review & editing (equal).

## DATA AVAILABILITY

The data that support the findings of this study are available within the article.

### APPENDIX A: MATHEMATICA CODE FOR ADOMIAN POLYNOMIALS

Here now is some of our Mathematica code written in the Wolfram language, which allows for 2D input, Greek letters, and symbolic operations. First is the code for calculating the Adomian polynomials defined by Eq. (5) for $Nv=v^3$. The symbol `S` plays the role of the grouping parameter *λ* in Eq. (5); the components `v[k][t]` stand for the decomposition terms of the current interval.

```mathematica
AP[m_] := Module[{f, F, i},  (* define the nonlinear operation *)
  f[v_] := v^3;
  F[0] = f[Sum[v[k][t] S^k, {k, 0, m}]];  (* N applied to the grouped series *)
  For[i = 0, i <= m, i++,
   A[i] = Expand[F[i] /. S -> 0];         (* A_i = (1/i!) d^i F/dS^i at S = 0 *)
   F[i + 1] = (1/(i + 1)) D[F[i], S]]]

AP[n]  (* calculate the polynomials A[0] through A[n] *)
```

### APPENDIX B: MATHEMATICA CODE FOR A-SPLINE

Finally, we present our code for the elements of a basic A-spline solution for the FitzHugh–Nagumo model given in Eqs. (19) and (20). For simplicity, the *K* intervals each have a constant duration *T* and polynomial order *M*. Shown schematically, `A[n]` stands for the *n*th Adomian polynomial of $v^3$, regenerated for each interval from that interval’s components with the code of Appendix A and expressed in the dummy variable `s`; the dummy variable keeps the definite integrals well formed.

```mathematica
v[0, 0][t_] = -1.19941;  (* first terms from the initial conditions *)
w[0, 0][t_] = -0.6243;
For[k = 1, k <= K, k++,  (* intervals loop *)
 v[k - 1, 1][t_] = σ t + Integrate[v[k - 1, 0][s] - (1/3) A[0] - w[k - 1, 0][s], {s, 0, t}];
 w[k - 1, 1][t_] = ϕ α t + ϕ Integrate[v[k - 1, 0][s] - β w[k - 1, 0][s], {s, 0, t}];
 For[m = 2, m <= M, m++,  (* series loop *)
  v[k - 1, m][t_] = Integrate[v[k - 1, m - 1][s] - (1/3) A[m - 1] - w[k - 1, m - 1][s], {s, 0, t}];
  w[k - 1, m][t_] = ϕ Integrate[v[k - 1, m - 1][s] - β w[k - 1, m - 1][s], {s, 0, t}]];
 v[k, 0][t_] = Sum[v[k - 1, m][T], {m, 0, M}];  (* end of each interval seeds the next element *)
 w[k, 0][t_] = Sum[w[k - 1, m][T], {m, 0, M}]];
```