Computational modeling of neuroactivity plays a central role in our effort to understand brain dynamics in the advancements of neural engineering such as deep brain stimulation, neuroprosthetics, and magnetic resonance electrical impedance tomography. However, analytic solutions do not capture the fundamental nonlinear behavior of an action potential. What is needed is a method that is not constrained to only linearized models of neural tissue. Therefore, the objective of this study is to establish a robust, straightforward process for modeling neurodynamic phenomena, which preserves their nonlinear features. To address this, we turn to decomposition methods from homotopy analysis, which have emerged in recent decades as powerful tools for solving nonlinear differential equations. We solve the nonlinear ordinary differential equations of three landmark models of neural conduction—the Ermentrout–Kopell, FitzHugh–Nagumo, and Hindmarsh–Rose models—using George Adomian’s decomposition method. For each variable, we construct a power series solution equivalent to a generalized Taylor series expanded about a function. The first term of the decomposition series comes from the models’ initial conditions. All subsequent terms are recursively determined from the first. We show rapid convergence, achieving a maximal error of less than $10^{-12}$ with only eight terms. We extend the region of convergence with one-step analytic continuation so that our complete solutions are decomposition splines. We show that this process can yield solutions for single- and multi-variable models and can characterize a single action potential or complex bursting patterns. Finally, we show that the accuracy of this decomposition approach compares favorably to an established polynomial method, B-spline collocation.
The strength of this method, besides its stability and ease of computation, is that, unlike perturbation, we make no changes to the models’ equations; thus, our solutions are to the problems at hand, not simplified versions. This work validates decomposition as a viable technique for advanced neural engineering studies.

New advanced MRI techniques allow for in vivo quantitative inquiries of properties of tissue by perturbing it in synchrony with the imaging pulse sequence such that its response is encoded in the phase component of the complex magnetic resonance (MR) signal.1 From the phase map, we can then calculate the key parameters that can serve as biomarkers for tissue health. For example, in brain MR elastography, the tissue is vibrated, and the mechanical shear waves are encoded in the phase maps from which we can determine the tissue’s complex shear modulus.2 In MR electrical impedance tomography (MREIT), the perturbation is injected electric current whose density J(r) throughout the tissue is determined by the distributed electrical conductivity σ(r). The phase contrast encodes the magnetic field B(r) arising from J(r), whence we can determine σ(r).3 Sadleir et al. proposed4 and experimentally demonstrated5 using MREIT to detect neural activity directly, exploring the conductivity changes that result from active membrane dynamics. It is therefore appropriate to focus our analytic power on models of the active membrane, which are highly nonlinear.

Most biologic—indeed, most engineering—phenomena are characterized by nonlinear differential equations, which most analytic methods cannot solve directly. Instead, the equations must first be simplified through, e.g., linearization or dramatic restriction of the parameter space; that is, the solutions are to problems different from the original models, which curtails their applicability. Sometimes such changes are perfectly sensible because the nonlinear behavior is not the salient detail under scrutiny, such as Rall and Agmon-Snir using the cable equation to model neurons to consider impulse transmission along large distances6 or how in the first study of this series Schwartz et al. held the membrane as purely resistive to consider a snapshot of electromagnetic field distribution throughout a bidomain.7 In this study, we will look at three increasingly complex models of membrane conduction. The first is the theta model by Ermentrout and Kopell.8,9 Using only one variable, this quadratic integrate and fire model describes bursting in Aplysia neurons. Next we consider the model by Fitzhugh10 and Nagumo et al.,11 which shows the all-or-nothing phenomenon of an action potential once the transmembrane voltage has surpassed a certain threshold. The final model is by Hindmarsh and Rose12 and adds a third variable that allows for the description of oscillatory burst discharge. The nonlinear nature of these models is key to the frontier problems of neural engineering, e.g., MREIT; hence, our solution method must include it.

Over the last 30 or so years, a rich scholarship comprising the broad field of homotopy analysis has been developed13 with the goal of a unified method for solving all sorts of mathematic models of the natural world—linear, nonlinear, deterministic, and stochastic—and which remains largely unexplored in the neural engineering community. Broadly, these techniques are recursive, starting with the initial or boundary conditions and creating an analytic series solution. Specifically in this work, we will use the decomposition method developed by Adomian14 but with modifications added later, which we will note. For convenience, we will simply refer to it as the Adomian decomposition method (ADM). The ADM creates a power series solution, called a decomposition series, which is equivalent to a Taylor series, by decomposing the unknown function into an infinite series, defining the first term as the initial condition. The nonlinear components are decomposed into their own infinite series called Adomian polynomials, which also start with the initial condition. Each term in the decomposition series depends on the one before it; hence, starting from the initial conditions, all terms are recursively calculated. The reviews by Adomian15 and Rach16 are both excellent primers on this method.

The ADM has gained some traction in the realm of biomedical phenomena, having been applied to nonlinear models of cellular population growth,17 cellular systems and aging,18 and infectious diseases and immune response.19 Adomian20 and Adomian et al.21 discussed a limited time-and-space-dependent version of the FitzHugh–Nagumo model, which had only one state variable, as well as the Hodgkin–Huxley model,21 to describe impulse propagation down an axon. These studies are largely theoretic, serving more as proposals, without worked concrete examples, error analysis, or analytic continuation, all of which we address here. Furthermore, we will only consider models described by ordinary differential equations, neglecting spatial dependence.

We will start with a brief description of the ADM in Sec. II, continue with detailed analyses of our three models in Sec. III, compare the Adomian spline solutions to cardinal basis spline solutions in Sec. IV, and end with remarks on future directions.

Consider a heterogeneous nonlinear ordinary differential equation $F\{u(t)\} = g(t)$ whose operator $F$ is the sum of linear $L$ and $R$ and nonlinear $N$ terms,

$$F\{u\} = L\{u\} + R\{u\} + N\{u\} = g,$$
(1)

where $L = d^n/dt^n$ is the highest order differential operator, which we assume to be easily invertible, $R$ is the remaining part(s) of the linear operation, $N$ is the nonlinear operator, and $g$ is a given function. Hereafter, we will neglect the curly brackets around arguments with only one variable. We assume that $L^{-1}Lu = u - \Phi$, where $L^{-1}$ is the $n$-fold definite integral operator from 0 to $t$, whence we determine $\Phi$ by the initial value(s). Thus, if $L = d/dt$, then $L^{-1} = \int_0^t dt'$ and $\Phi = u(0)$. Applying $L^{-1}$ through Eq. (1) and solving for $u$ yield

$$u = \Phi + L^{-1}g - L^{-1}Ru - L^{-1}Nu.$$
(2)

The ADM assumes that we can decompose the solution into an infinite series,14

$$u = \sum_{n=0}^{\infty} u_n,$$
(3)

with the first term coming from the initial condition. The nonlinear terms $Nu$ are themselves decomposed into the Adomian polynomials,

$$Nu = \sum_{n=0}^{\infty} A_n,$$
(4)

which are defined as22

$$A_n = \frac{1}{n!}\frac{d^n}{d\zeta^n}N\left\{\sum_{n=0}^{\infty} u_n\zeta^n\right\}\Bigg|_{\zeta=0}.$$
(5)

See Appendix A for an example of the Mathematica code for generating $A_n$. The components of $u$ are determined from the recursion scheme,23

$$u_0 = \Phi,\quad u_1 = L^{-1}g - L^{-1}Ru_0 - L^{-1}A_0,\quad u_{n+2} = -L^{-1}Ru_{n+1} - L^{-1}A_{n+1}.$$
(6)

From Eq. (5), we see that $A_0$ comes from the initial condition $u_0$, $A_1$ from $u_0$ and $u_1$, $A_2$ from $u_0$, $u_1$, and $u_2$, and so on. Thus, all components in Eq. (3) are determined. The solution converges quickly,24 so in practice, the $m$-term partial sum will suffice, given here as

$$\varphi_m(t) = \sum_{n=0}^{m-1} u_n(t); \qquad \lim_{m\to\infty}\varphi_m(t) = u(t).$$
(7)
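To make the scheme concrete before applying it to the neural models, here is a brief computer algebra sketch of ours (Python/SymPy rather than the Mathematica of Appendix A) for the test problem $du/dt = u^2$, $u(0) = 1$: writing the equation as $du/dt = f(u)$ and decomposing $f$ into Adomian polynomials, the recursion reduces to $u_{n+1} = L^{-1}A_n$, and the partial sums reproduce the Taylor series of the exact solution $1/(1-t)$.

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

def adomian_polynomials(f, u_terms):
    """A_n = (1/n!) d^n/d lam^n f(sum_k u_k lam^k) at lam = 0, per Eq. (5)."""
    expansion = f(sum(u * lam**k for k, u in enumerate(u_terms)))
    return [sp.expand(sp.diff(expansion, lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(len(u_terms))]

def adm_solve(f, u0, m):
    """m-term decomposition series for du/dt = f(u), u(0) = u0."""
    terms = [sp.Integer(u0)]
    for n in range(m - 1):
        A_n = adomian_polynomials(f, terms)[n]        # A_n uses only u_0, ..., u_n
        terms.append(sp.integrate(A_n, (t, 0, t)))    # u_{n+1} = L^{-1} A_n
    return sp.expand(sum(terms))

phi = adm_solve(lambda u: u**2, 1, 5)
print(phi)   # 1 + t + t**2 + t**3 + t**4: the Taylor series of 1/(1 - t)
```

Each $A_n$ depends only on $u_0, \ldots, u_n$, so every term is an explicit integral of known quantities; no algebraic equations ever need to be solved.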

The decomposition series $\varphi_m(t)$ is equivalent to a generalized Taylor series25 about the function $u_0(t)$,26 whose radius of convergence $r$ is insufficient to characterize, e.g., rapidly varying neural dynamics.27 The solution28,29 to this shortcoming—built into the ADM itself—is a self-starting, simple to use, and accurate one-step method of analytic continuation of $\varphi_m(t)$ in terms of its family of elemental functions $\varphi_m^{(k)}(t)$ with overlapping $r^{(k)}$,

$$\varphi_m(t) = \varphi_m^{(0)}(t),\; t_0 - r^{(0)} < t < t_0 + r^{(0)};\quad \varphi_m(t) = \varphi_m^{(1)}(t),\; t_1 - r^{(1)} < t < t_1 + r^{(1)};\quad \ldots;\quad \varphi_m(t) = \varphi_m^{(k)}(t),\; t_k - r^{(k)} < t < t_k + r^{(k)}.$$
(8)

The domain of interest is partitioned into (variable) time intervals $[0, t_1), [t_1, t_2), \ldots, [t_{k-1}, t_k)$, each with its analytic continuant $\varphi_m^{(k)}(t)$ of the primitive $\varphi_m^{(0)}(t)$. From the condition of continuity $\varphi_m^{(k+1)}(t_{k+1}) = \varphi_m^{(k)}(t_{k+1})$, the values at the end of each element are the initial conditions for the following element.27 The resultant spline of $N+1$ elements is conveniently expressed as30

$$\varphi_m(t) = \sum_{k=0}^{N} \varphi_m^{(k)}(t)\,\Pi(t; t_k, t_{k+1}),$$
(9)

where $\Pi$ is the boxcar function.31 Appendix B shows our code for generating the spline elements. Neither the intervals’ durations $h_k = t_{k+1} - t_k$ nor the orders $m$ of their partial sums need to be the same.28,29 Indeed, the spline can be optimized by (1) fixing each $m$ while varying the intervals’ durations as $h_k = \lambda r^{(k)}$, where the dilation constant $\lambda < 1$ and the radius of convergence is estimated from the $m$th coefficient of each $\varphi_m^{(k)}(t)$ as $r_m^{(k)} = |a_m^{(k)}|^{-1/m}$,32 or by (2) fixing $h_k = h$ while varying $m$ to be the smallest required to bring the interval’s maximum truncation error $\varepsilon^{(k)} = \max_{t_k \le t \le t_{k+1}}|\varphi_m^{(k)}(t) - \varphi_{m+1}^{(k)}(t)|$ below a set tolerance $\varepsilon_{\mathrm{tol}}$,33 or both.
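The continuation itself can be sketched numerically on the scalar test problem $du/dt = u^2$, $u(0) = 1$ (our own illustration, not code from this study). For this equation, the decomposition terms started from a local value $c$ are $c^{n+1}s^n$, so each element is a truncated geometric series whose local radius of convergence is $r = 1/c$; restarting at each interval end lets the spline track the exact solution $1/(1-t)$ toward its singularity, where any single primitive series fails.

```python
# One-step analytic continuation, Eq. (8): each element restarts the
# decomposition from the previous element's end value (continuity condition).
def element(c, s, m=10):
    """m-term ADM partial sum for u' = u^2, u(0) = c, at local time s."""
    return sum(c**(n + 1) * s**n for n in range(m))   # truncated c/(1 - c*s)

def adm_spline(T, h=0.05, m=10, u0=1.0):
    u, t = u0, 0.0
    while t < T - 1e-12:
        u = element(u, h, m)    # end value seeds the next element
        t += h
    return u

exact = 1.0 / (1.0 - 0.9)                       # u(0.9) = 10
primitive = sum(0.9**n for n in range(10))      # single series truncated at m = 10
print(abs(adm_spline(0.9) - exact) / exact)     # small
print(abs(primitive - exact) / exact)           # roughly 0.35: the primitive series alone fails
```

The element duration and order here are arbitrary illustrative choices; in the optimized schemes above they would be set from $r^{(k)}$ or $\varepsilon_{\mathrm{tol}}$.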

Let us begin with a single variable model from Ermentrout and Kopell8,9 that can describe parabolic bursting behavior in Aplysia neurons. The theta model, as it is also known, is canonic for type I membrane dynamics—characterized by a wide range of firing frequencies34—because all other type I models can be reduced thereto35,36 and is given as

$$\frac{d\theta}{dt} = q\left(1 - \cos\theta\right) + \left(1 + \cos\theta\right)\eta,\qquad \theta \in [0, 2\pi],\qquad \theta(0) = \theta(2\pi) = 0,$$
(10)

where θ is the voltage, the constant η represents the input current, and q is the membrane time constant, which we can set to unity without loss of generality. When η > 0, via quadrature, the analytic solution in closed form is found to be34

$$\theta(t) = 2\arctan\!\left(\sqrt{\eta}\,\tan\!\left(\sqrt{\eta}\,t\right)\right).$$
(11)

To approach the θ model with the ADM, let us re-arrange Eq. (10), conveniently isolating the nonlinear component,

$$\frac{d\theta}{dt} = 1 + \eta + (\eta - 1)\cos\theta.$$
(12)

We see that our operators are $L\theta = d\theta/dt$, $N\theta = \cos\theta$, $g = 1 + \eta$, and $R = 0$, so Eq. (12) becomes

$$L\theta = g + (\eta - 1)N\theta.$$
(13)

We write $\theta = \sum_{n=0}^{\infty}\theta_n$ and $N\theta = \sum_{n=0}^{\infty}A_n\{\cos\theta\}$, and, after applying $L^{-1} = \int_0^t dt'$ throughout, Eq. (13) becomes

$$\theta(t) = \sum_{n=0}^{\infty}\theta_n = \theta(0) + (1 + \eta)t + (\eta - 1)\sum_{n=0}^{\infty}L^{-1}A_n\{\cos\theta\}.$$
(14)

From Eq. (6), each $\theta_n$ is found from our recursion scheme,

$$\theta_0 = \theta(0),\quad \theta_1 = (1 + \eta)t + (\eta - 1)L^{-1}A_0\{\cos\theta\},\quad \theta_{n+2} = (\eta - 1)L^{-1}A_{n+1}\{\cos\theta\}.$$
(15)

Thus, our decomposed θ(t) is fully determined, completing our analysis. Because this model has a known analytic solution in closed form, we can easily verify our solution by hand. The first few $A_n\{\cos\theta\}$ values are

$$A_0 = \cos\theta_0,\quad A_1 = -\sin(\theta_0)\,\theta_1,\quad A_2 = -\cos(\theta_0)\,\frac{\theta_1^2}{2} - \sin(\theta_0)\,\theta_2,$$
(16)

so the first few terms of our decomposition series are

$$\theta_1 = 2\eta t,\quad \theta_2 = 0,\quad \theta_3 = \frac{2}{3}\left(\eta^2 - \eta^3\right)t^3;$$
(17)

thus, we can see emerging the Taylor series representation of the a priori known solution $2\arctan(\sqrt{\eta}\tan(\sqrt{\eta}\,t))$, which is expressed as37

$$\theta(t) = 2\sum_{j=0}^{\infty}\frac{(-1)^j}{2j+1}\left[\sqrt{\eta}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}2^{2k}\left(2^{2k}-1\right)B_{2k}}{(2k)!}\left(\sqrt{\eta}\,t\right)^{2k-1}\right]^{2j+1},$$
(18)

where the $B_{2k}$ are the Bernoulli numbers.38
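This hand verification is easy to automate. The following SymPy sketch of ours (not part of the study’s code) runs the recursion of Eq. (15) and confirms that the partial sum matches the Taylor expansion of the closed-form solution, Eq. (11), through third order:

```python
import sympy as sp

t, lam = sp.symbols("t lambda")
eta = sp.symbols("eta", positive=True)

def A(n, terms):
    """Adomian polynomial A_n{cos(theta)} from the definition in Eq. (5)."""
    expansion = sp.cos(sum(u * lam**k for k, u in enumerate(terms)))
    return sp.diff(expansion, lam, n).subs(lam, 0) / sp.factorial(n)

theta = [sp.Integer(0)]                               # theta_0 = theta(0) = 0
theta.append((1 + eta)*t + (eta - 1)*sp.integrate(A(0, theta), (t, 0, t)))
for n in range(1, 4):                                 # theta_{n+1} = (eta - 1) L^{-1} A_n
    theta.append((eta - 1)*sp.integrate(A(n, theta), (t, 0, t)))

adm = sp.expand(sum(theta))                           # 2*eta*t + (2/3)(eta^2 - eta^3) t^3
exact = sp.series(2*sp.atan(sp.sqrt(eta)*sp.tan(sp.sqrt(eta)*t)), t, 0, 4).removeO()
print(sp.simplify(adm - exact))   # 0
```

With θ(0) = 0, every even-indexed term vanishes, consistent with the odd parity of Eq. (11).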

First devised as an oscillator10 and then as an equivalent circuit,11 the FitzHugh–Nagumo model consists of two coupled nonlinear differential equations given as39

$$\frac{dV}{dt} = V - \frac{V^3}{3} - W + \sigma,$$
(19)
$$\frac{dW}{dt} = \phi\left(V + \alpha - \beta W\right),$$
(20)

where V is the transmembrane potential, W is the recovery variable, σ is the stimulating current, and ϕ, β, and α are positive constants. The operators of the V Eq. (19) are $L_V = d/dt$, $R\{V, W\} = -V + W$, $NV = V^3/3$, and $g(t) = \sigma$, while the W Eq. (20) has $L_W = d/dt$, $R\{V, W\} = -\phi(V - \beta W)$, $g(t) = \phi\alpha$, and no $N$ term. Let $V(t) = \sum_{n=0}^{\infty}V_n$, $W(t) = \sum_{n=0}^{\infty}W_n$, and $NV = \frac{1}{3}\sum_{n=0}^{\infty}A_n\{V^3\}$. When we apply $L^{-1}$ to Eqs. (19) and (20), we respectively get

$$V(t) = V(0) + \sigma t + L^{-1}\sum_{n=0}^{\infty}\left(V_n - \frac{A_n\{V^3\}}{3} - W_n\right),$$
(21)
$$W(t) = W(0) + \phi\alpha t + \phi L^{-1}\sum_{n=0}^{\infty}\left(V_n - \beta W_n\right).$$
(22)

The recursion schemes are

$$V_0 = V(0),\quad V_1 = \sigma t + L^{-1}\left(V_0 - \frac{A_0\{V^3\}}{3} - W_0\right),\quad V_{n+2} = L^{-1}\left(V_{n+1} - \frac{A_{n+1}\{V^3\}}{3} - W_{n+1}\right),$$
(23)

and

$$W_0 = W(0),\quad W_1 = \phi\alpha t + \phi L^{-1}\left(V_0 - \beta W_0\right),\quad W_{n+2} = \phi L^{-1}\left(V_{n+1} - \beta W_{n+1}\right).$$
(24)

Except for a few highly restricted cases, e.g., Demina and Kudryashov’s meromorphic solutions,40 we have no a priori solution, so we must verify our results with the error $e = F\{\varphi_m(t)\} - g(t)$, which comes from Eq. (1). If we write our m-term partial sums as $v_m(t) = \sum_{n=0}^{m-1}V_n(t)$ and $w_m(t) = \sum_{n=0}^{m-1}W_n(t)$, then the infinity norms of the error $\|e\|_\infty$ from the V Eq. (19) and the W Eq. (20) (denoted by subscript) are, respectively, given as

$$\|e\|_v = \max_{t_k \le t \le t_{k+1}}\left|\frac{dv_m(t)}{dt} - v_m(t) + \frac{v_m^3(t)}{3} + w_m(t) - \sigma\right|,$$
(25)
$$\|e\|_w = \max_{t_k \le t \le t_{k+1}}\left|\frac{dw_m(t)}{dt} - \phi\left(v_m(t) + \alpha - \beta w_m(t)\right)\right|.$$
(26)

Figure 1 shows $\|e\|_v$ for the primitive element k = 0, whose $r^{(0)} = 1.8$, for λ = 0.05, 0.1, and 0.2; as expected, our solution converges more quickly as λ is decreased. Note that, by inspection, the error decreases linearly on this logarithmic scale, which means that the rates of convergence are exponential. The $\|e\|_w$ (not shown) looks the same as $\|e\|_v$ for all intervals, which is expected since their interval lengths $h_k$ correspond to the local $r^{(k)}$.

FIG. 1.

Maximum error from Eq. (25) as a function of m over the interval size h0 = λr(0) where r(0) = 1.8.


Let us now use our solution to construct splines (shown in Fig. 2) depicting an action potential using $v_{10}(t)$ and $w_{10}(t)$ and varying the interval length. We take the inputs (listed in Table I) from FitzHugh’s work.39 The bottom panel shows the full dynamic behavior of the two state variables. Right above it, we zoom into a sample of the $v_{10}(t)$ spline to show the varying interval lengths. Above that, we zoom in even further to focus on just one element, $v_{10}^{(10)}(t)$, to show how quickly the curve converges as we add terms.
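The residual check of Eq. (25) is likewise simple to script. The sketch below (our Python/SymPy illustration, not the study’s Mathematica code; the order m = 6, the element [0, 0.1], and the 11-point grid are illustrative choices) builds one element from the recursions (23) and (24) with the Table I inputs and evaluates $\|e\|_v$:

```python
import sympy as sp

t, lam = sp.symbols("t lambda")
sigma, alpha, beta, phi = 0.35, 0.7, 0.8, 0.08   # Table I inputs

def A(n, terms):
    """Adomian polynomial A_n{V^3} from the definition in Eq. (5)."""
    expansion = (sum(u * lam**k for k, u in enumerate(terms)))**3
    return sp.expand(sp.diff(expansion, lam, n).subs(lam, 0) / sp.factorial(n))

Linv = lambda f: sp.integrate(f, (t, 0, t))      # L^{-1}: definite integral from 0 to t

V, W = [sp.Float(-1.1994)], [sp.Float(-0.6243)]  # V_0, W_0 from the initial conditions
V.append(sigma*t + Linv(V[0] - A(0, V)/3 - W[0]))
W.append(phi*alpha*t + phi*Linv(V[0] - beta*W[0]))
m = 6
for n in range(m - 2):                           # V_{n+2}, W_{n+2} of Eqs. (23) and (24)
    V.append(Linv(V[n + 1] - A(n + 1, V)/3 - W[n + 1]))
    W.append(phi*Linv(V[n + 1] - beta*W[n + 1]))

vm, wm = sum(V), sum(W)
residual = sp.diff(vm, t) - vm + vm**3/3 + wm - sigma    # integrand of Eq. (25)
e_v = max(abs(float(residual.subs(t, 0.01*j))) for j in range(11))
print(e_v)   # small on the element [0, 0.1]: the series converges rapidly
```

Because the residual’s leading unmatched term is of order $t^{m-1}$, shrinking the element or raising the order drives $\|e\|_v$ down exponentially, as in Fig. 1.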

FIG. 2.

(Bottom) The splines of v10(t) and w10(t) over the course of one action potential. The modeling inputs are listed in Table I. (Middle) A zoomed in view of v10(t) where the light blue vertical lines separate the spline’s individual elements. A few of the intervals are labeled. (Top) A zoomed in view of v10(10)(t) where 1, 2, and 3 term partial sum solutions are also plotted. The vertical axes’ units are mV.

TABLE I.

FitzHugh–Nagumo modeling inputs.

Parameter                 Value
Stimulating current, σ    0.35
Interval dilate, λ        0.25
Solution order, m         10
Tunable constants
  α                       0.7
  β                       0.8
  ϕ                       0.08
Initial conditions
  V(t)|t=0                −1.1994
  W(t)|t=0                −0.6243

Hindmarsh and Rose first modified the two state variable FitzHugh–Nagumo model of action potentials to allow for long interspike intervals to resemble the behavior of real neurons more closely.41 Later, they added a third state variable, which allows for a qualitative description of bursting behavior,12 which we now consider. The model is given as

$$\frac{dX}{dt} = Y - aX^3 + bX^2 - Z + I,$$
(27)
$$\frac{dY}{dt} = c - dX^2 - Y,$$
(28)
$$\frac{dZ}{dt} = r\left(s\left(X - X_R\right) - Z\right),$$
(29)

where X is the transmembrane potential, Y reflects the spiking Na+ and K+ currents, Z is an adaptive current that terminates them, thus creating an isolated burst of spikes, I is the stimulating current, and a, b, c, d, r, s, and $X_R$ are constants. In all three of Eqs. (27)–(29), we recognize $L = d/dt$, and thus, $L^{-1} = \int_0^t dt'$. We decompose the variables $X(t) = \sum_{n=0}^{\infty}X_n$, $Y(t) = \sum_{n=0}^{\infty}Y_n$, and $Z(t) = \sum_{n=0}^{\infty}Z_n$ as well as the nonlinear terms in Eq. (27), $NX = \sum_{n=0}^{\infty}\left(A_n\{X^3\} + A_n\{X^2\}\right)$, and in Eq. (28), $NX = \sum_{n=0}^{\infty}A_n\{X^2\}$, and we apply $L^{-1}$ throughout Eqs. (27)–(29) to obtain

$$X(t) = \sum_{n=0}^{\infty}X_n = X(0) + It + L^{-1}\sum_{n=0}^{\infty}\left(Y_n - aA_n\{X^3\} + bA_n\{X^2\} - Z_n\right),$$
(30)
$$Y(t) = \sum_{n=0}^{\infty}Y_n = Y(0) + ct - L^{-1}\sum_{n=0}^{\infty}\left(dA_n\{X^2\} + Y_n\right),$$
(31)
$$Z(t) = \sum_{n=0}^{\infty}Z_n = Z(0) - rsX_R t + L^{-1}\sum_{n=0}^{\infty}\left(rsX_n - rZ_n\right).$$
(32)

The recursion schemes are

$$X_0 = X(0),\quad X_1 = It + L^{-1}\left(Y_0 - aA_0\{X^3\} + bA_0\{X^2\} - Z_0\right),\quad X_{n+2} = L^{-1}\left(Y_{n+1} - aA_{n+1}\{X^3\} + bA_{n+1}\{X^2\} - Z_{n+1}\right),$$
(33)
$$Y_0 = Y(0),\quad Y_1 = ct - L^{-1}\left(dA_0\{X^2\} + Y_0\right),\quad Y_{n+2} = -L^{-1}\left(dA_{n+1}\{X^2\} + Y_{n+1}\right),$$
(34)

and

$$Z_0 = Z(0),\quad Z_1 = -rsX_R t + L^{-1}\left(rsX_0 - rZ_0\right),\quad Z_{n+2} = L^{-1}\left(rsX_{n+1} - rZ_{n+1}\right).$$
(35)

Writing our m-term partial sums as $x_m(t) = \sum_{n=0}^{m-1}X_n(t)$, $y_m(t) = \sum_{n=0}^{m-1}Y_n(t)$, and $z_m(t) = \sum_{n=0}^{m-1}Z_n(t)$ means our $\|e\|_\infty$ values are written as

$$\|e\|_x = \max_{t_k \le t \le t_{k+1}}\left|\frac{dx_m(t)}{dt} - y_m(t) + ax_m^3(t) - bx_m^2(t) + z_m(t) - I\right|,$$
(36)
$$\|e\|_y = \max_{t_k \le t \le t_{k+1}}\left|\frac{dy_m(t)}{dt} - c + dx_m^2(t) + y_m(t)\right|,$$
(37)
$$\|e\|_z = \max_{t_k \le t \le t_{k+1}}\left|\frac{dz_m(t)}{dt} - r\left(s\left(x_m(t) - X_R\right) - z_m(t)\right)\right|.$$
(38)

Figure 3 shows $\|e\|_x$ at intervals k = 266 and 342, whose radii of convergence $r^{(k)}$ are 3.5 and 0.57, respectively. Once again, by inspection, we can recognize exponential convergence rates from the (approximately) linear trends on the log scale. Furthermore, we see faster convergence in the interval with the smaller radius. The $\|e\|_y$ and $\|e\|_z$ are similar (not shown).

FIG. 3.

Maximum error for the partial sum solutions from Eq. (36) as a function of m over the intervals k = 266 and 342, whose radii of convergence are 3.5 and 0.57, respectively.


Using inputs from the work by Hindmarsh and Rose12 (listed in Table II), we now construct splines (plotted in Fig. 4) that show a burst of spikes. This time, however, we hold the interval length fixed at h = 0.1 and allow the order of each element to vary, maintaining a truncation error at or below a set tolerance of ɛtol. In the top panel of Fig. 4, we zoom in to a section of the ym(t) spline to see individual elements labeled with their respective orders.
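The fixed-h, variable-order strategy can be sketched in a few lines. For transparency, we illustrate it here (our own sketch) on the scalar test problem $du/dt = u^2$ rather than on the Hindmarsh–Rose system itself; the tolerance, order cap, and grid density are illustrative choices:

```python
# Fixed-h, variable-order element construction: raise the order m until the
# truncation estimate eps = max |phi_{m+1} - phi_m| over the element falls
# below eps_tol.  Test problem: u' = u^2, whose exact solution is 1/(1 - t).
def partial_sum(c, s, m):
    """m-term ADM partial sum for u' = u^2, u(0) = c: sum of c^(n+1) s^n."""
    return sum(c**(n + 1) * s**n for n in range(m))

def adaptive_order(c, h, eps_tol=1e-9, m_max=60):
    grid = [h * j / 20 for j in range(21)]           # sample points on the element
    for m in range(2, m_max):
        eps = max(abs(partial_sum(c, s, m + 1) - partial_sum(c, s, m)) for s in grid)
        if eps <= eps_tol:
            return m
    return m_max

# Elements that start at larger u have a smaller local radius of convergence
# (r = 1/u for this test problem), so they need more terms for the same tolerance.
print(adaptive_order(1.0, 0.1), adaptive_order(5.0, 0.1))
```

This is the same criterion used for the elements of Fig. 4, where the element orders grow on the steep rising phases of each spike.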

TABLE II.

Hindmarsh–Rose modeling inputs.

Parameter                 Value
Stimulating current, I    1.5
Interval length, h        0.1
Tolerance, ɛtol           0.001
Tunable constants
  a                       1
  b                       3
  c                       1
  d                       5
  r                       0.0021
  s                       4
  XR                      −8/5
Initial conditions
  X(t)|t=0                −1.20049
  Y(t)|t=0                −6.27014
  Z(t)|t=0                1.27797
FIG. 4.

(Bottom) The splines of xm(t), ym(t), and zm(t) over the course of a three spike burst. The spline interval length h = 0.1, and the truncation error tolerance is ɛtol = 0.001. (Top) A zoomed in view of ym(t). Since the time interval is fixed, each element corresponds to an easily calculated time. The vertical axes’ units are mV.


Having shown how to construct spline solutions with the ADM (A-splines), we now validate this method through comparison to the method of collocation with cardinal basis splines (B-splines). For convenience, we will solve a version of the FitzHugh–Nagumo model (inputs in Table III),

$$\frac{dv}{dt} = v - v^3 - w + \sigma,\qquad \frac{dw}{dt} = \phi\left(v + \alpha - \beta w\right),$$
(39)
for which we have the analytic solutions in closed form,40 

v(t)=710+15e25t+110tanh45,
(40)
w(t)=410+25σ7+2e25t+tanh45e45t10257+2e25t+tanh45.
(41)
TABLE III.

Meromorphic FitzHugh–Nagumo modeling inputs.

Parameter                 Value
Stimulating current, σ    0.35
Solution order, m         3
Interval, T               1
Knot spacing, δ           3h
Tunable constants
  α                       βσ
  β
  ϕ                       3/25
Initial conditions
  V(t)|t=0                3/10
  W(t)|t=0                720+1525

We adopt the approach given by Pitolli;42 however, for a thorough treatment of B-spline theory and applications, please see the works by Schumaker43 and Prenter.44 Briefly, we partition the time interval [0, T] with uniformly spaced knots, $\triangle_\delta \coloneqq \{l\delta,\; 0 \le l \le N\}$, where $N = T/\delta$ and δ is the dilate for the knot time interval. We use spline curves for our approximating functions,

$$v_\delta(t) = \sum_{l=0}^{N}\nu_l B_l^m(t),\qquad w_\delta(t) = \sum_{l=0}^{N}\omega_l B_l^m(t),$$
(42)

where $\nu_l$ and $\omega_l$ are unknown coefficients and $B_l^m(t)$ are the B-spline functions, piecewise polynomials of order m interleaved at the knots, recursively defined as45

$$B_l^0(t) = \begin{cases}1, & t_l \le t < t_{l+1},\\ 0, & \text{otherwise},\end{cases}$$
(43)
$$B_l^m(t) = \frac{t - t_l}{t_{l+m} - t_l}B_l^{m-1}(t) + \frac{t_{l+m+1} - t}{t_{l+m+1} - t_{l+1}}B_{l+1}^{m-1}(t),\qquad m \ge 1.$$
(44)

Our knot vector is equally spaced with t = 0 having multiplicity m + 1.46 The solution then involves solving for $\nu_l$ and $\omega_l$ through the method of collocation. We do this by substituting Eq. (42) into Eq. (39) to get

$$\nu_l\frac{d}{dt}B_l^m(t) = \nu_l B_l^m(t) - \left(\nu_l B_l^m(t)\right)^3 - \omega_l B_l^m(t) + \sigma,\qquad \omega_l\frac{d}{dt}B_l^m(t) = \phi\,\nu_l B_l^m(t) - \phi\beta\,\omega_l B_l^m(t) + \phi\alpha$$
(45)

(summation over l ∈ [0, N] is understood) and evaluating our B-splines at the collocation points $\triangle_\tau \coloneqq \{t_p = p\tau,\; 0 \le p \le P\}$, where $P = T/\tau \ge N$. We solve this nonlinear system of 2(P + 1) equations in 2(m + N) unknowns by the Gauss–Newton method in Mathematica 13.1 (Wolfram Research, Inc., Champaign, IL). A detailed description of the method can be found in Ref. 42.

For our numeric example, we look at a small interval [0, 1] and use cubic splines, i.e., m = 3. All inputs are summarized in Table III. In this example, we want to show that the ADM is at least as accurate as the B-spline collocation method by comparing the infinity norms of the errors, which we define as $e_B = \|v - v_\delta\|_\infty$ and $e_A = \|v - v_m\|_\infty$, at various discretizations h = δ/3. We list in Table IV the errors for an increasingly fine mesh. As expected, the error goes down as the time steps shorten. At all levels, the A-spline is around an order of magnitude more accurate than the B-spline approximation.

TABLE IV.

The maximum error for the B- and A-splines.

h       eB              eA
1/6     3.7751 × 10⁻⁶   5.0039 × 10⁻⁷
1/12    3.3928 × 10⁻⁷   6.8181 × 10⁻⁸
1/24    2.3888 × 10⁻⁸   8.8124 × 10⁻⁹
1/48    1.5699 × 10⁻⁹   1.1175 × 10⁻⁹

Finally, as a further check of the ADM, following the same steps as in Sec. III B but this time with the meromorphic formulation and inputs, the first few terms of our decomposition series are

vm(t)=310t1010+t24010+,wm(t)=35100410+3t251021t250010+,
(46)

revealing the Taylor series expansions for our known solutions.

The solutions we presented here represent a new way for neural engineers to approach key biophysical models. They are just a tiny fraction of the analytic possibilities with the ADM. For instance, besides analytic continuation, the solutions’ validity can also be expanded through, e.g., diagonal Padé approximants47 or iterated Shanks transforms.48 For the purposes of MREIT, generalizations worth considering include partial differential equations49 for impulse propagation, state variables describing multi-physics phenomena like the neural bilayer sonophore,50 and arbitrary derivative order51 to model non-ohmic conductivities in tissue.52

This work was supported in part by a grant from the NIH (Grant No. R01NS077004) to R.J.S.

The authors have no conflicts to disclose.

B. L. Schwartz: Conceptualization (lead); Data curation (lead); Formal analysis (lead); Investigation (equal); Software (equal); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (lead). S. M. Brown: Conceptualization (supporting); Formal analysis (supporting); Software (supporting); Writing – original draft (supporting). J. Muthuswamy: Conceptualization (supporting); Investigation (supporting); Methodology (supporting); Project administration (supporting); Validation (supporting). R. J. Sadleir: Conceptualization (supporting); Formal analysis (supporting); Resources (lead); Software (equal); Supervision (lead); Validation (supporting); Writing – review & editing (equal).

The data that support the findings of this study are available within the article.

Here is some of our Mathematica code, written in the Wolfram language, which allows for 2D input, Greek letters, and symbolic operations. First is the code for calculating the Adomian polynomials defined by Eq. (5) for Nv = v³.

AP[m_] := Module[{F, i},
  f[v_] := v^3; (*define the nonlinear operation*)
  F[0] = f[Sum[u[k][t] S^k, {k, 0, m}]];
  For[i = 0, i <= m, i++,
    A[i] = Expand[F[i] /. S -> 0]; (*A_i is the S-free part of F_i*)
    F[i + 1] = D[F[i], S]/(i + 1)]]

AP[n] (*calculate the polynomials A[0] through A[n]*)

Finally, we present our code for the elements of a basic A-spline solution for the FitzHugh–Nagumo model given in Eqs. (19) and (20). For simplicity, the K intervals each have a constant duration h = T and polynomial order m = M. Here, A[n] denotes the nth Adomian polynomial of v³ for the current element, generated as in the code above.

v[0, 0][t_] = -1.19941; (*first terms from initial conditions*)
w[0, 0][t_] = -0.6243;
For[k = 1, k <= K, k++, (*intervals loop*)
  v[k - 1, 1][t_] = σ t + Integrate[v[k - 1, 0][τ] - A[0]/3 - w[k - 1, 0][τ], {τ, 0, t}];
  w[k - 1, 1][t_] = ϕ α t + ϕ Integrate[v[k - 1, 0][τ] - β w[k - 1, 0][τ], {τ, 0, t}];
  For[m = 2, m <= M, m++, (*series loop*)
    v[k - 1, m][t_] = Integrate[v[k - 1, m - 1][τ] - A[m - 1]/3 - w[k - 1, m - 1][τ], {τ, 0, t}];
    w[k - 1, m][t_] = ϕ Integrate[v[k - 1, m - 1][τ] - β w[k - 1, m - 1][τ], {τ, 0, t}];];
  v[k, 0][t_] = Sum[v[k - 1, m][T], {m, 0, M}]; (*end of each interval seeds the next*)
  w[k, 0][t_] = Sum[w[k - 1, m][T], {m, 0, M}];]

1. D. T. Wymer, K. P. Patel, W. F. Burke III, and V. K. Bhatia, “Phase-contrast MRI: Physics, techniques, and clinical applications,” Radiographics 40, 122–140 (2020).
2. M. C. Murphy, J. Huston III, and R. L. Ehman, “MR elastography of the brain and its application in neurological diseases,” Neuroimage 187, 176–183 (2019).
3. J. K. Seo and E. J. Woo, “Magnetic resonance electrical impedance tomography (MREIT),” SIAM Rev. 53, 40–68 (2011).
4. R. J. Sadleir, S. C. Grant, and E. J. Woo, “Can high-field MREIT be used to directly detect neural activity? Theoretical considerations,” NeuroImage 52, 205–216 (2010).
5. R. J. Sadleir, F. Fu, C. Falgas, S. Holland, M. Boggess, S. C. Grant, and E. J. Woo, “Direct detection of neural activity in vitro using magnetic resonance electrical impedance tomography (MREIT),” NeuroImage 161, 104–119 (2017).
6. W. Rall and H. Agmon-Snir, “Cable theory for dendritic neurons,” in Methods in Neuronal Modeling: From Ions to Networks, 2nd ed., edited by C. Koch and I. Segev (MIT Press, Cambridge, 1988), pp. 27–92.
7. B. L. Schwartz, M. Chauhan, and R. J. Sadleir, “Analytic modeling of neural tissue: I. A spherical bidomain,” J. Math. Neurosci. 6, 1–20 (2016).
8. G. B. Ermentrout and N. Kopell, “Parabolic bursting in an excitable system coupled with a slow oscillation,” SIAM J. Appl. Math. 46, 233–253 (1986).
9. N. Kopell and G. B. Ermentrout, “Subcellular oscillations and bursting,” Math. Biosci. 78, 265–291 (1986).
10. R. Fitzhugh, “Impulses and physiological states in theoretical models of nerve membrane,” Biophys. J. 1, 445–466 (1961).
11. J. Nagumo, S. Arimoto, and S. Yoshizawa, “An active pulse transmission line simulating nerve axon,” Proc. IRE 50, 2061–2070 (1962).
12. J. L. Hindmarsh and R. M. Rose, “A model of neuronal bursting using three coupled first order differential equations,” Proc. R. Soc. B 221, 87–102 (1984).
13. T. Öziş and A. Yıldırım, “Comparison between Adomian’s method and He’s homotopy perturbation method,” Comput. Math. Appl. 56, 1216–1224 (2008).
14. G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method (Kluwer Academic Publishers, Norwell, MA, 1994), pp. 6–20.
15. G. Adomian, “A review of the decomposition method and some recent results for nonlinear equations,” Comput. Math. Appl. 21, 101–127 (1991).
16. R. Rach, “A bibliography of the theory and applications of the Adomian decomposition method, 1961–2011,” Kybernetes 41, 1087–1148 (2012).
17. G. Adomian, G. E. Adomian, and R. E. Bellman, “Biological system interactions,” Proc. Natl. Acad. Sci. U. S. A. 81, 2938–2940 (1984).
18. G. Adomian and G. E. Adomian, “Cellular systems and aging models,” Comput. Math. Appl. 11, 283–291 (1985).
19. G. Adomian and G. E. Adomian, “Solution of the Marchuk model of infectious disease and immune response,” Math. Modell. 7, 803–807 (1986).
20. G. Adomian, “Solving the mathematical models of neurosciences and medicine,” Math. Comput. Simul. 40, 107–114 (1995).
21. G. Adomian, M. Witten, and G. E. Adomian, “A new approach to the solution of neurological models: Application to the Hodgkin–Huxley and the FitzHugh–Nagumo equations,” in Stochastic Processes and Their Applications, edited by M. J. Beckmann, M. N. Gopalan, and R. Subramanian (Springer, Berlin, 1991), pp. 99–113.
22. R. Rach, “A convenient computational form for the Adomian polynomials,” J. Math. Anal. Appl. 102, 415–419 (1984).
23. A.-M. Wazwaz, “A reliable modification of Adomian decomposition method,” Appl. Math. Comput. 102, 77–86 (1999).
24. Y. Cherruault and G. Adomian, “Decomposition methods: A new proof of convergence,” Math. Comput. Modell. 18, 103–106 (1993).
25. P. Dennery and A. Krzywicki, Mathematics for Physicists (Dover Publications, Inc., Mineola, NY, 1996), pp. 45–47.
26. Y. Cherruault, G. Adomian, K. Abbaoui, and R. Rach, “Further remarks on convergence of decomposition method,” Int. J. BioMed. Comput. 38, 89–93 (1995).
27. A. Rèpaci, “Nonlinear dynamical systems: On the accuracy of Adomian’s decomposition method,” Appl. Math. Lett. 3, 35–39 (1990).
28. G. Adomian, R. Rach, and R. Meyers, “Numerical algorithms and decomposition,” Comput. Math. Appl. 22, 57–61 (1991).
29. G. Adomian, R. C. Rach, and R. E. Meyers, “Numerical integration, analytic continuation, and decomposition,” Appl. Math. Comput. 88, 95–116 (1997).
30. J.-S. Duan, R. Rach, A.-M. Wazwaz, T. Chaolu, and Z. Wang, “A new modified Adomian decomposition method and its multistage form for solving nonlinear boundary value problems with Robin boundary conditions,” Appl. Math. Modell. 37, 8687–8708 (2013).
31. A. Ben-Menahem and S. J. Singh, Seismic Waves and Sources (Springer-Verlag, New York, NY, 1981), pp. 44–88.
32. W. Gellert, H. Küstner, M. Hellwich, and H. Kästner, The VNR Concise Encyclopedia of Mathematics (Van Nostrand Reinhold Company, New York, NY, 1975), pp. 483–484.
33. J.-S. Duan and R. Rach, “New higher-order numerical one-step methods based on the Adomian and modified decomposition methods,” Appl. Math. Comput. 218, 2810–2828 (2011).
34. B. Ermentrout, “Type I membranes, phase resetting curves, and synchrony,” Neural Comput. 8, 979–1001 (1996).
35. C. Börgers and N. Kopell, “Synchronization in networks of excitatory and inhibitory neurons with sparse, random connectivity,” Neural Comput. 15, 509–538 (2003).
36. B. S. Gutkin and G. B. Ermentrout, “Dynamics of membrane excitability determine interspike interval variability: A link between spike generation mechanisms and cortical spike train statistics,” Neural Comput. 10, 1047–1065 (1998).
37. J. Spanier and K. B. Oldham, An Atlas of Functions (Hemisphere Publishing Corporation, New York, NY, 1987), pp. 319–341.
38. R. W. Hamming, Numerical Methods for Scientists and Engineers (Dover Publications, Inc., Mineola, NY, 1986), pp. 189–191.
39. R. FitzHugh, “Mathematical models of excitation and propagation in nerve,” in Biological Engineering, edited by H. P. Schwan (McGraw-Hill Book Company, New York, NY, 1969), pp. 1–85.
40. M. V. Demina and N. A. Kudryashov, “Meromorphic solutions in the FitzHugh–Nagumo model,” Appl. Math. Lett. 82, 18–23 (2018).
41. J. L. Hindmarsh and R. M. Rose, “A model of the nerve impulse using two first-order differential equations,” Nature 296, 162–164 (1982).
42. F. Pitolli, “A collocation method for the numerical solution of nonlinear fractional dynamical systems,” Algorithms 12, 156 (2019).
43. L. L. Schumaker, Spline Functions: Basic Theory (John Wiley & Sons, New York, 1981).
44. P. M. Prenter, Splines and Variational Methods (John Wiley & Sons, New York, 1975).
45. C. de Boor, A Practical Guide to Splines, revised ed. (Springer-Verlag, New York, 2001).
46. F. Pitolli, “Optimal B-spline bases for the numerical solution of fractional differential problems,” Axioms 7, 46 (2018).
47. M. Dehghan, A. Hamidi, and M. Shakourifar, “The solution of coupled Burgers’ equations using Adomian–Pade technique,” Appl. Math. Comput. 189, 1034–1047 (2007).
48. S. K. Asok, “An application of the Adomian decomposition method to the transient behavior of a model biochemical reaction,” J. Math. Anal. Appl. 132, 232–245 (1988).
49. G. Adomian and R. Rach, “Equality of partial solutions in the decomposition method for linear or nonlinear partial differential equations,” Comput. Math. Appl. 19, 9–12 (1990).
50. M. Plaksin, S. Shoham, and E. Kimmel, “Intramembrane cavitation as a predictive bio-piezoelectric mechanism for ultrasonic brain stimulation,” Phys. Rev. X 4, 011004 (2014).
51. J.-S. Duan, T. Chaolu, R. Rach, and L. Lu, “The Adomian decomposition method with convergence acceleration techniques for nonlinear fractional differential equations,” Comput. Math. Appl. 66, 728–736 (2013).
52. A. Bueno-Orovio, D. Kay, V. Grau, B. Rodriguez, and K. Burrage, “Fractional diffusion models of cardiac electrical propagation: Role of structural heterogeneity in dispersion of repolarization,” J. R. Soc., Interface 11, 20140352 (2014).