The reconstruction of electrical excitation patterns through the unobserved depth of the tissue is essential to realizing the potential of computational models in cardiac medicine. We have utilized experimental optical-mapping recordings of cardiac electrical excitation on the epicardial and endocardial surfaces of a canine ventricle as observations directing a local ensemble transform Kalman filter data assimilation scheme. We demonstrate that the inclusion of explicit information about the stimulation protocol can marginally improve the confidence of the ensemble reconstruction and the reliability of the assimilation over time. Likewise, we consider the efficacy of stochastic modeling additions to the assimilation scheme in the context of experimentally derived observation sets. Approximation error is addressed at both the observation and modeling stages through the uncertainty of observations and the specification of the model used in the assimilation ensemble. We find that perturbative modifications to the observations have marginal to deleterious effects on the accuracy and robustness of the state reconstruction. Furthermore, we find that incorporating additional information from the observations into the model itself (in the case of stimulus and stochastic currents) yields a marginal improvement in the reconstruction accuracy over a fully autonomous model, while complicating the model itself and thus introducing potential for new types of model errors. That the inclusion of explicit modeling information has negligible to negative effects on the reconstruction implies the need for new avenues for optimization of data assimilation schemes applied to cardiac electrical excitation.

Typical cardiac electrophysiology experiments record excitation data with optical cameras that are tuned to measure fluorescence of voltage-sensitive dyes and are only able to observe excitation patterns on the surfaces of tissues. We know from theoretical and computational evidence that the details of excitation patterns through the depth of tissues are important for prediction and control, while those observable on the surface are a very small portion of the overall excitation structure. These patterns may be pro-arrhythmic due to structures that are unobservable near the surfaces of the tissue, which can pin organizing features of the dynamics and thus simplify the excitation patterns. As the interiors of the tissue are inaccessible using traditional experimental means, we turn to reconstructing the interior using computational methods—specifically, data assimilation.

Experiments in cardiac excitation dynamics provide only a limited window into a very complex system. Micro-electrode recordings trade spatial resolution for temporal resolution, electrocardiograms eschew cellular detail for organ-level holism, and optical-mapping experiments focus on the electrical excitations near the exposed surfaces of a tissue whose internal dynamics may be much more complex. Data assimilation (DA) promises to address the difficulties of the latter; with a reliable dynamical model and sufficiently precise observations of the experimental state, the techniques of data assimilation aim to reconstruct the unobserved interior dynamics of the tissue, subject to the observations at the surface(s). Importantly, the techniques of data assimilation are robust with respect to uncertainty, as uncertainty in the truth state, in the observation values, and in the specificity of model predictions is ever-present when working with real experimental data.

Data assimilation has co-developed with the weather and climate forecasting communities for whom the tools are often specialized and well-tuned. Ensemble methods, in particular, are relative newcomers to the application1 but are favored due to their simpler relationship to nonlinear dynamical models. Numerous studies have developed new data assimilation methods and compared them to existing approaches;2–7 developed innovations for improving the performance8,9 and robustness of results from data assimilation;10 defined fair scoring rules for the rigorous definition of “improving” data assimilation results;11–14 and even considered such subtleties as the connection between the observations, assimilation filter, and the model.15–18 

Significant efforts in the assimilation of cardiac excitation patterns in three-dimensional tissue have focused on the foundational aspects of the problem—data distributions, uncertainty quantification, model error, and robustness with respect to noise19–21—in the context of synthetic observations generated by a model, rather than from an experiment. In previous work,22 we have found a sensitivity to some parameter uncertainties in the accuracy of local ensemble transform Kalman filter (LETKF) reconstructions, as well as improved robustness and accuracy with stochastic formulations of the underlying model. This work considers the efficacy of our reconstruction schemes in the case of experimental data typical of the field: periodic pacing of a tissue, with excitations captured by optical mapping on the epicardial and endocardial surfaces of the tissue.

Throughout this paper, we will be using a dataset previously published in experimental studies.23–26 The dataset corresponds to activation patterns subject to a localized, periodic, current stimulation in canine ventricular tissues. We will investigate the use of both autonomous and non-autonomous models for the reconstruction of these experimental observations, and the effect of stochastic model interventions on the accuracy of the reconstructions. Finally, we investigate the effect of additional synthetic observations generated in the interior of the tissue on the robustness of the assimilation and accuracy of the reconstruction of experimental dynamics.

In this section, we outline the mathematical details of the dynamical model, the determination of reasonable model parameters, the generation of observations from the experimental data, and, finally, the local ensemble transform Kalman filter (LETKF) used in the assimilation of the observations for the reconstruction of the experimental dynamics.

We use the Fenton–Karma three-variable model27 for the electrophysiology, with anisotropic diffusion controlled by a fiber orientation that rotates within the x , y-plane with depth, denoted by angle with respect to the x-axis, θ ( z ). The evolution equations are
$\partial_t u = \nabla \cdot \left[ D(\mathbf{x})\, \nabla u \right] - I_{\mathrm{fi}}(u; v) - I_{\mathrm{so}}(u) - I_{\mathrm{si}}(u; w) + I_{\mathrm{stim}}(t, \mathbf{x}),$  (1)
$\partial_t v = \Theta(u_c - u)\, \dfrac{1 - v}{\tau_v^-(u)} - \Theta(u - u_c)\, \dfrac{v}{\tau_v^+},$  (2)
$\partial_t w = \Theta(u_c - u)\, \dfrac{1 - w}{\tau_w^-} - \Theta(u - u_c)\, \dfrac{w}{\tau_w^+},$  (3)
in conjunction with the no-flux boundary conditions, $\hat{n} \cdot D(\mathbf{x}) \nabla u = 0$. The currents and the explicit relaxation scale for v are given by
$I_{\mathrm{fi}}(u; v) = -\dfrac{v}{\tau_d}\, \Theta(u - u_c)\, (1 - u)(u - u_c),$  (4)
$I_{\mathrm{so}}(u) = \dfrac{u}{\tau_o}\, \Theta(u_c - u) + \dfrac{1}{\tau_r}\, \Theta(u - u_c),$  (5)
$I_{\mathrm{si}}(u; w) = -\dfrac{w}{2 \tau_{\mathrm{si}}}\, \left[ 1 + \tanh\!\left( k (u - u_c^{\mathrm{si}}) \right) \right],$  (6)
$\tau_v^-(u) = \Theta(u - u_v)\, \tau_{v1}^- + \Theta(u_v - u)\, \tau_{v2}^-,$  (7)
where Θ(·) denotes the Heaviside step function, save for the explicit stimulation current I_stim(t, x), which is identically zero for autonomous dynamics and is specified in particular assimilation experiments (see Sec. III A 3). The diffusive spatial coupling is implemented using a finite-difference scheme that follows the original model description.27 The diffusive coupling of the tissue is subject to a fiber orientation that rotates through the tissue depth S, θ(z) = θ_0 + δθ(1/2 + z/S), where 0 ≤ z ≤ S; θ_0 and δθ are specified throughout this paper. We use D_∥ = 0.001 cm²/ms and D_⊥ = 0.0002 cm²/ms, for a (diffusion) anisotropy ratio D_∥/D_⊥ = 5, which is relatively small compared to some ratios measured in cardiac tissue.28,29
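
To make the geometry concrete, the following sketch (in Python; the grid resolution, θ_0, and δθ below are illustrative placeholders rather than values from this study) assembles the in-plane diffusion tensor implied by the rotating fiber angle θ(z) and the stated D_∥ and D_⊥.

```python
import numpy as np

# Diffusion constants from the text; the remaining values are illustrative only.
D_par, D_perp = 0.001, 0.0002        # cm^2/ms, along and across the fibers
theta0, dtheta = 0.0, np.pi / 3.0    # base angle and total rotation (assumed)
S, n_layers = 1.0, 24                # tissue depth in cm and depth layers (assumed)

def fiber_angle(z):
    """Fiber angle theta(z) rotating linearly with depth, as written in the text."""
    return theta0 + dtheta * (0.5 + z / S)

def diffusion_tensor(z):
    """In-plane 2x2 diffusion tensor for fibers at angle theta(z) to the x axis."""
    c, s = np.cos(fiber_angle(z)), np.sin(fiber_angle(z))
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([D_par, D_perp]) @ R.T

# One tensor per depth layer; the anisotropy ratio D_par / D_perp = 5 is preserved.
D_layers = np.stack([diffusion_tensor(z) for z in np.linspace(0.0, S, n_layers)])
print(D_layers.shape, D_par / D_perp)
```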

We use the model parameters defined in Ref. 26 for this model and experimental dataset. These parameters are quite different from other parameter sets used with this model for physically relevant dynamics; cf. Ref. 27. However, for the present investigation, we have opted to consider the smallest model errors to better determine the effect of changing the model and observation properties.

In all experiments, we use a Rush–Larson time-stepping scheme for the gating variables v and w combined with a forward-Euler method for the transmembrane potential u.30 When we incorporate stochastic effects (cf. Sec. III A 4), we adapt the forward-Euler method for the transmembrane potential u to an Euler–Maruyama method. In this work, we do not incorporate stochastic terms in the dynamics of the gating variables, as previous work demonstrated that similar ensemble inflation effectiveness can be achieved with only stochastic currents for this model,22 which makes the implementation simpler and more performant.
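
As a minimal sketch of this hybrid update (assuming a single generic gate with steady state g_inf(u) and time constant tau_g(u); the callables below are placeholders, not the Fenton–Karma gate definitions), the gating variable is advanced with a Rush–Larsen exponential step while u takes a forward-Euler step that becomes Euler–Maruyama when a stochastic current of amplitude σ_u is switched on.

```python
import numpy as np

def hybrid_step(u, g, total_current, g_inf, tau_g, dt, sigma_u=0.0, rng=None):
    """One step: Rush-Larsen for a gating variable g, (stochastic) Euler for u.

    total_current, g_inf, and tau_g are user-supplied callables standing in for
    the model-specific currents and gate kinetics; they are not the published
    Fenton-Karma definitions.
    """
    # Rush-Larsen: exact exponential relaxation of g toward g_inf(u) over dt.
    g_new = g_inf(u) + (g - g_inf(u)) * np.exp(-dt / tau_g(u))

    # Forward Euler for u; with sigma_u > 0 this becomes an Euler-Maruyama step.
    du = -total_current(u, g) * dt
    if sigma_u > 0.0 and rng is not None:
        du += sigma_u * np.sqrt(dt) * rng.standard_normal(np.shape(u))
    return u + du, g_new

# Tiny usage example with placeholder kinetics (illustrative values only).
rng = np.random.default_rng(0)
u, g = np.full(4, 0.2), np.ones(4)
u, g = hybrid_step(
    u, g,
    total_current=lambda u, g: u * (u - 0.13) * (u - 1.0) * g,
    g_inf=lambda u: np.where(u < 0.13, 1.0, 0.0),
    tau_g=lambda u: 10.0,
    dt=0.1, sigma_u=0.04, rng=rng)
print(u, g)
```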

The experimental data comprise optical recordings of fluorescence (cf. Refs. 23 and 31 for details of the original preparation) corresponding to transmembrane potential, which have been smoothed and denoised. The observations (y^o) are a vector of tuples (y_i^o) containing a tag designating the observed field variable, coordinates in the domain ([x_i, y_i, z_i]), the observed field variable at these coordinates [e.g., u_i = u(x_i, y_i, z_i)], and the uncertainty of the observation value (η_i) for each observation. The observed field values are also normalized such that 0 ≤ u_i ≤ 1. As the observations only cover the transmembrane potential u, we may use y_i^o and u_i interchangeably to denote the values of the observations. The recordings cover response excitations for canine ventricle tissues subject to a periodic stimulus applied from a single position, across a large range of stimulus periods or basic cycle lengths (BCLs).

For the purposes of this study, we have focused on a dataset with a stimulation period of 118 ± 2 ms, as the patterns generated with this short stimulation period are the most irregular, spatially and temporally, and thus present the most challenging reconstruction task. An example of the recordings from this dataset is shown in Fig. 1, detailing (a) the epicardial and (b) the endocardial surface recordings, along with (c) temporal traces of the recording at the center-points of the tissue surfaces. The figure likewise shows an inset 3 cm square used for the generation of the observation sets for the assimilation experiments, as well as the boundaries for the epicardial and endocardial surfaces. The recording features a very fast wavefront propagating from the upper left to the lower right of the figure, nearly simultaneously on both surfaces [cf. Fig. 1(c)], from a stimulus current applied to the base of the endocardium.23

FIG. 1.

Example of the experimental data used for the reconstruction experiments. (a) Epicardial snapshot and (b) endocardial snapshot at time t = 5734 ms, with (c) time traces of the recording at positions labeled by the + (epicardial) and × (endocardial) markers. The thin black curve in (a) and (b) designates the boundary of the opposing surface recording, and the thick black curve designates the boundary of their mutual intersection. The inset dashed square has side-length 3 cm, aligned to the horizontal and vertical axes and shows the region selected to extract observations for assimilation.


Over time, these data display the alternans typical of cardiac tissue excitation paced faster than the normal rhythm, namely, variation in the duration and amplitude of the action potential. We generate observations from the dataset by constructing a cubic interpolant over space with linear interpolation over time, for both the epicardial and endocardial datasets, and sampling this interpolant at pre-specified coordinates corresponding to the inset domain shown as dashed squares in Figs. 1(a) and 1(b). In the case of synthesized observations generated for the interior of the domain (cf. Sec. III B 1), we linearly interpolate through the depth between the values of the observations at the projected positions on the surfaces to determine the value of the synthesized observation in the interior. This ensures that we are propagating information from the surfaces into the mid-depth.
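
The sampling step can be sketched as follows, assuming each surface recording is available as a (time, x, y) array; the helper name, grid, and stand-in data below are illustrative rather than the actual pipeline (cubic interpolation over space via SciPy, linear interpolation over time).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def sample_surface(frames, times, xs, ys, t_query, pts_xy):
    """Cubic interpolation over space with linear interpolation over time.

    frames: array of shape (nt, nx, ny) of normalized fluorescence on one surface.
    pts_xy: (n_obs, 2) coordinates inside the inset observation square.
    """
    i = np.searchsorted(times, t_query) - 1
    i = np.clip(i, 0, len(times) - 2)
    w = (t_query - times[i]) / (times[i + 1] - times[i])

    # Cubic spatial interpolants for the two bracketing frames (SciPy >= 1.9).
    f0 = RegularGridInterpolator((xs, ys), frames[i], method="cubic")
    f1 = RegularGridInterpolator((xs, ys), frames[i + 1], method="cubic")
    return (1.0 - w) * f0(pts_xy) + w * f1(pts_xy)

# Synthetic stand-in data to demonstrate the call signature.
times = np.arange(0.0, 20.0, 2.0)                       # 2 ms frames
xs = ys = np.linspace(0.0, 3.0, 64)                     # 3 cm inset square
frames = np.random.default_rng(1).random((len(times), 64, 64))
pts = np.column_stack([np.linspace(0.2, 2.8, 10), np.linspace(0.2, 2.8, 10)])
print(sample_surface(frames, times, xs, ys, t_query=5.0, pts_xy=pts).shape)
```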

Each y_i^o is likewise assigned an uncertainty η_i, which we set to η̲ = 0.05, or 5% of the action potential amplitude range, by default. The value η̲ = 0.05 was estimated from the full-width at half-maximum of the distribution of u^o(t) − u^o(t − τ) for the dataset, with τ the assimilation interval. See Fig. S2 for a detailed motivation of this computation. In the case of synthetic observations interpolated into the mid-depth of the tissue (cf. Sec. III B 1), we assign a larger uncertainty, η_i = η̄ = 0.5, to limit the perturbative effect of these synthetic observations on the assimilated state, since observations are weighted by the inverse of the observation covariance, whose diagonal elements are R_ii = η_i²; i.e., they contain no new information from the experimental data other than a highly uncertain suggestion that the mid-plane is excited or unexcited at a particular place and time. In the case of encoding uncertainty about the precise form of the wavefronts (cf. Sec. III B 2), we perform a similar modification of the observation uncertainty using a nonlinear function of the observation field value, η(y_i^o), which is bounded by [η̲, η̄] and peaks at η̄ = 0.5 for y_i^o = 0.5.

The ensemble transform Kalman filter (ETKF) finds best-fit linear combinations of the prior states to match a set of observations in a quadratic sense. The LETKF32 transforms an ensemble representing the prior covariance and mean in accordance with the local observations of the state to effect an ensemble representing the posterior covariance and mean. We consider the classical Kalman filter optimization functional33
$J(\mathbf{x}) = \left( \mathbf{x} - \bar{\mathbf{x}}^b \right)^{\top} \left( \mathbf{P}^b \right)^{-1} \left( \mathbf{x} - \bar{\mathbf{x}}^b \right) + \left[ \mathbf{y}^o - H(\mathbf{x}) \right]^{\top} \mathbf{R}^{-1} \left[ \mathbf{y}^o - H(\mathbf{x}) \right],$  (8)
where the prior ensemble of size M is the sum of the ensemble deviation matrix (X^b) and mean (x̄^b), x^b = X^b + x̄^b [1, …, 1], and similarly for the decomposition of the posterior ensemble, x^a. The covariance of the prior ensemble is described by the matrix P^b = (M − 1)^{-1} X^b (X^b)^⊤. The observations of the physical state are collected in y^o ∈ ℝ^O, while the observation operator H : ℝ^N → ℝ^O transforms from the state space to the observation space, and the matrix R ∈ ℝ^{O×O} describes the covariance of the observations. The observation operator maps the state vector to the observations, with additive noise per observation according to the uncertainty η_i. The minimizing solution to (8) defines the analysis ensemble mean, x̄^a, which best approximates the truth state as observed by the data y^o and constituted from the background ensemble x^b.

The ETKF applies the analysis update in the span of the background ensemble states. The analysis ensemble covariance in ensemble space is expressed as P̃^a = [(M − 1) I/ρ + (Y^b)^⊤ R^{-1} Y^b]^{-1}, where P̃^a ∈ ℝ^{M×M}, and the analysis ensemble mean is x̄^a = x̄^b + X^b w̄^a. The scalar factor ρ ≥ 1 is the (uniform, constant) multiplicative covariance inflation factor. The mean weights are determined by w̄^a = P̃^a (Y^b)^⊤ R^{-1} (y^o − ȳ^b). The weights of the analysis ensemble deviations are similarly expressed in the basis of the background deviations, W^a = [(M − 1) P̃^a]^{1/2}, and x^a = x̄^b + X^b (w̄^a + W^a). This deviates from the error subspace transform Kalman filter (ESTKF),2 which performs the transformation in the (M − 1)-dimensional subspace corresponding to the span of X^b.
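
The analysis update can be written compactly in ensemble space; the sketch below follows the expressions above for a single global (unlocalized) analysis with a diagonal R, using hypothetical array names rather than the actual implementation.

```python
import numpy as np

def etkf_analysis(xb, yb, yo, r_diag, rho=1.05):
    """ETKF update in the ensemble subspace (single analysis, no localization).

    xb: (N, M) background ensemble; yb: (O, M) background observation-space ensemble;
    yo: (O,) observations; r_diag: (O,) observation error variances; rho: inflation.
    """
    M = xb.shape[1]
    xb_mean, yb_mean = xb.mean(axis=1), yb.mean(axis=1)
    Xb = xb - xb_mean[:, None]              # state deviations
    Yb = yb - yb_mean[:, None]              # observation-space deviations
    Rinv_Yb = Yb / r_diag[:, None]          # R^{-1} Y^b for diagonal R

    # Analysis covariance in ensemble space and its symmetric square root.
    Pa_tilde = np.linalg.inv((M - 1) * np.eye(M) / rho + Yb.T @ Rinv_Yb)
    w_mean = Pa_tilde @ Rinv_Yb.T @ (yo - yb_mean)
    evals, evecs = np.linalg.eigh((M - 1) * Pa_tilde)
    Wa = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T

    # Analysis ensemble: mean update plus transformed deviations.
    return xb_mean[:, None] + Xb @ (w_mean[:, None] + Wa)

# Toy usage: 8 members, 5 state variables, 3 observed components.
rng = np.random.default_rng(2)
xb = rng.normal(size=(5, 8))
H = np.zeros((3, 5)); H[[0, 1, 2], [0, 2, 4]] = 1.0
xa = etkf_analysis(xb, H @ xb, rng.normal(size=3), np.full(3, 0.05**2))
print(xa.shape)
```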

The LETKF extends this analysis update in the space of the background ensemble locally—in the sense of observation influence and covariance. The inclusion of observation and covariance localization applies the preceding analysis to each element of the state vector, where the inclusion of observations for a particular state vector element is determined by the localization procedure. In this work, we use an isotropic localization weighting controlled by multiplying the background covariance matrix P^b by a decaying fifth-order function, which goes to zero at a distance of 2√(10/3) σ_o, where we have retained σ_o = 6 grid units (a cutoff of ≈21.9 spatial units, or 0.33 cm) for consistency with our earlier work. We refer the reader to Refs. 20, 22, and 32 for further mathematical details of the implementation of the localization procedure.
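
A common choice for such a compactly supported, decaying fifth-order weighting is the Gaspari–Cohn function; the sketch below implements that standard form as an illustration (we do not claim it is character-for-character the function used here), with the cutoff at 2√(10/3) σ_o.

```python
import numpy as np

def gaspari_cohn(r, sigma_o):
    """Fifth-order piecewise-rational Gaspari-Cohn localization weight.

    r: distance(s) between a state point and an observation; sigma_o: length scale.
    The weight decays smoothly and is exactly zero beyond 2*sqrt(10/3)*sigma_o.
    """
    c = np.sqrt(10.0 / 3.0) * sigma_o          # half-width of the support
    x = np.abs(np.asarray(r, dtype=float)) / c
    w = np.zeros_like(x)

    inner = x <= 1.0
    xi = x[inner]
    w[inner] = (((-0.25 * xi + 0.5) * xi + 0.625) * xi - 5.0 / 3.0) * xi**2 + 1.0

    outer = (x > 1.0) & (x < 2.0)
    xo = x[outer]
    w[outer] = ((((xo / 12.0 - 0.5) * xo + 0.625) * xo + 5.0 / 3.0) * xo
                - 5.0) * xo + 4.0 - 2.0 / (3.0 * xo)
    return w

sigma_o = 6.0                                   # grid units, as in the text
print(gaspari_cohn([0.0, sigma_o, 2 * np.sqrt(10 / 3) * sigma_o], sigma_o))
```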

Finally, in lieu of an optimized multiplicative and additive inflation scheme, we used multiplicative inflation only. The LETKF utilizes multiplicative inflation of the analysis ensemble deviations (X^a = x^a − x̄^a) with a fixed value of ρ = 1.05. We did not utilize additive inflation in this work, despite previous results that showed it to be effective,22 in the expectation that our stochastic approach would similarly affect the ensemble while requiring substantially less disk space.

We report results for several data-assimilation experiments. In all experiments, we use an observation dataset derived from optical-mapping recordings of periodic stimulation of a canine ventricle and a fixed model integration time (equivalently, assimilation interval) of T_a = 2 ms based on the data recording frequency. The assimilation outputs are the background (x^b) and analysis ensembles (x^a), their respective means, x̄ = M^{-1} Σ_{m=1}^{M} x_m, where m = 1, …, M indexes the ensemble, and their ensemble spreads,
$\mathrm{SPRD} = \dfrac{1}{M - 1} \sum_{m=1}^{M} \left( \mathbf{x}_m - \bar{\mathbf{x}} \right) \circ \left( \mathbf{x}_m - \bar{\mathbf{x}} \right),$  (9)
which is the element-wise variance of the ensemble state vectors (∘ denotes the element-wise product). We have used here the vector-of-vectors notation for the ensembles, x = [x_1, …, x_M], but we may equivalently make the spatial (ijk), variable (l), and ensemble (m) indices explicit in a five-index array, x = [x_{ijklm}], where the indices (i, j, k, l) are linearized for the vector form.
We report the root-mean-square error (RMSE) of the reconstruction as the point-wise error of the reconstructed state, comparing the ensemble mean to the observations y^o(t),
$\mathrm{RMSE}(t) = \sqrt{ \dfrac{1}{O} \sum_{i=1}^{O} \left[ \bar{x}_{\iota(i)}(t) - y_i^o(t) \right]^2 },$  (10)
where y_i^o(t) is the ith observation and the index ι(i) maps observation index i to the corresponding element of the state vector. Thus, the RMSE summation is restricted in our experiments to the surfaces of the domain. Depth information is available from SPRD; by reducing over indices of the state vector corresponding to the length and width, we find a measure of the spread through the depth of the domain for each state variable,
$\mathrm{SPRD}_l(t, z_k) = \dfrac{1}{N_x N_y} \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \mathrm{SPRD}_{ijkl}(t),$  (11)
where we have implicitly reshaped SPRD from a vector to a four-dimensional array with explicit spatial ( i , j , k) and variable ( l) indices. In the experiments, this scalar field is computed for the background ensemble x b and u field elements of the SPRD b over time and reported as SPRD u b ( t , z ).
The definition of (10) restricts its evaluation to the subset of the domain where there are observations, which tells us nothing about the consistency of the interior reconstructions. Canonical reporting of reconstruction consistency for unknown states is handled by proper scoring rules, like the continuous ranked probability score ( CRPS ) of the ensemble, which can be formulated solely in terms of the ensemble states without (necessary) recourse to observations or a “truth” state.34 We refer to the kernel representation of the adjusted CRPS ,35 
$\mathrm{CRPS}(\mathbf{y}) = \dfrac{1}{M} \sum_{m=1}^{M} \left\lVert \mathbf{x}_m^b - \mathbf{y} \right\rVert - \dfrac{1}{2 M (M - 1)} \sum_{m=1}^{M} \sum_{n=1}^{M} \left\lVert \mathbf{x}_m^b - \mathbf{x}_n^b \right\rVert,$  (12)
with y = y^o or y = x̄^a, giving CRPS^o or CRPS^a, respectively, which has desirable properties for exchangeable ensembles with i.i.d. members irrespective of ensemble size M. In the above, the norm ‖·‖ corresponds to the 1-norm over the indices of the vector y. When y references the observations y^o, then the summation is over the observations (and thus surface-restricted); when y references the analysis ensemble mean x̄^a, the sum measures the consistency of the entire background ensemble with respect to the best approximation of the observations according to the LETKF, x̄^a, over the entire state. We encourage the reader to peruse the supplementary material for a closer look at the representative dynamics of the reconstruction over time.
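
A direct transcription of the kernel form in Eq. (12), using the 1-norm over components and the fair 1/[2M(M − 1)] pairwise scaling, is sketched below with illustrative array shapes.

```python
import numpy as np

def fair_crps(ensemble, y):
    """Adjusted (fair) kernel CRPS of an ensemble against a reference vector y.

    ensemble: (M, N) array of M member state (or observation-space) vectors.
    y: (N,) reference vector, e.g., the observations y^o or the analysis mean.
    Uses the 1-norm over components, matching Eq. (12).
    """
    M = ensemble.shape[0]
    # First term: mean absolute deviation of members from the reference.
    term1 = np.abs(ensemble - y).sum(axis=1).mean()
    # Second term: mean pairwise member distance with the fair 1/(M-1) scaling.
    diffs = np.abs(ensemble[:, None, :] - ensemble[None, :, :]).sum(axis=2)
    term2 = diffs.sum() / (2.0 * M * (M - 1))
    return term1 - term2

rng = np.random.default_rng(3)
ens = rng.normal(size=(16, 100))     # 16 members, 100 observed components
print(fair_crps(ens, rng.normal(size=100)))
```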

We have run several experiments for the reconstruction of cardiac excitation patterns. In the simplest case, we compute the baseline accuracy of the reconstruction procedure by incorporating information about the stimulus pattern in the observations into the model dynamics, without any assimilation procedure. Additionally, we consider a strong assimilation process without this stimulus information to assess the importance of modeling the non-autonomous forcing inherent to the observation data for the reconstruction accuracy. Finally, we incorporate the stimulus forcing into the model dynamics in concert with a strong assimilation procedure; this approach maximizes the information supplied to the assimilation and provides a clear measure of the importance of non-autonomous model information to the reconstruction reliability.

1. Free-run

To begin, we ask to what degree the assimilation procedure may be responsible for the reconstruction of the state compared with the averaging procedure of the ensemble and the synchronization of the ensemble states with the stimulus cadence. We assess only the uncontrolled dynamics of the ensemble members with the stimulus forcing (as, in the absence of stimulus forcing, the ensemble member states all return to quiescence). To this end, we report the reconstruction results with the assimilation update excised—equivalently, the observations are not permitted to affect any of the state variables—where the model is stimulated periodically to match the stimulus observed in the observations.

In the free-run setting, we perform no assimilation by explicitly filtering all observations from the analysis computation, so that the LETKF reduces to a scaling-permutation operator (as the ensemble index is not guaranteed to be preserved), and x̄^a = x̄^b. The free-run experiment serves as an effective method of distinguishing the improvement due to the LETKF and the stimulus model in the reconstruction over long assimilation times.

Additionally, to avoid the trivial solution, we include a stimulus current forcing in the dynamical model for the free-run experiment. Thus, in this setting, we are testing the accuracy of reproducing the observations with only the model and stimulus information, without recourse to the assimilation method. We constructed an explicit stimulus current for the ensemble model using the observation data. We determined the period T, temporal offset t 0, and centroid position x 0 for the stimulus current assuming a model of the form,
$I_{\mathrm{stim}}(t, \mathbf{x}) = I_0\, \Pi(t)\, \exp\!\left( - \dfrac{\lVert \mathbf{x} - \mathbf{x}_0 \rVert^2}{\kappa^2} \right),$  (13)
which affects the dynamics of u(t, x) by entering the evolution equations in addition to the autonomous currents I_so, I_fi, and I_si present in the dynamical model. The stimulus current is temporally modulated by the "top-hat" function Π(t), which turns the stimulus current "on" for a short interval τ_on (Π(t) = 1) and otherwise leaves it "off," I_stim = 0 (Π(t) = 0).

The experiment that produced the dataset used in this study uses a periodic biphasic stimulation, applied outside of the assimilated domain at the base of the endocardium. The stimulus current model, Eq. (13), is monophasic and independent of depth, which is not the same as the experimental stimulus current. Rather, our stimulus current model approximates the effect of the experimental stimulus on the observations within the assimilated domain; i.e., it is a data-driven model, and only approximates the observed effects. As we only observe the effect of the experimental current some distance from its direct point of application, we effectively filter the experimental stimulus current through the tissue observations to construct our stimulus current model. Throughout, we set I_0 = 0.1 ms^{-1}, with the duration of the stimulus set to τ_on = 10 ms, and set the spatial scale of the stimulus current to (κ/h_x)² = 5 × 10^4 to match that observed in the observation data. The period between successive stimuli is 118 ms, likewise matching the observation data. Tests were performed to assess the impact of the stimulus current parameters on the state, which were found to fall into two classes: (a) sub-critical (non-excitatory) and (b) super-critical (excitatory) stimuli, which showed no meaningful variation for physically realistic values within each class. This critical excitation phenomenon is well understood in the one-dimensional case.36–38
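
A sketch of a stimulus current consistent with Eq. (13) and the description above is given below; the Gaussian spatial footprint, the values chosen for x_0 and κ, and the function names are assumptions for illustration rather than the exact expression used in the study.

```python
import numpy as np

def stimulus_current(t, xy, I0=0.1, t0=0.0, period=118.0, tau_on=10.0,
                     x0=(0.0, 0.0), kappa=0.05):
    """Periodic, spatially localized stimulus current (assumed Gaussian footprint).

    t: time in ms; xy: (n, 2) positions in cm; I0 in 1/ms; kappa is a spatial
    scale (illustrative value); period and tau_on follow the text.
    """
    # Top-hat Pi(t): "on" for tau_on ms at the start of every period.
    on = ((t - t0) % period) < tau_on
    # Gaussian spatial footprint centered at x0 (illustrative choice of profile).
    r2 = np.sum((np.asarray(xy) - np.asarray(x0))**2, axis=-1)
    return I0 * float(on) * np.exp(-r2 / kappa**2)

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
print(stimulus_current(5.0, pts))   # during a stimulus
print(stimulus_current(50.0, pts))  # between stimuli -> zeros
```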

In Fig. 2, we show the CRPS^{a,o}(t), the RMSE^b(t), and the spread through the depth of the tissue SPRD_u^b(t, z) for the free-run experiment using the Barone et al. parameter set. The CRPS^o(t) and CRPS^a(t) indicate significant disagreement between the background ensemble and the observations, as well as in comparison to the analysis ensemble mean. The RMSE^b(t) indicates that the ensemble does not accurately reconstruct the surface dynamics in the absence of assimilation, as expected. Furthermore, the variability of RMSE^b(t) decreases over time as the forcing of the stimulus current synchronizes the large-scale features of the ensemble states and the information about the ensemble initialization is slowly lost. Likewise, the SPRD_u^b(t, z) is nearly invariant with respect to z, indicating that we correctly restore the z-invariance symmetry of the ensemble states when the observations are not present in the reconstruction procedure.

FIG. 2.

Free-run experiment results, depicting the (a) CRPS a ( t ) (line) and CRPS o ( t ) (band), (b) RMSE b ( t ) (line) plus and minus one standard deviation (band), and (c) SPRD b ( t , z ) (color) for the reconstruction using the Barone et al. parameter set.


These results serve as a baseline for comparing with the experiments to follow. For the free-run experiment, we have used our unmodified observation set and standard ensemble initialization, with the stimulus model and absent any assimilation. The ensemble means therefore do not correspond to a reconstruction, as there are no guiding observations, and so we do not show the mean fields.

2. Autonomous model

The first experiment uses an autonomous, deterministic, cardiac excitation model for the ensemble members in the assimilation of these observations to consider the significance of model error arising from model parameters which may or may not effectively reproduce the observed action potentials. Instead of adapting the model to every stimulus protocol, it is dramatically simpler to use the properties of the Kalman filter to match the periodicity of the stimulation without explicit recourse to a modeled stimulus pattern. This approach has the obvious limitation that we have rejected a potential source of additional information and introduced latency into the assimilation, as the ensemble can only react to the next stimulus after it has already appeared in the observations. Given this restriction, this experiment shows the worst-case, “low-information,” estimate of the state of the system using the LETKF assimilation infrastructure. We refer to this experiment as “autonomous.”

Figure 3 shows the (a) CRPS^{a,o}(t), (b) RMSE^b(t), and (c) SPRD_u^b(t, z) metrics over time for the reconstruction using the Fenton–Karma model with the Barone et al. parameter set. The assimilation does a very good job of reconstructing the surfaces of the state to match the observations after the initial transient, which arises from the phase mismatch of the initial ensemble and the observations, producing high initial error. The CRPS^o(t) is strongly correlated with the RMSE^b(t), as expected, and CRPS^a(t) remains small over long time scales, growing slowly and oscillating due to the underlying excitation period. This behavior reflects the lag inherent to this assimilation experiment: the information of the next excitation must propagate through the full LETKF before affecting the background ensemble. The dominant contribution to the RMSE^b(t) corresponds to a peak occurring during the full excitation of the domain, with fast relaxation of the error as the excitation amplitude dissipates. One interpretation of this contribution is that, on the surfaces, the LETKF struggles with the uncertainty of the wavefront and the deviation of the recorded wavefront shape from those generated by the nonlinear model for the background ensemble (the model lags the observed wavefront, by construction) but manages to reproduce the waveback accurately. Indeed, the innovation of the LETKF is large during the propagation of the wavefront—the LETKF is correcting a slow conduction velocity in the model to match the excitation pattern observed in the data—and relatively smaller during the propagation of the waveback. The spread for this parameter set begins small and quickly grows in the interior, representing the dominant contribution of the uncertainty of the state of the system, as expected. As the information about the excitation propagates from the surfaces toward the interior of the domain, it is clear that the LETKF is unable to constrain the interior of the solution using only the surface observations. That said, the maximum of the spread at z/d = 1/2 saturates after t ≈ 250 ms, indicating that beyond this time the spread reflects the true uncertainty of the interior.

FIG. 3.

Autonomous experiment results, depicting the (a) CRPS a ( t ) (line) and CRPS o ( t ) (band), (b) RMSE b ( t ) (line) plus and minus one standard deviation (band), and (c) SPRD b ( t , z ) (color) for the reconstruction using the Barone et al. parameter set.


On the surfaces, after an excitation, the next action potential (AP) is driven by the assimilation perturbing the state in an appropriate region to cause a new propagating wave that matches those in the observations, as expected. Additionally, as the different parameter sets generate APs with different wave speeds (c ∝ √(D/τ_d)), the assimilation continually corrects the propagation directly ahead of the wavefront. When the wave is slightly too slow, this correction is effective as the region preceding the wavefront is expected to be excitable; thus, small perturbations ahead of the wave create a new front that connects to the existing lagged wavefront before the next assimilation time. When the wave is slightly too fast, this correction is ineffective as reducing the value of u by a small amount on the wavefront does not significantly hinder the propagation—doing so requires a larger, quenching, perturbation related to the gap between the middle unstable nullcline of I_fi and the rest state. This asymmetry in effective control is apparent on the surfaces because of the presence of observations, and likewise means uncontrolled wave speed errors propagate in the interior.

The phenomenology of the assimilation dynamics is exemplified by Fig. 4, which shows z-slices of the background ensemble mean x̄^b at fixed depths for a sequence of sample times from the autonomous experiment. Near the beginning of a new excitation in the observations, cf. Fig. 4(a), the ensemble mean is a very good approximation of the distribution of the excitation at the surfaces (z/d = 0.0, 1.0), while the interior slices are not reasonable approximations of an interpolating function through the depth. Rather, the interior is too uncertain to match the propagation of a wave across the domain, which we expect to pass uniformly in z, in time with the surfaces.

FIG. 4.

Slices of the u-field of the observations ( u o) and the background ensemble mean ( u ¯ b) at depths z / d = 0.0 , 0.2 , 0.5 , 0.8 , 1.0 and times t = 700 , 708 , 716 , 756 , 772 ms for (a)–(e), respectively.


Furthermore, these slices highlight a significant contribution to the model error: the timing of the front in the ensemble is lagged compared to the observation data. This behavior is due to the interaction of three phenomena related to the periodic timing of the observations and how it synchronizes the ensemble dynamics.

The first and most significant is the slower conduction velocity of the model compared to the observation data. Despite being manually tuned to the data,26 the parameter set is unable to adequately match the observed front speed without a significant overestimate of the tissue conductivity (cf. Ref. 26; their conductivities are 9 × D and 15 × D),39 and thus keeping the excitation wave in the ensemble in time with the observations becomes a significant, ongoing correction for the LETKF. Previous in silico studies have found the LETKF to adequately address a variety of systemic model errors in 3D, including fiber angle rotation, tissue conductivity, and the explicit time scales of the model.20–22 Ahead of the wavefront, the LETKF can readily excite the leading edge of quiescent tissue in the ensemble member states, speeding up the front to match the excitation propagation in the observations, provided there are ensemble members that are already excited in that leading region. Due to the slower conduction velocity, this scenario rarely occurs, as the ensemble members synchronize their phase with the observations through the driving of the LETKF.

The second contributor to the model error is the lag inherent to the propagation into the interior of the domain for the autonomous model. The interior dynamics are lagged by at least τ_⊥ = d/(2c_⊥), the time it takes information on the surfaces to propagate to the interior for a transverse wave speed of c_⊥. Thus, even when the surfaces correctly estimate the distribution of the excitation in the observations, the interior distribution is dominated by the dynamics from several assimilation steps ago. These discrepancies are systematic errors in the reconstruction that are largely invisible to the RMSE^b metric, as it relies on the surface observations, but should affect CRPS^a if the ensemble members differ in the interior, and likewise SPRD_u^b(t, z).

The third contributor is the lag associated with the autonomous model dynamics due to the lack of stimulus information. During quiescence, the next excitation wave is observed at time t; these observations are used in the LETKF and their influence appears in the analysis ensemble states at time t. These analysis ensemble states form new initial conditions for the dynamical model, and the effect of this perturbation due to the initial observations appears in the next background ensemble, at t + T a, thereby introducing a lag between the background ensemble and the observations of no less than the assimilation interval time. If the threshold for excitation is not met for the analysis ensemble state, then the lag between the observed excitation stimulus and the resulting wave will be larger, which is typically the case as the initial observations of an excitation are, by definition, small, and likely sub-threshold.

Finally, we must note several features of the observations and reconstruction that will become relevant in later results. The observations smoothly interpolate from quiescent (u(t, x, y, z) ≈ 0) to fully excited (u(t, x, y, z) ≈ 1) across the wavefront, cf. Fig. 4(a). Such blunted fronts represent a configuration not achievable with the model at hand—this model exhibits a threshold, such that the wavefront is sharply defined; i.e., u(t, x, y, z) is very likely to be close to 0 or close to 1, and the probability that u(t, x, y, z) is near 0.5 along the wavefront is exceptionally small. The observations suggest that, at this particular time, u^o has a significant probability of being close to 0.5. This tension between the spatially smooth data and the sharp features of the model forms a significant impediment to reliable reconstruction.

Additionally, the dynamics of the model with the chosen parameter set—despite being manually tuned for this dataset—tend to overshoot the action potential amplitude (APA) of the observations, cf. Figs. 4(a)–4(c). This behavior is caused, first, by the analysis step of the LETKF over-correcting the ensemble states. The effect then persists due to the relatively short model integration interval (T_a = 2 ms), which is not sufficient for the state to relax onto the slow manifold. This effect may also be observed in the free-run experiment due to the relatively high frequency of the stimulus current (not shown). Notably, the effect of the high-frequency pacing stimulus current and the overshoot of the analysis step are not additive.

3. Explicit stimulus current modeling

This experiment considers the effect of including a non-autonomous, spatially localized, explicit stimulation current in the dynamics of the model that is tuned both spatially and temporally to qualitatively match the stimulation observed in the experimental dataset, and the potential for catastrophic synchronization of the ensemble in the presence of this explicit modeling choice. The stimulus current used is the same as that used in the free-run results (Sec. III A 1). We refer to this experiment as “stimulus.”

The results for the model are shown in Fig. 5, where the reconstruction is largely insensitive to the inclusion of the stimulus current. Several small changes appear in the RMSE^b(t) compared to the autonomous results. The reconstruction error exhibits smaller peaks during the excitation phase, where the stimulus current triggers the excitation wave ahead of the lag associated with the observation–LETKF–model path in the autonomous experiment. After t = 1.5 s, drift in the timing of the stimuli leads to an early stimulus, which is corrected by the LETKF on the surfaces, effectively constraining the timing error. These early stimuli appear as small (∼0.1) increases in the RMSE^b(t) from the baseline outside of the dominant contributions from the excitation wave in Fig. 5(a)—the short-lived effect on the surface error is also an indication that the reconstruction procedure is robust with respect to errors in the specification of the stimulus period, a prime concern due to the timing-related contributions covered in the autonomous reconstruction.

FIG. 5.

Stimulus experiment results, depicting the (a) CRPS a ( t ) (line) and CRPS o ( t ) (band), (b) RMSE b ( t ) (line) plus and minus one standard deviation (band), and (c) SPRD b ( t , z ) (color) for reconstruction using the Barone et al. parameter set.


In Fig. 6, we show the (a) RMSE^b(t) for the autonomous, stimulus, and free-run experiments over time (left), and the marginal density plots of the surface errors (right), for the reconstruction using the Fenton–Karma model with the Barone et al. parameter set. The improvement of the RMSE^b over the free-run experiment is significant, but the surface errors for the autonomous and stimulus experiments are comparable, both overall and over time. The most significant correction of the RMSE^b(t) for the stimulus results over the autonomous model is a shorter initial transient, as the first excitation (t ≈ 125 ms) reaches a lower amplitude error on the surfaces, and thus the ensemble synchronizes to the true stimulus rhythm more quickly. The RMSE^b makes it clear that the inclusion of the LETKF effectively lowers the expectation value of the surface error overall. For both the autonomous and stimulus experiments, the surface error is nearly identical—it appears that the inclusion of stimulus information into the model dynamics has a minimal impact on the surface reconstruction error over long times, save for the minor advantages noted above. The depth-averaged SPRD_u^b(t) for the free-run experiment is larger than in either the autonomous or stimulus experiment, reflecting the low confidence of the reconstruction in the absence of the LETKF. Indeed, the depth-averaged SPRD_u^b(t) is nearly identical for the autonomous and stimulus results, with a subtle bias in the latter toward larger variation over time, cf. Fig. 6(b).

FIG. 6.

(a) RMSE b ( t ) and (b) depth-average SPRD u b ( t ) for the free-run, autonomous, and stimulus experiments, with the distribution of (right, top) surface errors and (right, bottom) ensemble spread over time.


In addition to the reconstruction error of the field, it is worth considering the error in physiologically relevant quantities derived from the state, e.g., those inherited from the structure of the action potential, including the action potential duration (APD) and amplitude (APA). We report the temporal trace of the autonomous reconstruction for a randomly selected position in the observations in Fig. 7(a). After an initial transient, which lasts less than two BCLs (< 250 ms), the reconstruction at this point is excellent, with minor errors appearing in the apex of the action potential and, thus, contributing to a poor estimate of the true APA, cf. Fig. 7(b). The distribution of APD between the observation and reconstruction traces matches exceptionally well, as expected, because the predominant error in the reconstruction appears far from the threshold value (u_thr = 0.1). Similar calculations for all LETKF-driven reconstructions are included in the supplementary material, cf. Fig. S1.
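
For reference, the threshold-crossing computation of APD and APA from a single voltage trace can be sketched as follows (a simple linear-interpolation crossing detector with u_thr = 0.1; the synthetic trace is only for demonstration, and this is not the exact analysis script used here).

```python
import numpy as np

def apd_apa(t, u, u_thr=0.1):
    """Action potential durations and amplitudes from a single voltage trace.

    Upstroke/repolarization times are found by linear interpolation of the
    threshold crossings of u(t); APA is the maximum of u within each AP.
    """
    above = u > u_thr
    ups = np.flatnonzero(~above[:-1] & above[1:])
    downs = np.flatnonzero(above[:-1] & ~above[1:])
    apds, apas = [], []
    for i in ups:
        later = downs[downs > i]
        if later.size == 0:
            break
        j = later[0]
        # Interpolated crossing times on either side of the action potential.
        t_up = np.interp(u_thr, [u[i], u[i + 1]], [t[i], t[i + 1]])
        t_dn = np.interp(u_thr, [u[j + 1], u[j]], [t[j + 1], t[j]])
        apds.append(t_dn - t_up)
        apas.append(u[i:j + 2].max())
    return np.array(apds), np.array(apas)

# Synthetic paced trace for demonstration only.
t = np.arange(0.0, 600.0, 2.0)
u = np.clip(np.sin(2 * np.pi * t / 118.0), 0.0, None) ** 0.5
print(apd_apa(t, u))
```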

FIG. 7.

(a) Temporal trace for the autonomous experiment and associated observations, sampled on the epicardium ( z / d = 0) at ( x , y ) = ( 0.825 , 0.42 ) cm and (b) corresponding action potential durations (APD) and amplitudes (APA) ( u thr = 0.1).


4. Stochastic current model effects

This experiment incorporates stochasticity into the model to investigate its effect on the accuracy of state reconstruction in the context of uncontrolled model errors. We have previously introduced these stochastic model effects for the assimilation of observations generated from model state sequences, subject to model uncertainty.22 These techniques improved the reconstruction of the state over the LETKF in concert with a deterministic ensemble model, occasionally by significant margins; their introduction for assimilating experimental observations may help recover the state dynamics when the model error is due to the model being unable to perfectly reproduce experimental data. For this work, we will introduce a stochastic current (SDE-u) and stochastically selected time-scale parameters (SMP-τ) into the model. The SDE-u formulation introduces a new stochastic current I_sto in the role of I_stim in the dynamics of the voltage variable u in (1),
$\mathrm{d}u = \left[ \nabla \cdot \left( D(\mathbf{x})\, \nabla u \right) - I_{\mathrm{fi}} - I_{\mathrm{so}} - I_{\mathrm{si}} \right] \mathrm{d}t + \sigma_u\, \mathrm{d}W_t,$  (14)
while the SMP- τ affects the selection of all the explicit time-scales of the model,
$\tau \sim \mathcal{N}\!\left( \bar{\tau},\, (\sigma_p \bar{\tau})^2 \right),$  (15)
where τ ¯ is the baseline parameter value. We refer to this experiment as “stochastic.” We consider fixed standard deviations ( σ u = 0.04 and σ p = 0.04) for both stochastic effects.
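
A minimal sketch of the two stochastic ingredients is given below, assuming (consistent with the Euler–Maruyama discretization mentioned in Sec. II) that the stochastic current contributes σ_u √Δt ξ per step and that each ensemble member draws its time scales once around the baseline values; the time step and baseline τ values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma_u, sigma_p = 0.04, 0.04        # amplitudes used in the text
dt = 0.1                             # model time step in ms (assumed)

def sde_u_increment(shape):
    """SDE-u: stochastic current contribution to u over one Euler-Maruyama step."""
    return sigma_u * np.sqrt(dt) * rng.standard_normal(shape)

def smp_tau(baseline_taus):
    """SMP-tau: per-member multiplicative jitter of the model time scales
    (assumed Gaussian about the baseline with relative standard deviation sigma_p)."""
    return {name: val * (1.0 + sigma_p * rng.standard_normal())
            for name, val in baseline_taus.items()}

baseline = {"tau_d": 0.25, "tau_r": 33.0, "tau_si": 30.0, "tau_o": 12.5}  # illustrative
print(sde_u_increment((3, 3)))
print(smp_tau(baseline))
```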

In Fig. 8(a), we show the surface errors RMSE^b(t) of the reconstruction for the Fenton–Karma model with the Barone et al. parameter set, with the current-based SDE and SMP-τ stochastic effects. For these relatively low-amplitude stochastic effects, we find very minor changes to the surface reconstruction error—an effective smoothing of the peaks of RMSE^b(t) corresponding to the excitation wavefront and waveback, which is expected from a uniformly stochastic effect. Analogously with the stimulus result, the transient in the RMSE^b(t) is slightly worsened by the inclusion of the stochastic effects; the excitation near t ≈ 125 ms exhibits a larger peak error.

FIG. 8.

Stochastic experiment results, depicting the (a) CRPS a ( t ) (line) and CRPS o ( t ) (band), (b) RMSE b ( t ) (line) plus and minus one standard deviation (band), and (c) SPRD b ( t , z ) (color) for the reconstruction using the Barone et al. parameter set.


In previous work, we showed that the stochastic additions were capable of permitting the ensemble to better approximate the local degrees of freedom of the true dynamics by effectively inflating the spread for a given ensemble size. In this experiment, we find no such effect. We find that the addition of the stochastic current accelerates the depth-averaged SPRD_u^b(t), leading to faster average spread growth over the initial phase, before switching to a conservative effect over the long-term reconstruction, suppressing short-lived variations in the spread. Indeed, this behavior indicates that the addition of stochastic processes to the model may enhance synchronization of the ensemble in some scenarios, lessening the ensemble spread or suppressing total decoherence of the ensemble.

Comparing the surface errors RMSE b ( t ) corresponding to the autonomous (Fig. 3), stimulus (Fig. 5), and stochastic (Fig. 8) model experiments, we find that the inclusion of the stimulus current or stochastic effects has a small effect on the overall reconstruction error. We may interpret this as, at best, a marginal improvement in the accuracy of the surface reconstruction with the inclusion of the stochastic or stimulus current into our dynamical model. Alternatively, we may interpret the autonomous result as achieving reconstruction errors lower than expected and with confidence comparable to these more informed model approaches. Likewise, due to the slow drift of the timing of the stimulus current in the observations and model, we may be confident in the robustness of the reconstruction with respect to model errors related to stimulus timing.

If modifications to the model have only subtle effects on the reconstruction of the state dynamics, then perhaps the tuning of experimental observations will have a more significant effect. In these experiments, we perturb the observations from the experimental data to better model the uncertainty associated with the physical processes involved in the recordings or to buttress the highly sparse observations with some intermediate information based on the apparent temporal structure of the data. Likewise, this investigation presents an opportunity to ensure that our reconstruction is robust for the type of data available to the cardiac researcher.

1. Addition of synthetic observations

In this experiment, we explore the generation of mid-depth synthetic observations, in addition to those on the surfaces, for use in the assimilation process. The role of these synthesized observations is to gently constrain the ensemble spread in the interior of the tissue, which we have previously identified as a predominant source of error in the reconstruction of the state in this system.22 The synthesized interior observations are interpolated between the observation values from their projections onto the epicardial and endocardial surfaces. The synthetic observations are appended to the real observations, the former with a fixed point-wise uncertainty estimate of η = 0.50 and the latter with η = 0.05. Different spatial distributions of mid-depth observations can have a significant effect on the robustness and accuracy of the reconstruction. A full arrangement of observations that samples the (x, y)-grid in the same positions as on the surfaces led to immediate ensemble collapse due to strong contraction of the analysis step (not shown). We use a relatively sparse arrangement, in which 64 observations were distributed uniformly in an 8 × 8 grid, which did not collapse the ensemble.

We add these synthesized observations because of the spatiotemporal pattern of the underlying excitations, where the wave on the surfaces passes nearly uniformly over (x, y) and in time. We thus treat the observations on the two surfaces as strongly coupled and amenable to interpolation through the depth, i.e., between the surface sets. We report the impact the 64 additional observations (a 0.7% increase in the total number of observations) have on the reconstruction fidelity.
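
The construction of the synthetic interior observations can be sketched as follows: for each point of an 8 × 8 grid inside the observation square, the epicardial and endocardial values at the projected (x, y) position are interpolated linearly in depth and tagged with the large uncertainty η̄ = 0.5; array and function names are illustrative.

```python
import numpy as np

def middepth_observations(u_epi, u_endo, xs, ys, depth_fracs=(0.5,),
                          n_side=8, eta_interior=0.5):
    """Synthesize interior observations by linear interpolation through the depth.

    u_epi, u_endo: surface observation fields sampled on the (xs, ys) grid.
    Returns a list of (x, y, z_frac, value, eta) tuples on an n_side x n_side grid.
    """
    obs = []
    # Uniform n_side x n_side grid of (x, y) positions inside the observation square.
    xq = np.linspace(xs[0], xs[-1], n_side + 2)[1:-1]
    yq = np.linspace(ys[0], ys[-1], n_side + 2)[1:-1]
    for x in xq:
        for y in yq:
            i, j = np.argmin(np.abs(xs - x)), np.argmin(np.abs(ys - y))
            for zf in depth_fracs:
                # Linear interpolation between the projected surface values.
                val = (1.0 - zf) * u_epi[i, j] + zf * u_endo[i, j]
                obs.append((x, y, zf, val, eta_interior))
    return obs

rng = np.random.default_rng(5)
xs = ys = np.linspace(0.0, 3.0, 64)
obs = middepth_observations(rng.random((64, 64)), rng.random((64, 64)), xs, ys)
print(len(obs), obs[0])
```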

The results of the assimilation with the synthetic mid-depth observations are depicted in Fig. 9. Immediately, we note that while CRPS^a(t) is not significantly affected, CRPS^o(t) now peaks significantly higher than in any other assimilation experiment, cf. Fig. 9(a). This finding indicates a significant deviation from the observations but does not reveal whether the discrepancy is due to the interior, synthesized observations or the "true" surface observations. The contribution is distinguished by RMSE^b(t), which likewise peaks significantly higher than in any other assimilation experiment, cf. Fig. 9(b), and is computed over the surface observations only for consistency with the other reconstruction experiments. Notably, the RMSE^b(t) may be decomposed into contributions from the surfaces and the synthetic observations in the interior; doing so would provide a direct estimate of the error of the reconstruction in the interior, provided our interpolation Ansatz is correct and reasonable. However, we have no rigorous way to assess the interpolation Ansatz and so cannot assert the correctness of the reconstruction in the interior using this method. We conclude that the inclusion of the interior synthetic observations interpolated from the surface observations leads to significantly worse reconstructions on the surfaces, despite the high uncertainty of the synthetic observations.

FIG. 9.

Synthetic observation experiment results, depicting (a) CRPS a ( t ) (line) and CRPS o ( t ) (band), (b) RMSE b ( t ) (line) plus and minus one standard deviation (band), and (c) SPRD b ( t , z ) (color) for the reconstruction using the Barone et al. parameter set.


The resolution of this puzzle emerges from a consideration of the ensemble spread. While additional mid-depth observations are extremely good at restricting the uncertainty of the reconstruction in the mid-depth of the domain, cf. Fig. 9(c), as information from the surface observations can affect the interior layers of the solution, this effect is not always desirable. By constraining the interior, we limit the spread of states used in the reconstruction of the surfaces as well. This now-restricted ensemble of states may not adequately cover the physical instabilities present in the observed surface dynamics, limiting the accuracy of the reconstruction for observations in which we are confident (those on the surface) for the sake of observations in which we are not (those in the interior). Tuning the uncertainty and distribution of the interior observations relative to the surface observations to critically constrain the ensemble thus presents an additional window for optimization, although it is beyond the scope of this work.

2. Observation uncertainty modeling for fronts

For this experiment, we seek to infuse the observational data with uncertainties based on our knowledge of the physical and numerical filtering of the experimental data and the dynamics of the model. To this end, we identify the slow dynamics of u near the rest state (u ≈ 0) and in the excited state (u ≈ 1), and note that while the model produces sharp wavefront features, the observation data are relatively smooth. In practice, this assigns an observation uncertainty that depends on the observation value, η_i = η(y_i^o), and specifically on values u̲ < u < ū, where u̲ = 0.1 is a threshold marking the boundary of quiescence and ū = 1.0 − u̲ marks the boundary of the fully excited state. The nonlinear uncertainty function takes the form of a scaled and truncated Gaussian,
$\eta(y_i^o) = \max\!\left\{ \underline{\eta},\; \bar{\eta}\, \exp\!\left[ -k \left( y_i^o - \tfrac{1}{2}(\underline{u} + \bar{u}) \right)^2 \right] \right\},$  (16)
where η̲ = u̲/2 = 0.05 and η̄ = 0.50 are the lower and upper bounds for the uncertainty, and k is chosen so that η(0.0) = η(1.0) = η̲ and η((ū + u̲)/2) = η̄. The simplest interpretation of the wavefront uncertainty maximum η̄ is that anywhere along the wavefront, the smoothing of the observation data makes it unclear whether an ensemble member should be in a fully excited or quiescent state at the same point in time, i.e., |ū^b − y_i^o| ∼ U(−η̄, +η̄). In principle, this approach encourages the growth of the ensemble spread along the wavefront, so that the ensemble mean is a good approximation to the observation data while the individual ensemble members retain their sharp wavefront features. Tuning the precise distribution of uncertainty, whether through a different (η̄, η̲) pairing or a differently motivated nonlinear function η(y^o) altogether, is beyond the scope of this work; in the present investigation, we seek to perturb the existing observation data to determine if the reconstruction accuracy may be improved.
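
Under the constraints just stated, one concrete realization of Eq. (16) is a Gaussian in the observation value that peaks at η̄ at the midpoint of the excitation range and is clipped below at η̲; the sketch below is consistent with the description but is not necessarily the exact expression used.

```python
import numpy as np

u_lo, u_hi = 0.1, 0.9          # quiescent and fully excited thresholds
eta_lo, eta_hi = 0.05, 0.50    # uncertainty bounds from the text

def eta_of_obs(y):
    """Observation-value-dependent uncertainty eta(y^o), bounded by [eta_lo, eta_hi].

    k is chosen so that eta(0) = eta(1) = eta_lo and eta at the midpoint is eta_hi.
    """
    mid = 0.5 * (u_lo + u_hi)                       # = 0.5
    k = np.log(eta_hi / eta_lo) / mid**2            # fixes the endpoint conditions
    return np.clip(eta_hi * np.exp(-k * (np.asarray(y) - mid)**2), eta_lo, eta_hi)

print(eta_of_obs([0.0, 0.25, 0.5, 0.75, 1.0]))
```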

In Fig. 10, we report the effects of the wavefront uncertainty experiment. We find that the additional uncertainty along the wavefronts and wavebacks in the observation data leads to a small, but consistent, increase in the RMSE^b(t) near the end of the wavefront and likewise increases the surface error during the end of the waveback, introducing a set of smaller peaks in RMSE^b(t) between the usual set associated with the end of the wavefront. As we are encouraging deviations in the ensemble members near the beginning and end of the excitation wave, this finding is not unexpected—exactly in the transition between excited and unexcited, the spread is increased slightly, and it makes the most significant relative contribution to the surface error where and when the error is already low. Additionally, as the observations are synchronizing forces for the ensemble subject to the inverse observation covariance weighting, R^{-1}, a higher uncertainty for the observations should lead to less synchronization overall and higher ensemble spread. We observe faster initial growth of the SPRD_u^b(t, z) compared to the autonomous experiment, but the long-term maximum is not significantly altered.

FIG. 10.

Uncertain wavefront experiment results, depicting the (a) CRPS a ( t ) (line) and CRPS o ( t ) (band), (b) RMSE b ( t ) (line) plus and minus one standard deviation (band), and (c) SPRD b ( t , z ) (color) for the reconstruction using the Barone et al. parameter set.


In Fig. 11, we show the RMSE^b(t) and SPRD_u^b(t) for the autonomous, wavefront uncertainty, and synthesized observations experiments. Both the wavefront uncertainty and synthesized observations experiments lead to larger surface reconstruction errors (RMSE^b) overall compared to the autonomous experiment, though their relative contribution over time is highly non-uniform. While the autonomous experiment reconstruction manages to match the APD and diastolic interval (DI) of the observation data, the synthesized observation experiment reconstruction undergoes large deviations from accurate reconstructions, frequently lasting longer than the APD of the observations. Likewise, while the wavefront uncertainty experiment explicitly trades confidence about the precise position of the wavefronts for the robustness of higher ensemble spreads, this robustness never materializes in practice; the autonomous experiment spread is comparable to that from the wavefront uncertainty experiment.

FIG. 11. (a) $\mathrm{RMSE}^b(t)$ and (b) $\mathrm{SPRD}_u^b(t)$ for the autonomous, wavefront uncertainty, and synthesized observations experiments, with the distribution of (right, top) surface errors and (right, bottom) ensemble spread over time.

In all results, the unobserved interior layers that are not driven by the LETKF analysis ensemble update form the dominant contribution to the reconstruction error, as expected from previous results using surface observations generated from model runs.22 However, previous experiments focused on dynamics that are autonomous and (on the large scale) structured by topological features, i.e., the dynamics of scroll waves are organized around their filaments; in the present work, we have focused on experimental results that are necessarily non-autonomous and driven by external stimuli, as these conditions are endemic to experimental investigations of cardiac tissue excitation. In the present scenario, the dynamics lack such topological constraints and are driven entirely by the observations, and thus by the LETKF update of the analysis ensemble, except in the free-run experiment, which corresponds to the absence of any observations.

Several considerations have an outsized impact on the accuracy and stability of the reconstruction for this dataset. The first is the model error due, primarily, to misparameterization. For example, when the refractory period is sufficiently long, subsequent excitations generated by the LETKF develop into a conduction block due to the refractory behavior of the model. The blocked wavefronts then prevent accurate reconstruction in the next assimilation step. Additionally, it is important that the model parameterization match the propagation speed of the wave; slow waves rely on the assimilation scheme to excite the tissue ahead of the wavefront to match the observations, while fast waves rely on the assimilation scheme to suppress the propagation of waves in the model to match the observed wavefront. That the asymmetry between the ignition and quenching problems has significance for the reliability of the state reconstruction is surprising, at first glance. However, this effect mirrors our understanding of critical transitions for excitable media: the ignition problem relies only on the local threshold response of the excitable medium, while the quenching problem is sensitive to the particular pattern of the excitation and is thus a more nonlinear process.

The parameterization of the model can also affect the dynamical range of the ensemble states, such that the analysis may drive the fields beyond the range of the observations, i.e., "overshoot." In our present problem, the action potentials of the Fenton–Karma model are only weakly bounded to the same domain as the observations, i.e., $0 \le u \le 1$, whereas the assimilation step utilizes a strict bound for the analysis states, $0 \le u^a \le 1.5$. Due to the short model integration times ($T_a = 2.0$ ms) and the overshoot of the analysis step, the state cannot relax sufficiently quickly to account for the overshoot, resulting in anomalously large field values, which likewise must be accounted for in the next assimilation step. This model-assimilation feedback cycle presents further opportunities for assimilation filter investigations.

One possible way to improve the robustness of the reconstruction process is by extending the reconstruction to the model parameters in addition to the state vector. This approach is especially useful for model (parameter) identification in the absence of a known model (parameter set). We have opted to reserve this avenue of investigation for future work, both because a published model parameter set for this dataset and model exists25 and because our stochastic model inflation techniques have previously been shown to be adept at accounting for model errors (specifically those arising from mis-parameterizations).22 A thorough study of the identification of model parameters for cardiac electrical excitation models using data-assimilation techniques may provide useful advances in clinical applications.
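For readers unfamiliar with this approach, the following minimal sketch illustrates the usual state-augmentation construction: each ensemble member carries its parameters alongside its state, the parameters evolve by persistence during the forecast, and the analysis updates the augmented vector jointly. The structure, the placeholder integrator step_model!, and the toy dynamics are illustrative assumptions, not our implementation.

```julia
# Sketch of joint state/parameter estimation by state augmentation. The
# placeholder dynamics (exponential relaxation with rate θ[1]) stands in for
# the cardiac model integrator and is illustrative only.
struct AugmentedMember{T}
    x::Vector{T}   # model state (e.g., u, v, w on the grid)
    θ::Vector{T}   # uncertain model parameters (e.g., Fenton–Karma time scales)
end

# placeholder model integrator (hypothetical), advancing the state by Δt
step_model!(x, θ, Δt) = (x .*= exp(-θ[1] * Δt); x)

function forecast!(m::AugmentedMember, Δt)
    step_model!(m.x, m.θ, Δt)   # parameters follow a persistence model: unchanged
    return m
end

# the analysis acts on the augmented vector, so covariances between surface
# observations and parameters drive the parameter correction
augmented(m::AugmentedMember) = vcat(m.x, m.θ)

m = AugmentedMember(rand(8), [0.5])
forecast!(m, 2.0)
println(length(augmented(m)))   # 9 entries: 8 state values + 1 parameter
```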

Observation processing is also a more subtle art than it first appears. Optical-mapping recordings effectively blur the position of the wavefront by recording fluorescence not only from the top-most layer of cells in the tissue but also from excited layers of cells below the surface. This blurring manifests, instead of as a sharp wavefront, as a leading edge that smoothly transitions from quiescent to excited across a width as large as 1 cm. Numerical post-processing to improve the signal-to-noise ratio of the data is typically achieved with a spatiotemporal convolution, which exacerbates this smoothing of the sharp features of the wavefront into a distribution of marginally excited cells. However, as the experimental data we use during constant pacing reach steady state, stacking was used,40 which increases the signal-to-noise ratio without further spatial smoothing. This distribution of marginally excited cells cannot be reconstructed by a threshold excitable model; we rely on the assimilation scheme to effectively interpret the steep wavefronts generated by the ensemble models for a given smooth observed wavefront. We have considered the blurring of the state observations $y^b$ by an observation operator that does not merely sample the field at particular positions but computes the local average of the field through a fixed-width box-filter convolution. We anticipated that the convolution length scale would compete with the length scale of the localization process, that the convolution filter would likewise compete with the filtering step of the LETKF, and that the informational content of the ensemble observations would be reduced overall, as the elements of the $l$-dimensional $y^b$ become substantially more correlated, harming the reconstruction by introducing an essentially uncontrollable error into the ensemble observations. We found that, in practice, the difference from the sampling-based observation operator used in this work is minimal; cf. Fig. S3 in the Supplement for further details and results.
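The two observation operators compared here can be sketched as follows; the half-width of the box filter is an assumed illustrative value rather than the setting used for Fig. S3.

```julia
# Point-sampling versus box-filter (local-average) observation operators on a
# 2D surface field; `w` is the assumed filter half-width in grid points.
sample_obs(u::AbstractMatrix, i, j) = u[i, j]

function boxfilter_obs(u::AbstractMatrix, i, j; w::Int = 2)
    ni, nj = size(u)
    is = max(1, i - w):min(ni, i + w)
    js = max(1, j - w):min(nj, j + w)
    return sum(@view u[is, js]) / (length(is) * length(js))
end

u = rand(100, 100)   # stand-in for a surface voltage snapshot
println(sample_obs(u, 50, 50), "  ", boxfilter_obs(u, 50, 50; w = 3))
```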

Likewise, the estimation of observation uncertainty for cardiac experimental recordings is an important topic that we have treated only superficially in this work. As the optical-mapping recordings and post-processing involve a spatial averaging that essentially smooths the steep wavefronts endemic to real excitable media, we may heuristically assign a nonlinear uncertainty to the observation data based on the approximate value of the observation, as in Sec. III B 2. Because there are effectively three "slow" states observed in the data for the transmembrane potential (quiescent, marginally excited, and fully excited), and we suppose the middle state is an apparent figment of the measurement scheme, we may estimate the probability that a cell observed in that state remains in the same state at the next observation time (equivalently, over one assimilation interval) and interpret this proportion as the uncertainty of the observed transmembrane potential. For the Fenton–Karma model, we may map these observed slow states based on the range of $u$ and the model parameters, with $u_c < u_i(t) < u_c^{si}$ representing the marginally excited state. The uncertainty of observations outside this range should be determined by the noise floor of the observations. As the true values of $u_c$ and $u_c^{si}$ are undefined but may be assumed to be close to those used in this work, a smooth extension to all observation values is needed; heuristics for nonlinear, state-dependent observation uncertainty lead to a lesser weighting (higher uncertainty) for observations of the wavefront and waveback. Precisely constructing these estimates requires building a statistical model of the observations, which is beyond the scope of this work. Our initial experimentation, based on heuristics that smoothly increase the uncertainty of observations in the marginally excited region, suggests that this subtle modification of the uncertainty can have significant impacts on the accuracy and robustness of state reconstructions.
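One way to realize such a heuristic is sketched below: the uncertainty sits at the noise floor outside the marginally excited band and rises smoothly inside it, with tanh-blended edges. The threshold values, the in-band uncertainty level, and the smoothing width are assumptions for illustration, not the values used in our experiments.

```julia
# Smooth three-band observation uncertainty: noise floor outside the
# marginally excited band u_c < u < u_c_si, elevated uncertainty inside it.
η_floor, η_band = 0.05, 0.35     # assumed noise floor and in-band uncertainty
u_c, u_c_si     = 0.13, 0.85     # assumed thresholds bounding the marginal band
δ               = 0.03           # assumed smoothing width of the band edges

band(u)  = 0.5 * (tanh((u - u_c) / δ) - tanh((u - u_c_si) / δ))  # ≈1 inside, ≈0 outside
η_obs(u) = η_floor + (η_band - η_floor) * band(u)

foreach(u -> println("u = $u  ->  η = ", round(η_obs(u); digits = 3)),
        (0.0, u_c, 0.5, u_c_si, 1.0))
```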

Finally, while we have focused on a particular dataset for the reconstruction task, we have computed reconstructions with related data (BCL $= 234 \pm 2$ ms) and different model parameter sets. For a slower BCL, the conclusion is the same: the propagation of excitation information from the observations to the ensemble state vectors is slow and requires consistent work from the LETKF to correct, which manifests as peaked errors in the surface reconstructions coinciding with each new excitation. For alternative model parameters, the periodicity is still matched, but the difference in wave shape results in significant increases in reconstruction error. These experiments suggest that our approach is robust to peculiarities of the dataset and the model specification. Further details are available in the Supplement; see Figs. S4–S8.

The specification of the LETKF includes several numerical parameters that we found to influence the efficacy of the assimilation program. The first is a floating-point parameter named "gross error," designated by $\sigma_g$, which controls whether an observation is sufficiently deviant from the background mean $\bar{x}^b$ to be truncated, with the goal of maintaining a "light touch" for the assimilation and enhancing the stability of the scheme in the presence of large deviations from the observations. For an observation-uncertainty pair $(y_i^o, \eta_i)$ and the corresponding value of the background mean field $\bar{u}_i^b(t)$, if $|y_i^o(t) - \bar{u}_i^b(t)| > \sigma_g \eta_i$, then the corresponding element of $(y^o - \bar{H}(x^b))$ is overwritten by 0, i.e., the update of the analysis ensemble is invariant with respect to this observation. As the range $0 \le u^b \le 1$ and the minimum uncertainty of the surface observations $\underline{\eta} = 0.05$ are fixed, when $\sigma_g \ge 20$ every observation is used in every assimilation step. However, we find that setting the parameter below this saturation value (e.g., $\sigma_g = 10$) can significantly affect the results, producing maximal surface reconstruction errors nearly 25% lower than those in the saturated case ($\sigma_g = 20$). For this reason, we set $\sigma_g = 10$ for all experiments in this work. In program logs, it is clear that fewer than 5% of observations are ignored by assimilation step 50 (100 ms) for the autonomous experiment, which suggests that we are filtering only truly extreme values, and only in the initial iterations of the assimilation. In principle, controlling this parameter is akin to inflation: it is a measure of our relative confidence in the observations and the background ensemble members. It is not clear a priori what value this parameter should take, and because it affects the accuracy of the reconstruction, characterizing its effects should be a priority in future investigations.
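The screening rule described above can be sketched directly in code; the toy innovations and uniform uncertainty below are illustrative values only.

```julia
# Gross-error screening: innovation elements whose background misfit exceeds
# σ_g times the observation uncertainty are zeroed, so those observations do
# not influence the analysis update.
function screen_innovation!(d::AbstractVector, η::AbstractVector; σg = 10.0)
    nskipped = 0
    @inbounds for i in eachindex(d, η)
        if abs(d[i]) > σg * η[i]   # d[i] = yᵒ_i - H̄(x^b)_i
            d[i] = 0.0
            nskipped += 1
        end
    end
    return nskipped
end

d = [0.02, -0.7, 0.4]        # toy innovations
η = fill(0.05, 3)            # uniform observation uncertainty η_ = 0.05
println(screen_innovation!(d, η; σg = 10.0), "  ", d)   # only the second entry is zeroed
```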

Further, we have used $\underline{\eta} = 0.05$ throughout this work, estimated from the full width at half maximum of the distribution of the observation values. While we found some sensitivity to the precise value of the uncertainty (e.g., experiments with $\underline{\eta} = 0.025$ and $\underline{\eta} = 0.10$ produced reconstructions with higher $\mathrm{RMSE}^b$ overall), the results are qualitatively similar to those presented. A rigorous estimation of the observation uncertainty from the statistical properties of the dataset is beyond the scope of the present work but may reveal that the reconstruction accuracy can be improved through better estimation of the observation covariance matrix.

In this work, we have used "strongly coupled" DA (in analogy with the same term used in domain-decomposed models in weather forecasting) for the analysis update to the state variables in all experiments. We permit observations of the state $u$ to affect the analysis update of the $v$ and $w$ fields. In contrast, "weakly coupled" DA communicates information between the state variables only through the model evaluation. Strongly coupled DA is expected to extract more information from the same observations than weakly coupled DA.41 Strongly coupled DA permits more significant corrections to the state for our relatively sparse observation pattern, at the risk of over-driving the corrections to $v$ and $w$ in ways that are atypical for cardiac excitations. Restricting the analysis update due to observations of $u$ to the $u$ variable alone reduces the efficacy of the assimilation by preserving more ensemble state information in the analysis, while permitting larger ensemble spreads due to the uncontrolled dynamics of $v$ and $w$. In numerical experiments, we found that in some scenarios we could drive the gating variables to their rest-state values during the assimilation, when in model simulations of comparable dynamics the gating variables never attain their rest values. Open questions include whether the relatively weak coupling of the cardiac model is sufficient to ameliorate some of the robustness challenges that strongly coupled DA is expected to expose, or whether a weakly coupled DA approach unacceptably degrades the accuracy of the reconstructions. A systematic analysis of strongly and weakly coupled DA in the cardiac context would clarify the effect this choice has on robustness and accuracy and should be a focus of future work.
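The distinction can be sketched as a masking of the analysis increment; the field ordering and toy sizes below are illustrative assumptions, not our implementation.

```julia
# Strongly versus weakly coupled application of an analysis increment for a
# state vector ordered as [u; v; w]: strongly coupled applies the full
# increment, weakly coupled confines the correction to the observed u-block.
function apply_increment!(x::Vector, Δx::Vector, n_u::Int; strongly_coupled::Bool)
    if strongly_coupled
        x .+= Δx                         # u, v, and w all corrected by the analysis
    else
        @views x[1:n_u] .+= Δx[1:n_u]    # v and w evolve only through the model
    end
    return x
end

n  = 4                                                  # toy grid size per field
x  = vcat(fill(0.2, n), fill(0.9, n), fill(0.9, n))     # [u; v; w]
Δx = fill(0.1, 3n)
println(apply_increment!(copy(x), Δx, n; strongly_coupled = false))
```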

Relatedly, when we found that the LETKF was over-driving the gating variables and leading to physically unlikely values, we bounded the gating variables between their explicitly known limit values, $0 \le v^a, w^a \le 1$. This bounding prevents the assimilation from creating unreachable analysis states that have little to do with the true dynamics, a hazard for what is effectively an initial condition for the model. Additionally, we observed a similar effect in $u^a$, where some ensemble members would be driven to exceedingly large-valued and hyper-localized corrections of the state, leading to anomalously large ensemble spreads (i.e., $\mathrm{SPRD}_u^a \approx 4.0$), which should not be possible for typical values of $u$. To suppress this growth, we bounded $-0.1 \le u^a \le 1.5$, which has the positive effect of bounding the ensemble spread. Further, while the limits on $u^a$ are sufficiently strong to bind the ensemble spread, they are not strong enough to prevent overshoot such that $u^a > 1 \ge u^o$. However, it is presently unclear whether this is an innocuous addendum to the assimilation procedure to account for numerical instabilities in the model-LETKF interaction or symptomatic of a limitation of our approach. Nor are the constraints on the initial conditions of the model sufficiently well defined mathematically to say that these limits need never be adjusted for this or another model.
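A sketch of this bounding step is below; it assumes the limits quoted above and applies them element-wise to the analysis fields.

```julia
# Bound the analysis fields element-wise: gating variables clipped to [0, 1],
# the voltage-like variable to the admissible window quoted in the text.
function bound_analysis!(u::AbstractArray, v::AbstractArray, w::AbstractArray)
    clamp!(u, -0.1, 1.5)
    clamp!(v, 0.0, 1.0)
    clamp!(w, 0.0, 1.0)
    return nothing
end

u, v, w = 2 .* randn(8), randn(8), randn(8)   # toy analysis fields
bound_analysis!(u, v, w)
println(extrema(u), " ", extrema(v), " ", extrema(w))
```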

The observation localization is controlled by two parameters for space and one for time; the latter is set so that only the current time affects the assimilation, as this is compatible with the clinical application and is computationally efficient. The spatial localization scales are isotropic, and we have not investigated the choice of different localization scales or anisotropies. In principle, given the density and low noise of the surface observations, it may be computationally advantageous for their influence to extend through the depth of the tissue while retracting their overlap on the surface. However, this perspective is difficult to justify physically; indeed, it assumes a surface-dominant dynamics of the interior that is, at an abstract level, the phenomenological question we are considering in this work. Physically, the influence of each observation may extend over the spatial region that can be excited by the propagation of excitation information in the time between observations (that is, $\sigma_{o,i} \approx c_i T_a$ with $c_i \propto \sqrt{D_i}$ for a quiescent state), making it wholly anisotropic and, in fact, shortening the extent of influence through the thickness of the tissue compared to the effect along the surfaces, as $D_z = D_\perp < D_\parallel$ by design. The corresponding fiber-informed localization scales are then related by the inverse conductivity anisotropy ratio, $\sigma_{o,\perp} = (D_\perp / D_\parallel)\,\sigma_{o,\parallel}$, and vary with spatial position. Likewise, this scaling informs the observation distribution: while we are limited to surface recordings of electrofluorescence, a consideration of localization based on the properties unique to cardiac excitation may benefit the cardiac researcher.
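A back-of-envelope version of this fiber-informed localization is sketched below; the conduction velocity and diffusivities are placeholder values, not those of the dataset or model parameterization used here.

```julia
# Fiber-informed localization radii: the along-fiber radius is set by the
# distance a wavefront can travel in one assimilation interval, and the
# transverse/depth radius is shrunk by the conductivity anisotropy ratio.
c_par  = 0.05     # assumed along-fiber conduction velocity [cm/ms]
T_a    = 2.0      # assimilation interval [ms], as in the text
D_par  = 1.0e-3   # assumed along-fiber diffusivity  [cm^2/ms]
D_perp = 2.0e-4   # assumed cross-fiber diffusivity  [cm^2/ms]

σ_par  = c_par * T_a                 # reach along the fibers per analysis step
σ_perp = (D_perp / D_par) * σ_par    # shortened reach through the wall
println("σ_par = $(σ_par) cm, σ_perp = $(σ_perp) cm")
```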

Finally, we have focused on the surface error of the state reconstructions and appealed to CRPS for a measurement of ensemble consistency in the absence of interior observations or a “truth” state. The choice of this measure is motivated by its convergence properties with respect to ensemble size and its simple interpretation rather than its particular relevance to cardiac state reconstruction. Alternative error measures have been used in other works, e.g., threshold-based error,21 which makes reference to an unknown truth state, a prescribed threshold value, and knowledge of the dynamical properties of the model. Constructing a fair (in the sense of CRPS) measure of unknown state feature error requires the construction of a statistical model of the dynamics of the state itself, against which we might compare the expectations of the reconstructed state. This effort would also aid in physically motivated estimation of observation uncertainties, as mentioned above.
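For concreteness, a commonly used fair ensemble estimator of the CRPS can be computed directly from the ensemble values at an observed point, as in the sketch below; whether it coincides exactly with the estimator used in our scoring is not asserted here.

```julia
# Fair (ensemble-size-unbiased) CRPS for a scalar observation y and ensemble
# members x₁…x_M:
#   CRPS = (1/M) Σᵢ |xᵢ - y|  -  1/(2M(M-1)) Σᵢⱼ |xᵢ - xⱼ|
function crps_fair(x::AbstractVector, y::Real)
    M = length(x)
    term1 = sum(abs.(x .- y)) / M
    term2 = sum(abs(xi - xj) for xi in x, xj in x) / (2 * M * (M - 1))
    return term1 - term2
end

println(crps_fair(randn(32) .+ 0.2, 0.0))   # score for a toy ensemble against y = 0
```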

The creation of a statistical model of the dynamics for the state reconstruction problem is precisely the approach taken by researchers in the machine-learning literature, which has shown promising results. Ref. 42 investigated long short-term memory recurrent neural network, autoencoder, and diffusion artificial neural network (ANN) models for the 3D reconstruction problem in a simple excitable model, while varying the history available to the model and the layer depth of the reconstruction. The central results of this approach are encouraging, especially for shallow depths (close to the observations) and long histories. Notably, considering the parsimony of their observations ($l/(NT) \approx 8.3 \times 10^{-3}$ for $1 \le T \le 32$) compared to this work ($l/N \approx 1.5 \times 10^{-3}$), the authors found reconstruction limitations similar to those of our approach for a remarkably different dynamical model and geometry. Ref. 43 investigated the reconstruction problem for the Aliev–Panfilov model using a U-net architecture ANN with considerations of the informational content of the observations similar to our own. In particular, the authors found that simultaneous observations of both the top and bottom layers of the excitable medium produced more reliable reconstructions than single-surface observations, but that "projection" observations (those integrating over the depth of the tissue) worked even better. Their results using the U-net ANN architecture in the "laminar" dynamics regime (an isolated scroll wave with size comparable to the depth of the tissue) are comparable to previous in silico results for a similar dynamical pattern produced by a different cardiac excitation model in conjunction with the LETKF and stochastic inflation.22 The authors also found that the encoding of depth information into the observations was essential for the success of their methods, which limits the applicability of the approach to ventricular fibrillation.43

The combination of statistical-model (machine-learning) and dynamical-model (data assimilation) approaches to reconstruction is an active avenue of research. One of the central pragmatic limitations of ensemble data assimilation approaches is that storage requirements grow as $O(MN)$ while computation (for the LETKF32) grows as $O(M^2 N)$; replacing some of the ensemble members with a pre-trained machine-learning model could reduce both costs. Likewise, machine-learning models require training and do not offer an estimate of their prediction uncertainty; coordination within a data assimilation framework could alleviate the latter issue with existing infrastructure, while hot-swapping dynamical models with machine-learning models could provide relief for the former. In particular, we have found that it may take up to 250 ms for the ensemble to settle into a good approximation of the true uncertainty of this system; machine learning may provide an accelerated convergence path through better estimates of the initial state covariance or of the ensemble state initial conditions based on extensive training data.
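To make the scaling concrete, the following sketch evaluates the quoted costs for assumed problem sizes; the grid and ensemble sizes are illustrative and are not those of our experiments.

```julia
# Back-of-envelope O(MN) storage and O(M^2 N) LETKF work for assumed sizes.
N = 3 * 192 * 192 * 24     # assumed number of state variables (three fields on a grid)
M = 32                     # assumed ensemble size

storage_gb = 8 * M * N / 1e9       # Float64 ensemble storage, in GB
work(M) = M^2 * N                  # ∝ analysis floating-point work
println("storage ≈ $(round(storage_gb; digits = 2)) GB; ",
        "doubling M multiplies the analysis work by ", work(2M) / work(M))
```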

In this work, we have performed several assimilation experiments using physical data with variations to the model and observations. We have found that the assimilation of experimental observations for cardiac reconstruction brings unique challenges compared to the assimilation of model-produced observations. The leading contributors to difficulty for state reconstruction are the type of dynamics under study, i.e., non-autonomous stimulated dynamics, model error due to mis-parameterization, the presence of synchronizing inputs (i.e., the stimulus current), and the estimation of uncertainties for experimental observations.

When the model generates fundamentally different types of dynamics compared to those described by the experimental data, we rely on significant corrections from the LETKF. These large corrections, in turn, increase the likelihood of generating physically unrealizable states. In the cardiac context, unphysical behavior may manifest as an excitation that spreads significantly faster than the model permits for our parameter setting; thus, the reproduction of the wave speed by the model is an important feature for the reproduction of the state dynamics.

An important issue, only briefly touched upon in this paper, is the tuning of model parameters to account for data features. Particularly relevant for this dataset is the ability to reproduce alternans—both in the duration (APD) and amplitude (APA) of excitations. While APD alternans are readily reproducible with the Fenton–Karma model, APA alternans are not a common feature in this model. Modifications to the dynamics or model parameters to reproduce APA alternans in the Fenton–Karma model may be designed but are outside the scope of the present work.

Model error forms a dominant contribution to the reconstruction error on the surfaces and has a significant effect on the interior spread of the ensemble and the confidence in the reconstruction of the interior. The $\mathrm{CRPS}^a$ may give an overly optimistic estimate of the confidence in the reconstruction accuracy in the interior due to synchronization from the periodicity of the observations over time. Further methods of assessing reconstruction performance without recourse to observations are necessary.

Our data assimilation scheme, while effective for reconstructing the state from model-derived observations with the same observation distribution pattern used in the present study, struggles with the experimental data used in this work. We suspect that this difficulty reflects a fundamental difference between our previous three-dimensional reconstruction efforts and the current dataset: the non-autonomous nature of the observed dynamics. Data assimilation experiments with experimental data showcasing autonomous dynamics (i.e., a spiral or scroll wave) would present an interesting and informative extension of this study.

In conclusion, we have found that we can reliably reconstruct the surface dynamics of a driven excitable state from experimental observations on the surfaces, but that reliable reconstruction of the interior requires further efforts to understand our dataset and how it interacts with the dynamics of excitable models. We have found that while the inclusion of explicit modeling of the stimulation current or of stochastic effects may yield subtle improvements to the reconstruction, the coupling of the LETKF, the observations, and the model is already sufficiently close that these additions do not correct the leading errors in the reconstruction of this dataset. On the surfaces, these errors are dominated by poor model fit to the excitation wavefronts and wavebacks; in the interior, by the lack of propagation of information from the surfaces, such that the interior becomes a fount of uncertain excitation. Likewise, we have determined that incorporating additional modeling insight into the features of the observations, whether by identifying a coherent wave pattern and using it to constrain the uncertainty of the interior dynamics or by modeling the uncertainty associated with the observations, is subtle and produces new challenges which, at this initial stage, do not appear to improve on the low-information approach.

We have included two video files in the supplementary material. The first, auto_ensmeans.mp4, displays the dynamics of $\bar{u}^b(t,x,y,z)$ and $\bar{u}^a(t,x,y,z)$ for $z/d = 0, 0.2, 0.4, 0.6, 0.8, 1.0$ and emphasizes the difference due to the assimilation, $\bar{u}^a - \bar{u}^b$, and the observations $u^o$ for the autonomous model experiment. The second, auto_b.mp4, displays the dynamics of $\bar{u}^b(t,x,y,z)$, $\bar{v}^b(t,x,y,z)$, and $\bar{w}^b(t,x,y,z)$ over time for $z/d = 0, 0.2, 0.4, 0.6, 0.8, 1.0$ for the autonomous model experiment. We have also included a short supplement that includes results for a number of experiments referenced throughout the text.

We thank Alessio Gizzi for providing access to the data used for this study. M.J.H. and E.M.C. were supported by the NSF under Grant Nos. CMMI-2011280 and CMMI-1762803. F.H.F. was supported by the NSF under Grant No. CMMI-1762553. C.D.M., F.H.F., and E.M.C. were supported by the NIH under Grant No. 1R01HL143450-01. This research was supported, in part, through research infrastructure resources and services provided by the Partnership for an Advanced Computing Environment (PACE) at the Georgia Institute of Technology, Atlanta, Georgia, USA.44 This work has likewise made use of the Hamilton HPC Service of Durham University. All figures in this report were generated using the Makie.jl45 software package.

The authors have no conflicts to disclose.

C. D. Marcotte: Conceptualization (lead); Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (equal); Project administration (supporting); Resources (equal); Software (equal); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). M. J. Hoffman: Conceptualization (equal); Funding acquisition (equal); Methodology (equal); Project administration (supporting); Software (equal); Supervision (supporting); Validation (supporting); Writing – review & editing (supporting). F. H. Fenton: Conceptualization (supporting); Funding acquisition (lead); Investigation (supporting); Methodology (supporting); Software (supporting); Supervision (supporting); Writing – review & editing (supporting). E. M. Cherry: Conceptualization (lead); Funding acquisition (lead); Methodology (equal); Project administration (lead); Resources (equal); Software (equal); Supervision (lead); Validation (supporting); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. G. Evensen, "Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics," J. Geophys. Res.: Oceans 99, 10143–10162 (1994), https://doi.org/10.1029/94JC00572.
2. L. Nerger, T. Janjić, J. Schröter, and W. Hiller, "A unification of ensemble square root Kalman filters," Mon. Weather Rev. 140, 2335–2345 (2012).
3. L. Nerger, T. Janjić, J. Schröter, and W. Hiller, "A regulated localization scheme for ensemble-based Kalman filters," Q. J. R. Meteorol. Soc. 138, 802–812 (2012).
4. J. Tödter, P. Kirchgessner, L. Nerger, and B. Ahrens, "Assessment of a nonlinear ensemble transform filter for high-dimensional data assimilation," Mon. Weather Rev. 144, 409–427 (2016).
5. P. Kirchgessner, J. Tödter, B. Ahrens, and L. Nerger, "The smoother extension of the nonlinear ensemble transform filter," Tellus A 69, 1–15 (2017).
6. S. Vetra-Carvalho, P. J. van Leeuwen, L. Nerger, A. Barth, M. U. Altaf, P. Brasseur, P. Kirchgessner, and J.-M. Beckers, "State-of-the-art stochastic data assimilation methods for high-dimensional non-Gaussian problems," Tellus A 70, 1–43 (2018).
7. L. Nerger, "Data assimilation for nonlinear systems with a hybrid nonlinear Kalman ensemble transform filter," Q. J. R. Meteorol. Soc. 148, 620–640 (2022).
8. E. Cosme, J.-M. Brankart, J. Verron, P. Brasseur, and M. Krysta, "Implementation of a reduced rank square-root smoother for high resolution ocean data assimilation," Ocean Model. 33, 87–100 (2010).
9. U. K. Dash, S.-Y. Park, C. H. Song, J. Yu, K. Yumimoto, and I. Uno, "Performance comparisons of the three data assimilation methods for improved predictability of PM2.5: Ensemble Kalman filter, ensemble square root filter, and three-dimensional variational methods," Environ. Pollut. 322, 121099 (2023).
10. V. Rao, A. Sandu, M. Ng, and E. D. Nino-Ruiz, "Robust data assimilation using L1 and Huber norms," SIAM J. Sci. Comput. 39, B548–B570 (2017).
11. H. Hersbach, "Decomposition of the continuous ranked probability score for ensemble prediction systems," Weather Forecast. 15, 559–570 (2000).
12. J. Bröcker, "Evaluating raw ensembles with the continuous ranked probability score," Q. J. R. Meteorol. Soc. 138, 1611–1617 (2012).
13. J. Thorey, V. Mallet, and P. Baudin, "Online learning with the continuous ranked probability score for ensemble forecasting," Q. J. R. Meteorol. Soc. 143, 521–529 (2017).
14. M. Leutbecher and T. Haiden, "Understanding changes of the continuous ranked probability score using a homogeneous Gaussian approximation," Q. J. R. Meteorol. Soc. 147, 425–442 (2021).
15. D. J. Lea, I. Mirouze, M. J. Martin, R. R. King, A. Hines, D. Walters, and M. Thurlow, "Assessing a new coupled data assimilation system based on the Met Office coupled Atmosphere–Land–Ocean–Sea ice model," Mon. Weather Rev. 143, 4678–4694 (2015).
16. M. Goodliff, T. Bruening, F. Schwichtenberg, X. Li, A. Lindenthal, I. Lorkowski, and L. Nerger, "Temperature assimilation into a coastal ocean-biogeochemical model: Assessment of weakly and strongly coupled data assimilation," Ocean Dyn. 69, 1217–1237 (2019).
17. S. G. Penny, E. Bach, K. Bhargava, C.-C. Chang, C. Da, L. Sun, and T. Yoshida, "Strongly coupled data assimilation in multiscale media: Experiments using a quasi-geostrophic coupled model," J. Adv. Model. Earth Syst. 11, 1803–1829 (2019).
18. Q. Tang, L. Mu, H. F. Goessling, T. Semmler, and L. Nerger, "Strongly coupled data assimilation of ocean observations into an ocean-atmosphere model," Geophys. Res. Lett. 48, e2021GL094941 (2021).
19. N. S. LaVigne, N. Holt, M. J. Hoffman, and E. M. Cherry, "Effects of model error on cardiac electrical wave state reconstruction using data assimilation," Chaos 27, 093911 (2017).
20. M. J. Hoffman, N. S. LaVigne, S. T. Scorse, F. H. Fenton, and E. M. Cherry, "Reconstructing three-dimensional reentrant cardiac electrical wave dynamics using data assimilation," Chaos 26, 013107 (2016).
21. M. J. Hoffman and E. M. Cherry, "Sensitivity of a data-assimilation system for reconstructing three-dimensional cardiac electrical dynamics," Phil. Trans. R. Soc. A 378, 20190388 (2020).
22. C. D. Marcotte, F. H. Fenton, M. J. Hoffman, and E. M. Cherry, "Robust data assimilation with noise: Applications to cardiac dynamics," Chaos 31, 013118 (2021).
23. A. Gizzi, E. Cherry, R. F. Gilmour, Jr., S. Luther, S. Filippi, and F. H. Fenton, "Effects of pacing site and stimulation history on alternans dynamics and the development of complex spatiotemporal patterns in cardiac tissue," Front. Physiol. 4, 71 (2013).
24. A. Loppini, A. Gizzi, C. Cherubini, E. M. Cherry, F. H. Fenton, and S. Filippi, "Spatiotemporal correlation uncovers characteristic lengths in cardiac tissue," Phys. Rev. E 100, 020201 (2019).
25. A. Barone, M. G. Carlino, A. Gizzi, S. Perotto, and A. Veneziani, "Efficient estimation of cardiac conductivities: A proper generalized decomposition approach," J. Comput. Phys. 423, 109810 (2020).
26. A. Barone, A. Gizzi, F. Fenton, S. Filippi, and A. Veneziani, "Experimental validation of a variational data assimilation procedure for estimating space-dependent cardiac conductivities," Comput. Methods Appl. Mech. Eng. 358, 112615 (2020).
27. F. Fenton and A. Karma, "Vortex dynamics in three-dimensional continuous myocardium with fiber rotation: Filament instability and fibrillation," Chaos 8, 20–47 (1998).
28. M. S. Spach, W. T. Miller, D. B. Geselowitz, R. C. Barr, J. M. Kootsey, and E. A. Johnson, "The discontinuous nature of propagation in normal canine cardiac muscle. Evidence for recurrent discontinuities of intracellular resistance that affect the membrane currents," Circ. Res. 48, 39–54 (1981).
29. I. Kotadia, J. Whitaker, C. Roney, S. Niederer, M. O'Neill, M. Bishop, and M. Wright, "Anisotropic cardiac conduction," Arrhythmia Electrophysiol. Rev. 9, 202–210 (2020).
30. M. Perego and A. Veneziani, "An efficient generalization of the Rush-Larsen method for solving electro-physiology membrane equations," Electron. Trans. Numer. Anal. 35, 234–256 (2009).
31. E. M. Cherry and F. H. Fenton, "Visualization of spiral and scroll waves in simulated and experimental cardiac tissue," New J. Phys. 10, 125016 (2008).
32. B. R. Hunt, E. J. Kostelich, and I. Szunyogh, "Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter," Phys. D: Nonlinear Phenom. 230, 112–126 (2007).
33. R. E. Kalman, "A new approach to linear filtering and prediction problems," J. Basic Eng. 82, 35–45 (1960).
34. T. Gneiting and A. E. Raftery, "Strictly proper scoring rules, prediction, and estimation," J. Am. Stat. Assoc. 102, 359–378 (2007).
35. M. Leutbecher, "Ensemble size: How suboptimal is less than infinity?," Q. J. R. Meteorol. Soc. 145, 107–128 (2019).
36. I. Idris and V. Biktashev, "Critical fronts in initiation of excitation waves," Phys. Rev. E 76, 021906 (2007).
37. R. D. Simitev and V. N. Biktashev, "Conditions for propagation and block of excitation in an asymptotic model of atrial tissue," Biophys. J. 90, 2258–2269 (2006).
38. C. D. Marcotte and V. N. Biktashev, "Predicting critical ignition in slow-fast excitable models," Phys. Rev. E 101, 042201 (2020).
39. In our experiments, these values not only yield faster wave propagation but also strongly drive the ensemble states to quiescence, dramatically worsening the overall reconstruction with (trivial) ensemble collapse. Even when including the stimulating current into the model dynamics, the reconstruction error is significantly worse (not shown). Hence, we use more physiologically reasonable values for the tissue conductivity in this work.
40. I. Uzelac and F. H. Fenton, "Robust framework for quantitative analysis of optical mapping signals without filtering," in 2015 Computing in Cardiology Conference (CinC) (IEEE, 2015), pp. 461–464.
41. T. C. Sluka, S. G. Penny, E. Kalnay, and T. Miyoshi, "Assimilating atmospheric observations into the ocean using strongly coupled ensemble data assimilation," Geophys. Res. Lett. 43, 752–759 (2016), https://doi.org/10.1002/2015GL067238.
42. R. Stenger, S. Herzog, I. Kottlarz, B. Rüchardt, S. Luther, F. Wörgötter, and U. Parlitz, "Reconstructing in-depth activity for chaotic 3D spatiotemporal excitable media models based on surface data," Chaos 33, 013134 (2023).
43. J. Lebert, M. Mittal, and J. Christoph, "Reconstruction of three-dimensional scroll waves in excitable media from two-dimensional observations using deep neural networks," Phys. Rev. E 107, 014221 (2023).
44. PACE, Partnership for an Advanced Computing Environment (PACE) (2017), available at https://www.pace.gatech.edu/sites/default/files/PACE.bib.
45. S. Danisch and J. Krumbiegel, "Makie.jl: Flexible high-performance data visualization for Julia," J. Open Source Softw. 6, 3349 (2021).