The reconstruction of electrical excitation patterns through the unobserved depth of the tissue is essential to realizing the potential of computational models in cardiac medicine. We have utilized experimental optical-mapping recordings of cardiac electrical excitation on the epicardial and endocardial surfaces of a canine ventricle as observations directing a local ensemble transform Kalman filter data assimilation scheme. We demonstrate that the inclusion of explicit information about the stimulation protocol can marginally improve the confidence of the ensemble reconstruction and the reliability of the assimilation over time. Likewise, we consider the efficacy of stochastic modeling additions to the assimilation scheme in the context of experimentally derived observation sets. Approximation error is addressed at both the observation and modeling stages through the uncertainty of observations and the specification of the model used in the assimilation ensemble. We find that perturbative modifications to the observations have marginal to deleterious effects on the accuracy and robustness of the state reconstruction. Furthermore, we find that incorporating additional information from the observations into the model itself (in the case of stimulus and stochastic currents) yields only a marginal improvement in reconstruction accuracy over a fully autonomous model, while complicating the model itself and thus introducing potential for new types of model errors. That the inclusion of explicit modeling information has negligible to negative effects on the reconstruction implies the need for new avenues for optimization of data assimilation schemes applied to cardiac electrical excitation.
Typical cardiac electrophysiology experiments record excitation data with optical cameras that are tuned to measure fluorescence of voltage-sensitive dyes and are only able to observe excitation patterns on the surfaces of tissues. We know from theoretical and computational evidence that the details of excitation patterns through the depth of tissues are important for prediction and control, while those observable on the surface are a very small portion of the overall excitation structure. These patterns may be pro-arrhythmic due to structures that are unobservable near the surfaces of the tissue, which can pin organizing features of the dynamics and thus simplify the excitation patterns. As the interiors of the tissue are inaccessible using traditional experimental means, we turn to reconstructing the interior using computational methods—specifically, data assimilation.
I. INTRODUCTION
Experiments in cardiac excitation dynamics provide only a limited window into a very complex system. Micro-electrode recordings trade spatial resolution for temporal resolution, electrocardiograms eschew cellular detail for organ-level holism, and optical-mapping experiments focus on the electrical excitations near the exposed surfaces of a tissue whose internal dynamics may be much more complex. Data assimilation (DA) promises to address the difficulties of the latter; with a reliable dynamical model and sufficiently precise observations of the experimental state, the techniques of data assimilation aim to reconstruct the unobserved interior dynamics of the tissue, subject to the observations at the surface(s). Importantly, the techniques of data assimilation are designed to be robust with respect to uncertainty, as uncertainty in the truth state, in the observation values, and in the model predictions is ever-present when working with real experimental data.
Data assimilation has co-developed with the weather and climate forecasting communities for whom the tools are often specialized and well-tuned. Ensemble methods, in particular, are relative newcomers to the application1 but are favored due to their simpler relationship to nonlinear dynamical models. Numerous studies have developed new data assimilation methods and compared them to existing approaches;2–7 developed innovations for improving the performance8,9 and robustness of results from data assimilation;10 defined fair scoring rules for the rigorous definition of “improving” data assimilation results;11–14 and even considered such subtleties as the connection between the observations, assimilation filter, and the model.15–18
Significant efforts in the assimilation of cardiac excitation patterns in three-dimensional tissue have focused on the foundational aspects of the problem—data distributions, uncertainty quantification, model error, and robustness with respect to noise19–21—in the context of synthetic observations generated by a model, rather than from an experiment. In previous work,22 we have found a sensitivity to some parameter uncertainties in the accuracy of local ensemble transform Kalman filter (LETKF) reconstructions, as well as improved robustness and accuracy with stochastic formulations of the underlying model. This work considers the efficacy of our reconstruction schemes in the case of experimental data typical of the field: periodic pacing of a tissue, with excitations captured by optical mapping on the epicardial and endocardial surfaces of the tissue.
Throughout this paper, we will be using a dataset previously published in experimental studies.23–26 The dataset corresponds to activation patterns subject to a localized, periodic current stimulation in canine ventricular tissues. We will investigate the use of both autonomous and non-autonomous models for the reconstruction of these experimental observations, and the effect of stochastic model interventions on the accuracy of the reconstructions. Finally, we investigate the effect of additional synthetic observations generated in the interior of the tissue on the robustness of the assimilation and the accuracy of the reconstruction of experimental dynamics.
II. METHODS
In this section, we outline the mathematical details of the dynamical model, the determination of reasonable model parameters, the generation of observations from the experimental data, and, finally, the local ensemble transform Kalman filter (LETKF) used in the assimilation of the observations for the reconstruction of the experimental dynamics.
A. Model
We use the model parameters defined in Ref. 26 for this model and experimental dataset. These parameters are quite different from other parameter sets used with this model for physically relevant dynamics; cf. Ref. 27. However, for the present investigation, we have opted to consider the smallest model errors to better determine the effect of changing the model and observation properties.
In all experiments, we use a Rush–Larsen time-stepping scheme for the gating variables, combined with a forward-Euler method for the transmembrane potential.30 When we incorporate stochastic effects (cf. Sec. III A 4), we adapt the forward-Euler method for the transmembrane potential to an Euler–Maruyama method. In this work, we do not incorporate stochastic terms in the dynamics of the gating variables, as previous work demonstrated that similar ensemble inflation effectiveness can be achieved with stochastic currents alone for this model,22 which makes the implementation simpler and more performant.
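As a concrete illustration, the sketch below shows one such time step for a single gating variable and the transmembrane potential. The rate functions `h_inf`, `tau_h`, and `J_ion`, the diffusion operator `lap`, and the noise amplitude `sigma` are placeholders rather than the specific Fenton–Karma expressions used in our implementation.

```python
import numpy as np

def step(v, h, dt, J_ion, h_inf, tau_h, D, lap, sigma=0.0, rng=None):
    """One time step: Rush-Larsen for the gate h, (stochastic) Euler for v.

    J_ion, h_inf, and tau_h are placeholders for the model's ionic current
    and gate steady-state/time-constant functions; D * lap(v) is the
    diffusive coupling. With sigma > 0, the v-update becomes Euler-Maruyama.
    """
    # Rush-Larsen: exact integration of dh/dt = (h_inf(v) - h) / tau_h(v)
    # over dt, holding v fixed.
    h_new = h_inf(v) + (h - h_inf(v)) * np.exp(-dt / tau_h(v))

    # Deterministic drift for the transmembrane potential.
    v_new = v + dt * (D * lap(v) - J_ion(v, h))

    if sigma > 0.0:  # Euler-Maruyama: additive stochastic current
        if rng is None:
            rng = np.random.default_rng()
        v_new += sigma * np.sqrt(dt) * rng.standard_normal(np.shape(v))
    return v_new, h_new
```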
B. Observation extraction
The experimental data comprise optical recordings of fluorescence (cf. Refs. 23 and 31 for details of the original preparation) corresponding to transmembrane potential, which have been smoothed and denoised. The observations are a vector of tuples, each containing a tag designating the observed field variable, coordinates in the domain, the observed field value at those coordinates, and the uncertainty of the observation value. The observed field values are also normalized to the unit interval. As the observations cover only the transmembrane potential, we refer to observation values and transmembrane potential values interchangeably. The recordings cover response excitations for canine ventricle tissues subject to a periodic stimulus applied from a single position, across a large range of stimulus periods or basic cycle lengths (BCLs).
For the purposes of this study, we have focused on a dataset with a short stimulation period, as the patterns generated at this short period are the most irregular, spatially and temporally, and thus present the most challenging reconstruction task. An example of the recordings from this dataset is shown in Fig. 1, detailing (a) the epicardial and (b) the endocardial surface recordings, along with (c) temporal traces of the recording at the center-points of the tissue surfaces. The figure likewise shows an inset square region used for the generation of the observation sets for the assimilation experiments, as well as the boundaries for the epicardial and endocardial surfaces. The recording features a very fast wavefront propagating from figure-upper-left to figure-lower-right, nearly simultaneously on both surfaces [cf. Fig. 1(c)], from a stimulus current applied to the base of the endocardium.23
Over time, these data display the alternans typical of cardiac tissue paced faster than the normal rhythm, namely, beat-to-beat variation in the duration and amplitude of the action potential. We generate observations from the dataset by constructing a cubic interpolant over space with linear interpolation over time, for both the epicardial and endocardial datasets, and sampling this interpolant at pre-specified coordinates corresponding to the inset domain shown as dashed squares in Figs. 1(a) and 1(b). In the case of synthesized observations generated for the interior of the domain (cf. Sec. III B 1), we linearly interpolate through the depth between the values of the observations at the projected positions on the surfaces to determine the value of the synthesized observation in the interior. This ensures that we are propagating information from the surfaces into the mid-depth.
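A minimal sketch of this sampling procedure is given below, assuming frames recorded on a regular grid; the helper names and the use of SciPy's `RectBivariateSpline` (cubic in space) with manual linear blending in time are illustrative, not our exact implementation.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def make_sampler(xs, ys, ts, frames):
    """frames[k] is the recorded surface field on the (xs, ys) grid at time
    ts[k]. Returns f(x, y, t): cubic in space, linear in time."""
    splines = [RectBivariateSpline(xs, ys, f, kx=3, ky=3) for f in frames]

    def sample(x, y, t):
        k = np.searchsorted(ts, t) - 1
        k = int(np.clip(k, 0, len(ts) - 2))   # clamp to the recorded range
        a = (t - ts[k]) / (ts[k + 1] - ts[k])  # linear weight in time
        return (1 - a) * splines[k](x, y)[0, 0] + a * splines[k + 1](x, y)[0, 0]

    return sample

def mid_depth_value(epi_val, endo_val, z, depth):
    """Synthetic interior value: linear interpolation through the depth
    between the projected surface observation values."""
    w = z / depth
    return (1 - w) * epi_val + w * endo_val
```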
Each observation is likewise assigned an uncertainty, which we identify by default as a fixed fraction of the action potential amplitude range. This default value was estimated from the full-width at half-maximum of the relevant distribution of observed values for the dataset over the assimilation interval; see Fig. S2 for a detailed motivation of this computation. In the case of synthetic observations interpolated into the mid-depth of the tissue (cf. Sec. III B 1), we assign a larger uncertainty to account for the perturbative effect of these synthetic observations on the assimilated state, since the observation influence in the analysis is weighted by the inverse of the observation covariance; i.e., they contain no new information from the experimental data other than a highly uncertain suggestion that the mid-plane is excited or unexcited at a particular place and time. In the case of encoding uncertainty about the precise form of the wavefronts (cf. Sec. III B 2), we perform a similar modification of the observation uncertainty using a nonlinear function of the observation field value, which is bounded below by the default uncertainty and peaks at intermediate field values along the wavefront.
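The following is an illustrative stand-in for such a nonlinear uncertainty profile; the functional form (a Gaussian bump over the normalized field value) and all parameter values are assumptions for illustration, not the values used in this work.

```python
import numpy as np

def obs_uncertainty(y, sigma0=0.04, sigma_peak=0.20, y_mid=0.5, width=0.15):
    """Illustrative nonlinear observation uncertainty: bounded below by the
    default sigma0 and peaking at intermediate (wavefront) field values.
    All parameter values here are stand-ins, not the paper's."""
    bump = np.exp(-0.5 * ((y - y_mid) / width) ** 2)
    return sigma0 + (sigma_peak - sigma0) * bump
```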
C. LETKF
The ETKF applies the analysis update in the span of the background ensemble states. Writing the background ensemble mean as $\bar{x}^b$ with deviations $X^b$, and their observation-space counterparts as $\bar{y}^b$ and $Y^b$, the analysis ensemble covariance is expressed as $P^a = X^b \tilde{P}^a (X^b)^\top$, where $\tilde{P}^a = \left[(N_e - 1)\,I/\rho + (Y^b)^\top R^{-1} Y^b\right]^{-1}$ and the analysis ensemble mean $\bar{x}^a = \bar{x}^b + X^b \bar{w}^a$. The scalar factor $\rho$ is the (uniform, constant) multiplicative covariance inflation factor. The mean weights are determined by $\bar{w}^a = \tilde{P}^a (Y^b)^\top R^{-1}(y^o - \bar{y}^b)$. The weights of the analysis ensemble mean deviations are similarly expressed in the basis of the background mean deviations, $W^a = \left[(N_e - 1)\tilde{P}^a\right]^{1/2}$ and $X^a = X^b W^a$. This deviates from the error subspace transform Kalman filter (ESTKF),2 which performs the transformation in the $(N_e - 1)$-dimensional subspace corresponding to the span of $X^b$.
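For concreteness, a textbook sketch of this analysis update (following the standard ETKF formulation) is given below; it assumes a diagonal observation covariance and omits localization, and is not our production implementation.

```python
import numpy as np

def sqrtm_sym(A):
    """Symmetric matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T

def etkf_analysis(Xb, Yb, yo, R_diag, rho=1.05):
    """ETKF analysis in the span of the background ensemble.

    Xb: (n, Ne) background ensemble states; Yb: (m, Ne) their images in
    observation space; yo: (m,) observations; R_diag: (m,) observation
    variances; rho: multiplicative covariance inflation factor.
    """
    n, Ne = Xb.shape
    xb_mean = Xb.mean(axis=1)
    yb_mean = Yb.mean(axis=1)
    Xp = Xb - xb_mean[:, None]            # background deviations X^b
    Yp = Yb - yb_mean[:, None]            # observation-space deviations Y^b

    C = Yp.T / R_diag                     # (Y^b)^T R^{-1}, R diagonal
    Pa_tilde = np.linalg.inv((Ne - 1) * np.eye(Ne) / rho + C @ Yp)
    w_mean = Pa_tilde @ (C @ (yo - yb_mean))   # mean weights
    W = sqrtm_sym((Ne - 1) * Pa_tilde)         # deviation weights

    # Analysis ensemble: mean shift plus transformed deviations.
    return xb_mean[:, None] + Xp @ (w_mean[:, None] + W)
```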
The LETKF extends this analysis update in the space of the background ensemble locally—in the sense of observation influence and covariance. The inclusion of observation and covariance localization applies the preceding analysis to each element of the state vector, where the inclusion of observations for a particular state vector element is determined by the localization procedure. In this work, we use an isotropic localization weighting, controlled by multiplying the background covariance matrix by a decaying fifth-order function that goes to zero at a fixed localization radius, which we have retained from our earlier work for consistency. We refer the reader to Refs. 20, 22, and 32 for further mathematical details of the implementation of the localization procedure.
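One common compactly supported, piecewise fifth-order choice is the Gaspari–Cohn function; the sketch below offers it as an assumed example of the decaying fifth-order weighting described above (the exact function used may differ).

```python
import numpy as np

def gaspari_cohn(r, r_loc):
    """Compactly supported, piecewise fifth-order correlation function
    (Gaspari-Cohn). Weights decay smoothly and reach exactly zero at the
    localization radius r = r_loc. Expects array-like distances r."""
    z = 2.0 * np.abs(np.asarray(r, dtype=float)) / r_loc  # support ends at z = 2
    w = np.zeros_like(z)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z < 2.0)
    z1, z2 = z[m1], z[m2]
    w[m1] = -0.25 * z1**5 + 0.5 * z1**4 + 0.625 * z1**3 - (5.0 / 3.0) * z1**2 + 1.0
    w[m2] = (z2**5 / 12.0 - 0.5 * z2**4 + 0.625 * z2**3
             + (5.0 / 3.0) * z2**2 - 5.0 * z2 + 4.0 - 2.0 / (3.0 * z2))
    return w
```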
Finally, in lieu of an optimized multiplicative and additive inflation scheme, we used multiplicative inflation only. The LETKF utilizes multiplicative inflation of the analysis ensemble deviations with a fixed factor. We did not utilize additive inflation in this work, despite previous results that showed it to be effective,22 in the expectation that our stochastic approach would similarly affect the ensemble while requiring substantially less disk space.
III. RESULTS
A. Influence of model variations on reconstruction accuracy
We have run several experiments for the reconstruction of cardiac excitation patterns. In the simplest case, we compute the baseline accuracy of the reconstruction procedure by incorporating information about the stimulus pattern in the observations into the model dynamics, without any assimilation procedure. Additionally, we consider a strong assimilation process without this stimulus information to assess the importance of modeling the non-autonomous forcing inherent to the observation data for the reconstruction accuracy. Finally, we incorporate the stimulus forcing into the model dynamics in concert with a strong assimilation procedure; this approach maximizes the information supplied to the assimilation and provides a clear measure of the importance of non-autonomous model information to the reconstruction reliability.
1. Free-run
To begin, we ask to what degree the assimilation procedure may be responsible for the reconstruction of the state, compared with the averaging procedure of the ensemble and the synchronization of the ensemble states with the stimulus cadence. We assess only the uncontrolled dynamics of the ensemble members with the stimulus forcing (as, in the absence of stimulus forcing, the ensemble member states all return to quiescence) for the reconstruction problem. To this end, we report the reconstruction results with the assimilation update excised—equivalently, the observations are not permitted to affect any of the state variables—where the model is stimulated periodically to match the stimulus observed in the observations.
In the free-run setting, we perform no assimilation by explicitly filtering all observations from the analysis computation, so that the LETKF reduces to at most a scaling-permutation operator, as the ensemble index is not guaranteed to be preserved. The free-run experiment serves as an effective method of distinguishing the improvement due to the LETKF from that due to the stimulus model in the reconstruction over long assimilation times.
The experiment that produced the dataset used in this study uses a periodic biphasic stimulation, applied outside of the assimilated domain at the base of the endocardium. The stimulus current model, Eq. (13), is monophasic and independent of depth, which is not the same as the experimental stimulus current. Rather, our stimulus current model approximates the effect of the experimental stimulus on the observations within the assimilated domain; i.e., it is a data-driven model, and only approximates the observed effects. As we only observe the effect of the experimental current some distance from its direct point of application, we effectively filter the experimental stimulus current through the tissue observations to construct our stimulus current model. Throughout, we set the amplitude, duration, and spatial scale of the stimulus current to match those observed in the observation data; the period between successive stimuli likewise matches the observation data. Tests were performed to assess the impact of the stimulus current parameters on the state, which were found to fall into two classes: (a) sub-critical (non-excitatory) and (b) super-critical (excitatory) stimuli, which showed no meaningful variation for physically realistic values within each class. This critical excitation phenomenon is well-understood in the one-dimensional case.36–38
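A sketch of such a data-driven stimulus model is shown below; the Gaussian spatial profile and all parameter values are placeholders (the actual form of Eq. (13) and the fitted amplitude, duration, period, and spatial scale are not reproduced here).

```python
import numpy as np

def I_stim(x, y, t, A, dur, period, x0, y0, scale):
    """Monophasic, depth-independent, periodically applied stimulus current.
    The amplitude A, duration dur, period, center (x0, y0), and spatial
    scale are tuned to the observation data; the Gaussian profile here is
    an illustrative stand-in for the model's actual spatial shape."""
    on = (t % period) < dur                   # active during the stimulus window
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    shape = np.exp(-r2 / (2.0 * scale**2))    # smooth spatial localization
    return A * shape * on
```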
In Fig. 2, we show the surface error metrics and the ensemble spread through the depth of the tissue for the free-run experiment using the Barone et al. parameter set. The surface errors indicate significant disagreement between the background ensemble and the observations, as well as in comparison to the analysis ensemble mean; the ensemble does not accurately reconstruct the surface dynamics in the absence of assimilation, as expected. Furthermore, the variability of the error decreases over time as the forcing of the stimulus current synchronizes the large-scale features of the ensemble states and the information about the ensemble initialization is slowly lost. Likewise, the spread is nearly invariant with respect to depth, indicating that we correctly restore the depth-invariance symmetry of the ensemble states when the observations are not present in the reconstruction procedure.
These results serve as a baseline for comparing with the experiments to follow. For the free-run experiment, we have used our unmodified observation set and standard ensemble initialization, with the stimulus model and absent any assimilation. The ensemble means therefore do not correspond to a reconstruction, as there are no guiding observations, and so we do not show the mean fields.
2. Autonomous model
The first experiment uses an autonomous, deterministic cardiac excitation model for the ensemble members in the assimilation of these observations, to consider the significance of model error arising from model parameters that may or may not effectively reproduce the observed action potentials. Instead of adapting the model to every stimulus protocol, it is dramatically simpler to use the properties of the Kalman filter to match the periodicity of the stimulation without explicit recourse to a modeled stimulus pattern. This approach has the obvious limitation that we have rejected a potential source of additional information and introduced latency into the assimilation, as the ensemble can only react to the next stimulus after it has already appeared in the observations. Given this restriction, this experiment shows the worst-case, “low-information” estimate of the state of the system using the LETKF assimilation infrastructure. We refer to this experiment as “autonomous.”
Figure 3 shows the surface error, ensemble consistency, and spread metrics over time for the reconstruction using the Fenton–Karma model with the Barone et al. parameter set. The assimilation does a very good job of reconstructing the surfaces of the state to match the observations after the initial transient, which arises from the phase mismatch of the initial ensemble and the observations, producing high initial error. The ensemble consistency metric is strongly correlated with the surface error, as expected, and remains small over long time scales, growing slowly and oscillating due to the underlying excitation period. This behavior is due to the lag inherent to this assimilation experiment, as the information of the next excitation must propagate through the full LETKF before affecting the background ensemble. The dominant contribution to the surface error corresponds to a peak occurring during the full excitation of the domain, with fast relaxation of the error as the excitation amplitude dissipates. One interpretation of this contribution is that, on the surfaces, the LETKF struggles with the uncertainty of the wavefront and the deviation of the recorded wavefront shape from those generated by the nonlinear model for the background ensemble (the model lags the observed wavefront, by construction) but manages to reproduce the waveback accurately. Indeed, the innovation of the LETKF is large during the propagation of the wavefront—the LETKF is correcting a slow conduction velocity (CV) in the model to match the excitation pattern observed in the data—and relatively smaller during the propagation of the waveback. The spread for this parameter set begins small and quickly grows in the interior, representing the dominant contribution of the uncertainty of the state of the system, as expected. As the information about the excitation propagates from the surfaces toward the interior of the domain, it is clear that the LETKF is unable to constrain the interior of the solution using only the surface observations. That said, the maximum of the spread at mid-depth saturates after an initial growth period, indicating that the saturated value reflects the true uncertainty of the interior.
On the surfaces, after an excitation, the next action potential (AP) is driven by the assimilation perturbing the state in an appropriate region to cause a new propagating wave that matches those in the observations, as expected. Additionally, as the different parameter sets generate APs with different wave speeds, the assimilation continually corrects the propagation directly ahead of the wavefront. When the wave is slightly too slow, this correction is effective, as the region preceding the wavefront is expected to be excitable; thus, small perturbations ahead of the wave create a new front that connects to the existing lagged wavefront before the next assimilation time. When the wave is slightly too fast, this correction is ineffective, as reducing the transmembrane potential by a small amount on the wavefront does not significantly hinder the propagation—doing so requires a larger, quenching perturbation related to the gap between the middle unstable nullcline of the transmembrane potential and the rest state. This asymmetry in effective control is apparent on the surfaces because of the presence of observations, and likewise means uncontrolled wave speed errors propagate in the interior.
The phenomenology of the assimilation dynamics is exemplified by Fig. 4, which shows planar slices of the background ensemble mean at fixed depths for three sample times from the autonomous experiment. Near the beginning of a new excitation in the observations, cf. Fig. 4(a), the ensemble mean is a very good approximation of the distribution of the excitation at the surfaces, while the interior slices are not reasonable approximations of an interpolating function through the depth. Rather, the interior is too uncertain to match the propagation of a wave across the domain, which we expect to pass nearly uniformly through the depth, in time with the surfaces.
Furthermore, these slices highlight a significant contribution to the model error: the timing of the front in the ensemble is lagged compared to the observation data. This behavior is due to the interaction of three phenomena related to the timing of the periodicity of the observations and how these synchronize the ensemble dynamics.
The first and most significant is the slower conduction velocity of the model relative to the observation data. Despite being manually tuned to the data,26 the parameter set is unable to adequately match the observed front speed without a significant overestimate of the tissue conductivity (cf. Ref. 26),39 and thus the timing of the excitation wave in the ensemble relative to the observations becomes a significant, ongoing correction for the LETKF. Previous in silico studies have found the LETKF to adequately address a variety of systemic model errors in 3D, including fiber angle rotation, tissue conductivity, and the explicit time scales of the model.20–22 Ahead of the wavefront, the LETKF can readily excite the leading edge of quiescent tissue in the ensemble member states, speeding up the front to match the excitation propagation in the observations, provided there are ensemble members that are already excited in that leading region. Due to the slower conduction velocity, this scenario is rarely the case, as the ensemble members synchronize their phase with the observations due to the driving of the LETKF.
The second contributor to the model error is the lag inherent to the propagation into the interior of the domain for the autonomous model. The interior dynamics are lagged by at least the time it takes information on the surfaces to propagate to the interior at the transverse wave speed. Thus, even when the surfaces correctly estimate the distribution of the excitation in the observations, the interior distribution is dominated by the dynamics from several assimilation steps ago. These discrepancies are systematic errors in the reconstruction that are largely invisible to the surface error metric, as it relies on the surface observations, but should affect the ensemble consistency metric if the ensemble members differ in the interior, and likewise the spread.
The third contributor is the lag associated with the autonomous model dynamics due to the lack of stimulus information. During quiescence, the next excitation wave first appears in the observations at some assimilation time; these observations are used in the LETKF, and their influence appears in the analysis ensemble states at that time. These analysis ensemble states form new initial conditions for the dynamical model, and the effect of this perturbation due to the initial observations appears in the next background ensemble, one assimilation interval later, thereby introducing a lag between the background ensemble and the observations of no less than the assimilation interval. If the threshold for excitation is not met for the analysis ensemble state, then the lag between the observed excitation stimulus and the resulting wave will be larger, which is typically the case, as the initial observations of an excitation are, by definition, small, and likely sub-threshold.
Finally, we must note several features of the observations and reconstruction that will become relevant in later results. The observations smoothly interpolate from quiescent to fully excited across the wavefront, cf. Fig. 4(a). Such blunted fronts represent a configuration not achievable with the model at hand—this model exhibits a threshold, such that the wavefront is sharply defined; i.e., the transmembrane potential is very likely to be close to either the rest state or full excitation, and the probability of intermediate values along the wavefront is exceptionally small. The observations suggest that, at this particular time, the field has a significant probability of taking such intermediate values. This tension between the spatially smooth data and the sharp features of the model forms a significant impediment to reliable reconstruction.
Additionally, the dynamics of the model with the chosen parameter set—despite being manually tuned for this dataset—tend to overshoot the action potential amplitude (APA) of the observations, cf. Figs. 4(a)–4(c). This behavior is caused, first, by the analysis step of the LETKF over-correcting the ensemble states. The effect then persists due to the relatively short model integration interval, which is not sufficient for the state to relax onto the slow manifold. This effect may also be observed in the free-run experiment due to the relatively high frequency of the stimulus current (not shown). Notably, the effect of the high-frequency pacing stimulus current and the overshoot of the analysis step are not additive.
3. Explicit stimulus current modeling
This experiment considers the effect of including a non-autonomous, spatially localized, explicit stimulation current in the dynamics of the model that is tuned both spatially and temporally to qualitatively match the stimulation observed in the experimental dataset, and the potential for catastrophic synchronization of the ensemble in the presence of this explicit modeling choice. The stimulus current used is the same as that used in the free-run results (Sec. III A 1). We refer to this experiment as “stimulus.”
The results for the model are shown in Fig. 5, where the reconstruction is largely insensitive to the inclusion of the stimulus current. Several small changes appear in the surface error compared to the autonomous results. The reconstruction error exhibits smaller peaks during the excitation phase, where the stimulus current triggers the excitation wave ahead of the lag associated with the observation-LETKF-model path in the autonomous experiment. At later times, drift in the timing of the stimuli leads to an early stimulus, which is corrected by the LETKF on the surfaces, effectively constraining the timing error. These early stimuli appear as small increases in the surface error from the baseline, outside of the dominant contributions from the excitation wave in Fig. 5(a)—the short-lived effect on the surface error is also an indication that the reconstruction procedure is robust with respect to errors in the specification of the stimulus period, a prime concern due to the timing-related contributions covered in the autonomous reconstruction.
In Fig. 6, we show (a) the surface error for the autonomous, stimulus, and free-run experiments over time (left), with marginal density plots of the surface errors (right), for reconstruction using Fenton–Karma and the Barone et al. parameter set. The improvement of the surface error over the free-run experiment is significant, but the surface errors for the autonomous and stimulus experiments are comparable, both overall and over time. The most significant correction for the stimulus results over the autonomous model is a shorter initial transient, as the first excitation reaches a lower amplitude error on the surfaces, and thus the ensemble synchronizes to the true stimulus rhythm more quickly. The marginal densities make it clear that the inclusion of the LETKF effectively lowers the expectation value of the surface error overall. For both the autonomous and stimulus experiments, the surface error is nearly identical—it appears that the inclusion of stimulus information into the model dynamics has a minimal impact on the surface reconstruction error over long times, save for the minor advantages noted above. The depth-averaged spread for the free-run experiment is larger than in either the autonomous or stimulus experiment, reflecting the low confidence of the reconstruction in the absence of the LETKF. Indeed, the depth-averaged spread is nearly identical for the autonomous and stimulus results, with a subtle bias in the latter toward larger variation over time, cf. Fig. 6(b).
In addition to the reconstruction error of the field, it is worth considering the error in physiologically relevant quantities derived from the state, e.g., those which inherit from the structure of the action potential, including the action potential duration (APD) and amplitude (APA). We report the temporal trace of the autonomous reconstruction for a randomly selected position in the observations in Fig. 7(a). After an initial transient, which lasts less than two BCLs, the reconstruction at this point is excellent, with minor errors appearing in the apex of the action potential and thus contributing to a poor estimate of the true APA, cf. Fig. 7(b). The distribution of APD between the observation and reconstruction traces matches exceptionally well, as expected, because the predominant error in the reconstruction appears far from the threshold value. Similar calculations for all LETKF-driven reconstructions are included in the supplementary material, cf. Fig. S1.
4. Stochastic current model effects
In Fig. 8(a), we show the surface errors of the reconstruction for the Fenton–Karma model with the Barone et al. parameter set, with the current-based SDE and SMP stochastic effects. For these relatively low-amplitude stochastic effects, we find very minor changes to the surface reconstruction error—an effective smoothing of the error peaks corresponding to the excitation wavefront and waveback, which is expected from a uniformly stochastic effect. Analogously with the stimulus result, the transient in the surface error is slightly worsened by the inclusion of the stochastic effects; an early excitation exhibits a larger peak error.
In previous work, we showed that the stochastic additions were capable of permitting the ensemble to better approximate the local degrees of freedom of the true dynamics by effectively inflating the spread for a given ensemble size. In this experiment, we find no such effect. We find that the addition of the stochastic current accelerates the growth of the depth-averaged spread over the initial phase, before switching to a conservative effect over the long-term reconstruction, suppressing short-lived variations in the spread. This behavior indicates that the addition of stochastic processes to the model may enhance synchronization of the ensemble in some scenarios, lessening the ensemble spread, or may suppress total decoherence of the ensemble.
Comparing the surface errors corresponding to the autonomous (Fig. 3), stimulus (Fig. 5), and stochastic (Fig. 8) model experiments, we find that the inclusion of the stimulus current or stochastic effects has a small effect on the overall reconstruction error. We may interpret this as, at best, a marginal improvement in the accuracy of the surface reconstruction with the inclusion of the stochastic or stimulus current into our dynamical model. Alternatively, we may interpret the autonomous result as achieving reconstruction errors lower than expected and with confidence comparable to these more informed model approaches. Likewise, due to the slow drift of the timing of the stimulus current in the observations and model, we may be confident in the robustness of the reconstruction with respect to model errors related to stimulus timing.
B. Influence of observation perturbations on reconstruction accuracy
If modifications to the model have only subtle effects on the reconstruction of the state dynamics, then perhaps the tuning of experimental observations will have a more significant effect. In these experiments, we perturb the observations from the experimental data to better model the uncertainty associated with the physical processes involved in the recordings, or to buttress the highly sparse observations with some intermediate information based on the apparent temporal structure of the data. Likewise, this investigation presents an opportunity to ensure that our reconstruction is robust for the type of data available to the cardiac researcher.
1. Addition of synthetic observations
In this experiment, we explore the generation of mid-depth synthetic observations, in addition to those on the surfaces, for use in the assimilation process. The role of these synthesized observations is to gently constrain the ensemble spread in the interior of the tissue, which we have previously identified as a predominant source of error in the reconstruction of the state in this system.22 The synthesized interior observations are interpolated between the observation values from their projections onto the epicardial and endocardial surfaces. The synthetic observations are appended to the real observations, the former having fixed point-wise uncertainty estimates substantially larger than those of the latter. Different spatial distributions of mid-depth observations can have a significant effect on the robustness and accuracy of the reconstruction. A full arrangement of observations that sample the mid-depth grid in the same positions as on the surfaces led to immediate ensemble collapse due to strong contraction of the analysis step (not shown). We instead use a relatively sparse arrangement in which observations were distributed uniformly in a coarse grid, which did not collapse the ensemble; a sketch of the construction follows below.
We add these synthesized observations due to the spatiotemporal pattern of the underlying excitations, where the wave on the surfaces passes nearly uniformly in space and time. We thus treat the observations on each surface as strongly coupled, and thus amenable to interpolation through the depth, i.e., between the surface sets. We report the impact these additional observations (a modest increase in the total number of observations) have on the reconstruction fidelity.
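The sketch below assembles such an augmented observation set; the grid size `n`, the mid-plane choice, and the uncertainty value `sigma_int` are illustrative placeholders, and the samplers are assumed to behave like `make_sampler` from the earlier sketch.

```python
import numpy as np

def augment_with_interior(surface_obs, sampler_epi, sampler_endo,
                          xs, ys, depth, t, n=8, sigma_int=0.3):
    """Append sparse synthetic mid-depth observations to the surface set.
    Each synthetic value linearly interpolates, through the depth, between
    the projected epicardial and endocardial observation values, and is
    assigned a large fixed uncertainty sigma_int. The n-by-n grid size and
    sigma_int are placeholders, not the paper's values."""
    obs = list(surface_obs)                  # tuples: (x, y, z, value, sigma)
    for x in np.linspace(xs[0], xs[-1], n):
        for y in np.linspace(ys[0], ys[-1], n):
            v_epi = sampler_epi(x, y, t)
            v_endo = sampler_endo(x, y, t)
            z = 0.5 * depth                  # mid-depth plane
            value = 0.5 * (v_epi + v_endo)   # linear interpolant at mid-depth
            obs.append((x, y, z, value, sigma_int))
    return obs
```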
The results of the assimilation with the synthetic mid-depth observations are depicted in Fig. 9. Immediately, we note that while the overall character of the reconstruction is unchanged, the reconstruction error now peaks significantly higher than in any other assimilation experiment, cf. Fig. 9(a). This finding immediately indicates a significant deviation from the observations but does not reveal whether this discrepancy is due to the interior, synthesized observations or the “true” surface observations. The contribution is distinguished by the surface error, which likewise peaks significantly higher than in any other assimilation experiment, cf. Fig. 9(b), and which is computed over the surface observations only, for consistency with the other reconstruction experiments. Notably, the full-set error may be decomposed into contributions from the surfaces and the synthetic observations in the interior; doing so would provide a direct estimate of the error of the reconstruction in the interior, provided our interpolation Ansatz is correct and reasonable. However, we have no rigorous way to assess the interpolation Ansatz and so cannot assert the correctness of the reconstruction in the interior using this method. We conclude that the inclusion of interior synthetic observations interpolated from the surface observations leads to significantly worse reconstructions on the surfaces, despite the high uncertainty assigned to the synthetic observations.
The resolution of this puzzle may appear from a consideration of the ensemble spread. While the additional mid-depth observations are extremely good at restricting the uncertainty of the reconstruction in the mid-depth of the domain, cf. Fig. 9(c), since information from the surface observations can now directly affect the interior layers of the solution, this effect is not always desirable. By constraining the interior, we limit the spread of states used in the reconstruction of the surfaces as well. This now-restricted ensemble of states may not adequately cover the physical instabilities present in the observed surface dynamics, limiting the accuracy of the reconstruction for observations in which we are confident (those on the surface) for the sake of observations in which we are not (those in the interior). Tuning the uncertainty and distribution of the interior observations relative to the surface observations to critically constrain the ensemble thus presents an additional window for optimization, although it is beyond the scope of this work.
2. Observation uncertainty modeling for fronts
In Fig. 10, we report the effects of the wavefront uncertainty experiment. We find that the additional uncertainty along the wavefronts and wavebacks in the observation data leads to a small, but consistent, increase in the surface error near the end of the wavefront, and likewise increases the surface error during the end of the waveback, introducing a set of smaller peaks in between the usual set associated with the end of the wavefront. As we are encouraging deviations in the ensemble members near the beginning and end of the excitation wave, this finding is not unexpected—exactly in the transition between excited and unexcited, the spread is increased slightly, and it makes the most significant relative contribution to the surface error where and when the error is already low. Additionally, as the observations are synchronizing forces for the ensemble, subject to the inverse observation covariance weighting, a higher uncertainty for the observations should lead to less synchronization overall and higher ensemble spread. We observe faster initial growth of the spread compared to the autonomous experiment, but the long-term maximum is not significantly altered.
In Fig. 11, we show the surface error and spread for the autonomous, wavefront uncertainty, and synthesized observations experiments. Both the wavefront uncertainty and synthesized observations experiments lead to larger surface reconstruction errors overall compared to the autonomous experiment, though their relative contribution over time is highly non-uniform. While the autonomous experiment reconstruction manages to match the APD and DI of the observation data, the synthesized observation experiment reconstruction undergoes large deviations from accurate reconstructions, frequently lasting longer than the APD of the observations. Likewise, while the wavefront uncertainty experiment explicitly trades confidence about the precise position of the wavefronts for the robustness of higher ensemble spreads, this robustness never materializes in practice; the autonomous experiment spread is comparable to that from the wavefront uncertainty experiment.
IV. DISCUSSION
In all results, the unobserved interior layers that are not driven by the LETKF analysis ensemble update form the dominant contribution to the reconstruction error, as expected from previous results using surface observations generated from model runs.22 However, previous experiments focused on dynamics that are autonomous and (on the large scale) structured by topological features, i.e., the dynamics of scroll waves are organized around their filaments; in the present work, we have focused on experimental results that are necessarily non-autonomous and driven by external stimuli, as these conditions are endemic to experimental investigations of cardiac tissue excitation. In the present scenario, without such topological constraints on the dynamics, all the dynamics are driven by the observations and thus by the LETKF update of the analysis ensemble, save for the free-run experiment, which corresponds to the absence of any observations.
A. Model error interactions with the assimilation
Several considerations have an outsized impact on the accuracy and stability of the reconstruction for this dataset. The first is the model error due, primarily, to misparameterization. For example, when the refractory period is sufficiently long, subsequent excitations generated by the LETKF develop into a conduction block due to the refractory behavior of the model. The blocked wavefronts then prevent accurate reconstruction in the next assimilation step. Additionally, it is important for the model parameterization to match the propagation speed of the wave; slow waves rely on the assimilation scheme to excite the tissue ahead of the wavefront to match the observations, while fast waves rely on the assimilation scheme to suppress the propagation of waves in the model to match the observed wavefront. That the asymmetry between the ignition and quenching problems has significance for the reliability of the state reconstruction is surprising, at first glance. However, this effect mirrors our understanding of critical transitions for excitable media: the ignition problem relies only on the local threshold response of the excitable medium, while the quenching problem is sensitive to the particular pattern of the excitation and is thus a more nonlinear process.
The parameterization of the model can also affect the dynamical range of the ensemble states, such that the analysis may drive the fields beyond the range of the observations, i.e., “overshoot.” In our present problem, the action potentials of the Fenton–Karma model are only weakly bounded to the same domain as the observations, whereas the assimilation step utilizes a strict bound for the analysis states. Due to the short model integration times and the overshoot of the analysis step, the state cannot relax sufficiently quickly to account for the overshoot, resulting in anomalously large field values, which likewise must be accounted for in the next assimilation step. This model-assimilation feedback cycle presents further opportunities for assimilation filter investigations.
One possible way to improve the robustness of the reconstruction process is by extending the reconstruction to the model parameters in addition to the state vector. This approach is especially useful for model (parameter) identification in the absence of a known model (parameter set). We have opted to reserve this avenue of investigation for future work, both because a published model parameter set for this dataset and model exists25 and because our stochastic model inflation techniques have previously been shown to be adept at accounting for model errors (specifically those arising from mis-parameterizations).22 A thorough study of the identification of model parameters for cardiac electrical excitation models using data-assimilation techniques may provide useful advances in clinical applications.
B. Observation interactions with the model dynamics
Observation processing is also a more subtle art than it first appears. Optical-mapping recordings effectively blur the position of the wavefront by recording fluorescence not only from the top-most layer of cells in the tissue, but also from the transferred fluorescence of excited layers of cells below the surface. This blurring manifests, instead of as a sharp wavefront, as a leading edge that smoothly transitions from quiescent to excited across a finite width. Numerical post-processing to improve the signal-to-noise ratio of the data is typically achieved with a spatiotemporal convolution, which exacerbates this smoothing of the sharp features of the wavefront into a distribution of marginally excited cells. However, as the experimental data we use during constant pacing reach steady state, stacking was used,40 which increases the signal-to-noise ratio without further spatial smoothing. This distribution of marginally excited cells cannot be reconstructed by a threshold excitable model—we rely on the assimilation scheme to effectively interpret the steep wavefronts generated by the ensemble models for a given smooth observed wavefront. We have considered the blurring of the state observations by an observation operator that does not merely sample the field at particular positions but computes the local average of the field through a fixed-width box-filter convolution. We anticipated that the convolution length scale would compete with the length scale of the localization process, and likewise that the convolution filter would compete with the filtering step of the LETKF; we expected an overall reduction in informational content about the ensemble, as the elements of the observation vector become substantially more correlated, harming the reconstruction by introducing an error into the ensemble observations that is essentially uncontrollable. We found that, in practice, the difference from the sampling-based observation operator used in this work is minimal—cf. Fig. S3 in the supplementary material for further details and results.
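A minimal sketch of such a blurring observation operator is given below, assuming surface fields on a regular grid; `uniform_filter` implements the fixed-width box average, and `width` is a placeholder for the filter size that competes with the localization length scale.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def blurred_observation(field_2d, obs_ij, width):
    """Box-filter observation operator: locally average the surface field
    with a fixed-width box filter, then sample at the observation pixels
    obs_ij (a list of (i, j) indices). 'width' is an assumed filter size."""
    blurred = uniform_filter(np.asarray(field_2d, dtype=float),
                             size=width, mode="nearest")
    return np.array([blurred[i, j] for i, j in obs_ij])
```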
Likewise, the estimation of observation uncertainty for cardiac experimental recordings is an important topic that we have treated only superficially in this work. As the optical-mapping recordings and post-processing involve a spatial averaging that essentially smooths the steep wavefronts endemic to real excitable media, we may heuristically assign a nonlinear uncertainty to the observation data based on the approximate value of the observation, as in Sec. III B 2. Because there are effectively three “slow” states observed in the data for the transmembrane potential—quiescent, marginally excited, and fully excited—and we suppose the middle state is an apparent figment of the measurement scheme, we may estimate the probability that the observation of a cell in that state should remain in that same state by the next observation time (equivalently, assimilation interval) and interpret this proportionality factor as the uncertainty of the observed transmembrane potential. For the Fenton–Karma model, we may map these observed slow states to ranges of the transmembrane potential using the model parameters, with an intermediate range representing the marginally excited state. The uncertainty of observations outside this range should be determined by the noise floor of the observations. As the true bounding values are undefined, but may be assumed to be close to those used in this work, a smooth extension to all observation values is needed; heuristics for nonlinear observation uncertainty depending on the state lead to a lesser weighting (higher uncertainty) for observations of the wavefront and waveback. Precisely constructing these estimates requires building a statistical model of the observations, which is beyond the scope of this work. Our initial experimentation, based on heuristics that smoothly increase the uncertainty of observations in the marginally excited region, suggests that this subtle modification of uncertainty can have significant impacts on the accuracy and robustness of state reconstructions.
Finally, while we have focused on a particular dataset for the reconstruction task, we have computed reconstructions with related data at a slower BCL and with different model parameter sets. For the slower BCL, the conclusion is the same: the propagation of excitation information from the observations to the ensemble state vectors is slow and requires consistent work from the LETKF to correct, which manifests as peaked errors in the surface reconstructions matching each new excitation. For alternative model parameters, the periodicity is still matched, but the difference in wave shape results in significant increases in reconstruction error. These experiments suggest that our approach is robust to peculiarities of the dataset and model specification. Further details are available in the supplementary material; see Figs. S4–S8.
C. Assimilation interactions with the observation uncertainty
The specification of the LETKF includes several numerical parameters that we found to influence the efficacy of the assimilation program. The first is a floating-point parameter named “gross error,” which controls whether an observation is sufficiently deviant from the background mean to be truncated, with the goal of maintaining a “light touch” for the assimilation and enhancing the stability of the scheme in the presence of large deviations from the observations. For an observation-uncertainty pair and the corresponding value of the background mean field, if the deviation of the observation from the background mean exceeds the gross-error multiple of the observation uncertainty, then the observation value is overwritten by the background mean value; i.e., the update of the analysis ensemble is invariant with respect to this observation. As the range of the field and the minimum uncertainty of the surface observations are fixed, a sufficiently large gross-error parameter ensures that every observation is used in every assimilation step. However, we find that setting the parameter below this saturation value can significantly affect the results, producing maximal surface reconstruction errors notably lower than those in the saturated case. For this reason, we set the parameter below saturation for all experiments in this work. In program logs, it is clear that only a small fraction of observations is ignored per assimilation step for the autonomous experiment, and only in the initial iterations of the assimilation, which suggests that we are filtering only truly extreme values. In principle, controlling this parameter is akin to inflation—it is a measure of our relative confidence in the observations and the background ensemble members. It is not clear a priori what value this parameter should take, and because it affects the accuracy of the reconstruction, characterizing its effects should be a priority in future investigations.
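A sketch of the gross-error check as described above (the variable names are ours):

```python
import numpy as np

def gross_error_filter(yo, sigma, Hxb_mean, e_gross):
    """Neutralize observations deviating too far from the background mean:
    if |yo - H(xb_mean)| > e_gross * sigma, overwrite the observation with
    the background-mean value, so the analysis update is invariant with
    respect to that observation."""
    deviant = np.abs(yo - Hxb_mean) > e_gross * sigma
    yo_used = np.where(deviant, Hxb_mean, yo)
    return yo_used, deviant
```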
Further, we have used a fixed default observation uncertainty throughout this work, estimated from the full-width at half-maximum of the distribution of the observation values. While we found some sensitivity to the precise value of the uncertainty (e.g., experiments with smaller and larger values produced reconstructions with higher error overall), the results are qualitatively similar to those presented. A rigorous estimation of the observation uncertainty from the statistical properties of the dataset is beyond the scope of the present work but may reveal that the accuracy of the reconstruction can be improved through better estimation of the observation covariance matrix.
In this work, we have used “strongly coupled” DA (in analogy to the same term used in domain-decomposed models in weather forecasting) for the analysis update to the state variables in all experiments: we permit observations of the transmembrane potential to affect the analysis update of the gating-variable fields. In contrast, “weakly coupled” DA communicates information between the state variables only through the model evaluation. Strongly coupled DA is expected to extract more information from the same observations than weakly coupled DA.41 Strongly coupled DA permits more significant corrections to the state for our relatively sparse observation pattern, at the risk of over-driving the corrections to the gating variables in ways that are atypical for cardiac excitations. Restricting the analysis update due to observations of the transmembrane potential to that variable alone reduces the efficacy of the assimilation by preserving more ensemble state information in the analysis, while permitting larger ensemble spreads due to the uncontrolled dynamics of the gating variables. In numerical experiments, we found that in some scenarios we could drive the gating variables to their rest-state values during the assimilation, when in model simulations of comparable dynamics the gating variables never attain their rest values. Open questions include whether the relatively weak coupling of the cardiac model is sufficient to ameliorate some of the challenges to robustness that strongly coupled DA is expected to expose, or whether a weakly coupled DA approach unacceptably degrades the accuracy of the reconstructions. A systematic analysis of strongly and weakly coupled DA in the cardiac context would clarify the effect this choice has on robustness and accuracy and should be a focus of future work.
Relatedly, when we found that the LETKF was over-driving the gating variables and leading to physically unlikely values, we bounded the gating variables between their explicitly known limit values. This bounding prevents the assimilation from creating unreachable analysis states that have little to do with the true dynamics—a hazard for what is effectively an initial condition for the model. Additionally, we found that a similar effect could occur in the transmembrane potential, where some ensemble members would be driven to exceedingly large-valued and hyper-localized corrections of the state, leading to anomalously large ensemble spreads, which should not be possible for typical values of the field. To suppress this growth, we bounded the transmembrane potential as well, which has the positive effect of bounding the ensemble spread. Further, while the limits on the transmembrane potential are sufficiently strong to bound the ensemble spread, they are not strong enough to prevent overshoot beyond the observed range. However, it is presently unclear whether this is an innocuous addendum to the assimilation procedure to account for numerical instabilities in the model-LETKF interaction or symptomatic of a limitation of our approach. Nor are the constraints on the initial conditions to the model sufficiently well-defined mathematically to say these limits need never be adjusted for this or another model.
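A minimal sketch of this bounding step, applied to each analysis ensemble member, is shown below; the gating variables are clamped to their known range [0, 1], while the specific transmembrane-potential bounds are placeholders, since the exact limits used are not reproduced here.

```python
import numpy as np

def bound_analysis_state(v, gates, v_min=-0.5, v_max=1.5):
    """Clamp an analysis state to physically reachable ranges: gating
    variables to [0, 1], the transmembrane potential to a loose bounded
    interval (v_min, v_max are illustrative placeholders)."""
    gates = np.clip(gates, 0.0, 1.0)
    v = np.clip(v, v_min, v_max)
    return v, gates
```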
The observation localization is controlled by two parameters for space and one for time; the latter is set so that only the current time affects the assimilation, as this is compatible with the clinical application and is computationally efficient. The spatial localization scales are isotropic, and we have not investigated the choice of different localization scales or anisotropies. In principle, given the density and low noise of surface observations, it may be computationally advantageous for their influence to extend through the depth of the tissue while retracting their overlap on the surface. However, this perspective is difficult to justify physically—indeed, it assumes a surface-dominant dynamics of the interior that is, at an abstract level, the phenomenological question we are considering in this work. Physically, the influence of each observation may extend to the spatial region that may be excited by the propagation of excitation information in the time between observations—the conduction distance per assimilation interval for a quiescent state—making it wholly anisotropic and, in fact, shortening the extent of influence through the thickness of the tissue compared to the effect along the surfaces, as transverse propagation is slower than longitudinal propagation by design. The corresponding fiber-informed localization scales are then related by the inverse conductivity anisotropy ratio and vary with spatial position. Likewise, this scaling informs the observation distribution—while we are limited to surface recordings of electrofluorescence, a consideration of localization informed by the properties unique to cardiac excitation may benefit the cardiac researcher.
D. Accuracy measurement in the absence of “truth” states
Finally, we have focused on the surface error of the state reconstructions and appealed to CRPS for a measurement of ensemble consistency in the absence of interior observations or a “truth” state. The choice of this measure is motivated by its convergence properties with respect to ensemble size and its simple interpretation rather than its particular relevance to cardiac state reconstruction. Alternative error measures have been used in other works, e.g., threshold-based error,21 which makes reference to an unknown truth state, a prescribed threshold value, and knowledge of the dynamical properties of the model. Constructing a fair (in the sense of CRPS) measure of unknown state feature error requires the construction of a statistical model of the dynamics of the state itself, against which we might compare the expectations of the reconstructed state. This effort would also aid in physically motivated estimation of observation uncertainties, as mentioned above.
The creation of a statistical model of the dynamics for the state reconstruction problem is precisely the approach taken by researchers in the machine-learning literature, which has shown promising results. Ref. 42 investigated long short-term memory recurrent neural network, autoencoder, and diffusion artificial neural network (ANN) models for the 3D reconstruction problem in a simple excitable model, while varying the history available to the model and the layer depth of the reconstruction. The central results of this approach are encouraging, especially for shallow depths (close to the observations) and long histories. Notably, considering the parsimony of their observations compared to those of this work, the authors found reconstruction limitations similar to ours for a remarkably different dynamical model and geometry. Ref. 43 investigated the reconstruction problem for the Aliev–Panfilov model using a U-net architecture ANN with considerations of the informational content of the observations similar to ours. In particular, the authors found that simultaneous observations of both the top and bottom layers of the excitable medium produced more reliable reconstructions than single-surface observations, but that “projection” observations—those integrating over the depth of the tissue—worked even better. Their results using the U-net ANN architecture in the “laminar” dynamics regime (an isolated scroll wave with size comparable to the depth of the tissue) are comparable to previous in silico results for a similar dynamical pattern produced by a different cardiac excitation model in conjunction with the LETKF and stochastic inflation.22 The authors also found that the encoding of depth information into the observations was essential for the success of their methods, which limits the applicability of these methods to ventricular fibrillation.43
The combination of statistical-model (machine-learning) and dynamical-model (data assimilation) approaches to reconstruction is an active avenue of research. One of the central pragmatic limitations of ensemble data assimilation approaches is that storage requirements grow as $\mathcal{O}(N_e)$ in the ensemble size $N_e$, while computation (for the LETKF32) grows as $\mathcal{O}(N_e^3)$; replacing some of the ensemble members with a pre-trained machine-learning model could reduce both costs. Likewise, machine-learning models require training and do not offer an estimate of their prediction uncertainty; coordination within a data assimilation framework could alleviate the latter issue with existing infrastructure, while hot-swapping dynamical models with machine-learning models could provide relief for the former. In particular, we have found that it may take up to ms for the ensemble to settle into a good approximation of the true uncertainty of this system; machine learning may provide an accelerated convergence path through better estimates of the initial state covariance or of the ensemble initial conditions based on extensive training data.
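As a rough illustration of these scalings (all sizes below are assumed for the example and are not those of our experiments):

```julia
# Back-of-the-envelope cost model for ensemble DA: storage grows linearly
# with the ensemble size Nₑ, while the local LETKF analyses involve
# Nₑ×Nₑ matrix work and so scale like Nₑ³ per local region.
n_state = 3 * 200 * 200 * 20          # three fields on an assumed grid
for n_ens in (32, 64, 128)
    storage_gb = 8 * n_state * n_ens / 1e9   # Float64 ensemble snapshot
    println("Nₑ = $n_ens: storage ≈ $(round(storage_gb; digits = 1)) GB, ",
            "analysis cost ∝ $(n_ens^3)")
end
```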
V. CONCLUSION
In this work, we have performed several assimilation experiments using physical data with variations to the model and observations. We have found that the assimilation of experimental observations for cardiac state reconstruction brings unique challenges compared to the assimilation of model-produced observations. The leading contributors to the difficulty of state reconstruction are the type of dynamics under study (non-autonomous, stimulated dynamics), model error due to mis-parameterization, the presence of synchronizing inputs (i.e., the stimulus current), and the estimation of uncertainties for experimental observations.
When the model generates fundamentally different types of dynamics from those described by the experimental data, we rely on significant corrections from the LETKF. These large corrections, in turn, increase the likelihood of generating physically unrealizable states. In the cardiac context, unphysical behavior may manifest as an excitation that spreads significantly faster than the model permits for our parameter setting—thus, the reproduction of the wave speed by the model is an important prerequisite for the reproduction of the state dynamics.
An important issue, only briefly touched upon in this paper, is the tuning of model parameters to account for features of the data. Particularly relevant for this dataset is the ability to reproduce alternans, both in the duration (APD) and the amplitude (APA) of excitations. While APD alternans are readily reproducible with the Fenton–Karma model, APA alternans are not a common feature of this model. Modifications to the dynamics or model parameters to reproduce APA alternans in the Fenton–Karma model could be devised but are outside the scope of the present work.
Model error forms a dominant contribution to the reconstruction error on the surfaces and has a significant effect on the interior spread of the ensemble and the confidence in the reconstruction of the interior. The CRPS may give an overly optimistic estimate of the confidence in the reconstruction accuracy in the interior due to synchronization from the periodicity of the observations over time. Further methods of assessing reconstruction performance without recourse to observations are necessary.
Our data assimilation scheme, while effective for reconstructing the state from model-derived observations with the same distribution pattern as in the present study, struggles with the experimental data used in this work. We suspect that this difficulty reflects a fundamental difference between our previous three-dimensional reconstruction efforts and the current dataset: the non-autonomous nature of the observed dynamics. Data assimilation experiments with experimental data showcasing autonomous dynamics—i.e., a spiral or scroll wave—would be an interesting and informative extension of this study.
In conclusion, we have found that we can reliably reconstruct the surface dynamics of a driven excitable state from experimental observations on the surfaces, but that reliable reconstruction of the interior requires further efforts to understand our dataset and how it interacts with the dynamics of excitable models. We have found that while the inclusion of explicit modeling of the stimulation current or of stochastic effects may yield subtle improvements to the reconstruction, the coupling of the LETKF, the observations, and the model is sufficiently close that it does not correct the leading errors in the reconstruction of this dataset. On the surfaces, these errors are dominated by the poor fit of the model to the excitation wavefronts and wavebacks; in the interior, by the lack of propagation of information from the surfaces, which leaves the interior a fount of uncertain excitation. Likewise, we have determined that incorporating additional modeling insight into the features of the observations, whether through the identification of a coherent wave pattern used to constrain the uncertainty of the interior dynamics or through modeling of the uncertainty associated with the observations, is subtle and produces new challenges; at this initial stage, these additions do not appear to improve on the low-information approach.
SUPPLEMENTARY MATERIAL
We have included two video files in the supplementary material. The first, auto_ensmeans.mp4, displays the dynamics of $\bar{u}^{b}(t,x,y,z)$ and $\bar{u}^{a}(t,x,y,z)$ for $z/d = 0, 0.2, 0.4, 0.6, 0.8, 1.0$ and emphasizes the difference due to the assimilation, $\bar{u}^{a} - \bar{u}^{b}$, and the observations $u^{o}$ for the autonomous model experiment. The second, auto_b.mp4, displays the dynamics of $\bar{u}^{b}(t,x,y,z)$, $\bar{v}^{b}(t,x,y,z)$, and $\bar{w}^{b}(t,x,y,z)$ over time for $z/d = 0, 0.2, 0.4, 0.6, 0.8, 1.0$ for the autonomous model experiment. We have also included a short supplement containing results for a number of experiments referenced throughout the text.
ACKNOWLEDGMENTS
We thank Alessio Gizzi for providing access to the data used for this study. M.J.H. and E.M.C. were supported by the NSF under Grant Nos. CMMI-2011280 and CMMI-1762803. F.H.F. was supported by the NSF under Grant No. CMMI-1762553. C.D.M., F.H.F., and E.M.C. were supported by the NIH under Grant No. 1R01HL143450-01. This research was supported, in part, through research infrastructure resources and services provided by the Partnership for an Advanced Computing Environment (PACE) at the Georgia Institute of Technology, Atlanta, Georgia, USA.44 This work has likewise made use of the Hamilton HPC Service of Durham University. All figures in this report were generated using the Makie.jl45 software package.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
C. D. Marcotte: Conceptualization (lead); Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (equal); Project administration (supporting); Resources (equal); Software (equal); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). M. J. Hoffman: Conceptualization (equal); Funding acquisition (equal); Methodology (equal); Project administration (supporting); Software (equal); Supervision (supporting); Validation (supporting); Writing – review & editing (supporting). F. H. Fenton: Conceptualization (supporting); Funding acquisition (lead); Investigation (supporting); Methodology (supporting); Software (supporting); Supervision (supporting); Writing – review & editing (supporting). E. M. Cherry: Conceptualization (lead); Funding acquisition (lead); Methodology (equal); Project administration (lead); Resources (equal); Software (equal); Supervision (lead); Validation (supporting); Writing – review & editing (equal).
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.