On learning latent dynamics of the AUG plasma state



I. INTRODUCTION
We investigate the use of state representation learning to model the time evolution of plasmas at ASDEX-Upgrade (AUG). As reviewed in Ref. 1, state representation learning (SRL) focuses on learning low-dimensional features of an environment that evolve in time and are influenced by actions. An SRL model posits a system's state at a given time, s_t, with observations, o_t, which are noisy measurements of the state. The state evolves under the influence of actions, leading to future states s_{t+1}, which can again be measured, o_{t+1}. In this work, we consider AUG to be the environment in which actions, a_t ∈ A, are taken at a time step t, where A is the action space. At AUG, actions are 'machine control parameters', such as plasma current, magnetic field strength, gas puffing rate, etc. A change in machine parameters induces a change in the plasma state, s_t to s_{t+1}. Full information about the true plasma state is not accessible, but diagnostic systems, such as Thomson scattering and reflectometry, provide partial and noisy observations, o_t ∈ O, of the plasma state.
The goal of this work is to investigate methods to learn a useful representation of the AUG plasma state, s_t ∈ S, with which we can then learn a forward model, p(s_{t+1} | s_t, a_t), to predict the evolution and dynamics of the plasma state. In this work, a useful state representation is defined as one which represents the high-dimensional observations in a state that conforms to the actual degrees of freedom of the system, i.e., a state that retains the information content of the observations. Compressed or lower-dimensional state representations are desirable for control 2,3, predictive inference 4,5, and interpretation 6, among other uses. Ideally, such representations contain the essential aspects of the system. In practice, this is hard to enforce; thus, the overarching question we seek to address is: what constitutes the objective function that learns a useful state?

a) Electronic mail: adam.kit@helsinki.fi
In this work, a variational autoencoder 7 (VAE) is used to learn a state representation of electron density and temperature profile measurements of AUG plasmas. The state representation and machine parameters are used to train a forward model to predict the dynamical evolution of the state. The end result is a model that can predict the dynamical evolution of the electron density and temperature profiles directly from a sequence of machine parameters (Fig. 1).

II. DATASET
The dataset used in this analysis consists of 1000 high-confinement mode (H-mode) discharges that are non-disruptive, deuterium-fuelled, and without impurities. For each pulse, observations are outer-mid-plane electron profiles of density, n_e, and temperature, T_e, and actions are machine control parameters.
The electron profiles are obtained from the integrated data analysis (IDA) framework 8 at AUG, which applies Bayesian probability theory to fit a spline to core and edge measurements originating from lithium beams, electron cyclotron emission, Thomson scattering and interferometry.
The following machine control parameters were selected: total plasma current, I_P; safety factor magnitude at the 95% flux surface, q_95; total deuterium injection rate, D_TOT; plasma major radius, R; plasma elongation, κ; upper triangularity, δ_u; lower triangularity, δ_l; aspect ratio, A = R/a (where R and a are the major and minor radii of the plasma); and total heating power normalized by the Martin L-H threshold scaling 9, P_TOT/P_LH. The motivation for using the Martin scaling instead of just P_TOT is that P_TOT/P_LH is a parameter normalized with respect to plasma scenarios and can be applied to any device size used in the scaling. The chosen machine parameters are considered to be 'controllable' even though not all are strictly knobs on the tokamak; e.g., I_P is not the current through the central solenoid but is a quantity achieved through actuation of controllable parameters (the current through the central solenoid). The same holds for the parameters related to the plasma shape, even though they are all reconstructed values of the plasma state (δ_l, δ_u, A, R and κ). The machine parameters are linearly interpolated in time with respect to the IDA measurement frequency (1 kHz) in order to homogenize the sampling frequencies of actions and observations. Then, the combined set of observations and actions is downsampled in time, which transforms the time-step frequency from 1 kHz to 200 Hz, i.e., observations and actions are selected every 5 ms as opposed to the 1 ms true sampling interval. This is done in part because the observations and actions chosen in this work tend to have low variability within 5 ms intervals.
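The interpolation and downsampling steps above can be sketched numerically. This is a toy illustration under assumed shapes and names (not the authors' codebase): a machine parameter recorded on an irregular time base is linearly interpolated onto the 1 kHz IDA grid, then downsampled by a factor of 5 to 200 Hz.

```python
import numpy as np

# Hypothetical resampling sketch: align an action trace to the 1 kHz IDA
# time base, then keep every 5th sample (one per 5 ms -> 200 Hz).
ida_time = np.arange(0.0, 2.0, 1e-3)                    # 1 kHz IDA time base [s]
raw_time = np.linspace(0.0, 2.0, 157)                   # irregular action time base
raw_ip = 0.8e6 + 1.0e4 * np.sin(2 * np.pi * raw_time)   # e.g., plasma current [A]

ip_1khz = np.interp(ida_time, raw_time, raw_ip)         # homogenize sampling rates
ip_200hz = ip_1khz[::5]                                 # every 5th sample -> 200 Hz
time_200hz = ida_time[::5]                              # 5 ms spacing
```

The same stride-5 selection would be applied to the observation arrays so that actions and observations share one time grid.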
The data is split into training, validation and testing subsets, consisting of 853 (∼5500 real-time seconds), 137, and 137 discharges, respectively. Additionally, discharges coming from the same shot request are binned into the same subset, which helps ensure that the training set does not include discharges similar to those in the validation and testing sets. The observations and actions are z-score normalized via the mean and standard deviation of the training set.
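The normalization step above can be sketched as follows; the shapes are illustrative, and the key point is that the statistics come from the training subset only, so nothing leaks from the validation/testing sets.

```python
import numpy as np

# Toy z-score normalization: compute mean/std on the training subset,
# then reuse those statistics for every subset.
rng = np.random.default_rng(0)
train = rng.normal(5.0, 3.0, size=(853, 4))   # e.g., 853 discharges, 4 actions
test = rng.normal(5.0, 3.0, size=(137, 4))    # held-out discharges

mu, sigma = train.mean(axis=0), train.std(axis=0)
train_z = (train - mu) / sigma                # z-scored training set
test_z = (test - mu) / sigma                  # same statistics reused
```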

III. MODEL
The model follows the 'World Modeling' approach first proposed in Ref. 10; here an observational model is used to compress measurements at a given time into a latent distribution, s_i, and a forward model evolves this distribution into the future, s_j, where j > i. Following more recent advances in World Modeling 2,11, we additionally train the observational and forward models in one computational graph. Finally, a physics prior is introduced to guide the representation to be more physically informative.
The goal of the observational model is to learn a function that can reconstruct observations from a given state, i.e., to learn ϕ(θ_ϕ): o_t → s_t and π(θ_π): s_t → ô_t, where ϕ has parameters θ_ϕ. We assume that these relations are not deterministic; therefore, we treat these functions as probability distributions. To do so, we employ a VAE, a probabilistic generative model consisting of an encoder (ϕ) and a decoder (π) distribution, each parameterized by a neural network. The distributions are learned by minimizing the reconstruction error (L2 norm) between observations, o_t, and their reconstructions, ô_t, in combination with a regularizing term (Kullback-Leibler divergence) on a prior belief about the encoder distribution, ϕ_θϕ(s_t | o_t), resulting in our implementation of the VAE objective function:

L_obs = E_ϕ,π[ ||o_t − ô_t||_2 ] + D_KL( ϕ_θϕ(s_t | o_t) || p(s_t) ),

where p(s_t) = N(0, 1) is our prior belief about the state distribution, and the expectation E_ϕ,π is estimated using the re-parameterization trick 7. The prior belief about the state distribution is strictly a design choice to allow for an unconstrained model; future work will include identifying physics-based distributions as priors. The effect of different norms for the reconstruction loss was not explored in this work due to the efficiency of the L2 norm. The architecture of the observational model is given in Table I.
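The two terms of the observational objective can be evaluated numerically. The sketch below uses stand-in linear maps rather than the paper's convolutional networks, a diagonal-Gaussian encoder with the closed-form KL to the standard-normal prior, and the re-parameterization trick for sampling; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over state dims.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def vae_loss(o_t, enc_w, dec_w):
    mu, log_var = o_t @ enc_w, np.full(8, -1.0)   # toy encoder: mean + fixed log-variance
    eps = rng.standard_normal(mu.shape)
    s_t = mu + np.exp(0.5 * log_var) * eps        # re-parameterization trick
    o_hat = s_t @ dec_w                           # toy decoder
    recon = np.sum((o_t - o_hat) ** 2)            # L2 reconstruction term
    return recon + kl_to_standard_normal(mu, log_var)

o_t = rng.standard_normal(400)                    # one flattened profile observation
enc_w = rng.standard_normal((400, 8)) * 0.01      # stand-in encoder weights
dec_w = rng.standard_normal((8, 400)) * 0.01      # stand-in decoder weights
loss = vae_loss(o_t, enc_w, dec_w)
```

In a real implementation, enc_w and dec_w would be replaced by the convolutional encoder/decoder of Table I, trained by gradient descent on this loss.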
The goal of the forward model is to predict a future state s_{t+1} from the previous state s_t and the machine control parameters a_t, i.e., to learn the mapping f(θ_f): s_t, a_t → ŝ_{t+1}. Since the state s_t is a distribution, the output of the forward model, ŝ_{t+1}, is the set of parameters of a probability distribution, also parameterized by a neural network, from which ŝ_{t+1} is sampled. To match s_{t+1} with ŝ_{t+1}, the following objective function is used:

L_f = D_KL( f_θf(ŝ_{t+1} | s_t, a_t) || ϕ_θϕ(s_{t+1} | o_{t+1}) ).

Similar to the observational model, the forward model outputs the parameters of a distribution; therefore, the last layers parameterize the mean and standard deviation.
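Since both the forward model prediction and the encoded next state are diagonal Gaussians here, the forward-model objective has a closed form. A minimal sketch, with toy parameter values in place of network outputs:

```python
import numpy as np

def kl_diag_gaussians(mu_p, log_var_p, mu_q, log_var_q):
    # Closed-form KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal
    # Gaussians, summed over the state dimensions.
    var_p, var_q = np.exp(log_var_p), np.exp(log_var_q)
    return 0.5 * np.sum(
        log_var_q - log_var_p + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

# Toy forward-model output for s_hat_{t+1} vs. encoder output for s_{t+1}.
mu_hat, log_var_hat = np.zeros(8), np.full(8, -2.0)
mu_enc, log_var_enc = 0.1 * np.ones(8), np.full(8, -2.0)
l_f = kl_diag_gaussians(mu_hat, log_var_hat, mu_enc, log_var_enc)
```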

Model Component | Layer(s) | Activation
State to State | Dense(8+9, 20); Dense(20, 8), Dense(20, 8) | None

As in the observational model, the Kullback-Leibler divergence is used to match the predicted and encoded state distributions. The architecture of the forward model is given in Table II.
Together, a prediction of a future state, s_{t+1}, is made by first encoding o_t to s_t, then transitioning from s_t to s_{t+1} (Fig. 2). In this fashion, the forward model and observational model are trained simultaneously by minimizing the following objective:

L = L_obs + L_f,

with the relative weights of the individual terms given in Table III. It is worth noting here that by allowing gradients to flow from L_f back through to the encoder of the observational model, the learned state representation is expected to retain properties that facilitate state dynamics prediction by the forward model.
Additionally, two forms of regularization are added to the model: i) a penalty on violation of static pressure conservation in reconstruction, ∝ ||n_t T_t − n̂_t T̂_t||_1, and ii) the pushforward trick from Ref. 12, where the forward model predicts ŝ_{t+i}, with i > 1, using the previous forward model prediction of s_{t+i−1}. Ideally, the pressure penalty encourages the observational model to encode physically consistent electron density and temperature reconstructions; it was first explored in Ref. 4. The pressure penalty is very similar to the reconstruction error; however, we believe the pressure penalty helps to regularize the predictions of the density and temperature with respect to fluctuations around a pressure value. For example, if the reconstruction error is 0, then the pressure error is also 0. Yet, if the reconstruction error is non-zero, then the pressure error could be ≥ 0, i.e., the fluctuations in temperature and density may even out to yield zero pressure error. If the model is wrong, we would rather the model learn to be wrong in this way. The pushforward trick 12 aims to stabilize auto-regressive models in long-range planning. During training, the number of time steps to roll out, i, is determined per mini-batch by sampling from a uniform distribution U[0, N], where N is the number of epochs trained thus far. The loss is only calculated between the final rollout state, ŝ_{t+i}, and the corresponding state s_{t+i}. In other words, we cut the gradients in the unrolling stage.
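The two regularizers above can be sketched with stand-in linear dynamics (names and shapes are ours, not the paper's): (i) the pressure penalty as an L1 difference of n_e*T_e products, and (ii) the pushforward rollout with the loss taken only on the final state. In a deep-learning framework, the intermediate rollout states would additionally be detached so that gradients are cut.

```python
import numpy as np

rng = np.random.default_rng(1)

# (i) static pressure penalty ~ || n*T - n_hat*T_hat ||_1
n, t = rng.uniform(1, 5, 200), rng.uniform(0.1, 2, 200)      # true profiles
n_hat, t_hat = n + 0.1 * rng.standard_normal(200), t.copy()  # reconstructions
pressure_penalty = np.sum(np.abs(n * t - n_hat * t_hat))

# (ii) pushforward rollout: i ~ U[0, N], loss only on the final state
def step(s, a):
    return 0.9 * s + 0.1 * a                  # toy forward model

epochs_so_far = 12                            # N: epochs trained thus far
i = int(rng.integers(0, epochs_so_far + 1))   # rollout length for this batch
s = rng.standard_normal(8)
for a in rng.standard_normal((i, 8)):
    s = step(s, a)                            # intermediate steps: gradients cut
s_target = rng.standard_normal(8)
final_loss = np.sum((s - s_target) ** 2)      # loss on the final state only
```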
Since the space of observations O is constrained to R+, the output of the observational decoder is clamped to positive real values during training.

IV. RESULTS

A. Observational Model
The quality of the observational model can be determined by comparing the observation with the reconstruction (Fig. 3). For the test-set discharges, the mean absolute error (MAE) between reconstruction and observation has a mean of 0.28 ± 0.13 (10^19 m^-3) and 0.11 ± 0.07 (keV) for density and temperature, respectively. These lossy-compression results are expected, as the state space is only 8-dimensional while the observational space has 400 points per time step. Increasing the state-space dimensionality would likely yield lower reconstruction error. The average reconstruction error (MAE) of the test-set discharges for ρ = 0.0, 0.5, 0.9, 1.0 is given in Table IV.
We find the reconstruction quality of the observational model to be sufficiently accurate to proceed with the forward model.

B. Forward Model
To determine the predictive quality of the forward model on AUG discharges, we first encode the observations at t = 0 into a state s_0 via the observational model; the forward model then rolls s_0 out to the final time step using the true actions and its own predictions of ŝ_{t>0}. Each ŝ_t is then decoded into profiles via the observational decoder. The MAE of all time steps over all discharges for the forward model is 0.97 ± 0.55 (10^19 m^-3) and 0.35 ± 0.24 (keV) for density and temperature, respectively. The average reconstruction error (MAE) of the test-set discharges for ρ = 0.0, 0.5, 0.9, 1.0 is given in Table V. The average percentage reconstruction error (MAPE) is given in Table VI. Both the MAE and MAPE are reported due to the large radial variation in the magnitude of density and temperature, i.e., at ρ > 1.0 density and temperature are relatively low compared to ρ = 0.0. The mean accumulation of error does not rapidly increase over time for test-set discharges (Fig. 4).
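The evaluation procedure above can be sketched with stand-in linear maps in place of the trained networks (all names and shapes are illustrative): encode the first observation to s_0, roll the state forward with the true actions, decode every predicted state, and score the result with the MAE.

```python
import numpy as np

rng = np.random.default_rng(3)
enc = rng.standard_normal((400, 8)) * 0.05    # stand-in encoder (mean only)
dec = rng.standard_normal((8, 400)) * 0.05    # stand-in decoder

def forward(s, a):
    return 0.95 * s + 0.05 * a                # stand-in latent dynamics

obs = rng.standard_normal((50, 400))          # 50 time steps of profile data
acts = rng.standard_normal((49, 8))           # true actions between steps

s = obs[0] @ enc                              # encode o_0 -> s_0
preds = [s @ dec]                             # decode the initial state
for a in acts:
    s = forward(s, a)                         # autoregressive rollout
    preds.append(s @ dec)                     # decode each predicted state
preds = np.asarray(preds)
mae = np.mean(np.abs(preds - obs))            # mean absolute error
```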
The forward model is able to capture the profile evolution of various discharges, for example, a plasma scenario with feedback density control (Fig. 5), as well as a step-wise power ramp-up (Fig. 6).

C. Forward Model with Auxiliary Regressor
Expanding on previous work 4, we further regularize the state representation by learning a mapping r(θ_r): s_t → (P_TOT/P_LH, τ_E), where P_TOT/P_LH is the aforementioned normalized power action and τ_E is the global confinement time. Inspired by Ref. 6, we split s_t into two subspaces, s_c,t and s_/c,t, and apply the mapping only to s_c,t. The dimensionality of s_c,t is 2, one dimension for each regressed variable, and the dimensionality of s_/c,t is kept at 6 so that the total dimensionality of s_t is preserved from the previous experiment. The mapping r(θ_r) is made linear with respect to the regressed variables P_TOT/P_LH and τ_E, i.e., a diagonal matrix that maps one dimension of s_c,t to P_TOT/P_LH and the remaining dimension to τ_E. The original loss function L then gains the following additional L1 loss term:

L_r = ||a_c,t − â_c,t||_1,

where a_c,t and â_c,t are the true regressed variables and their reconstructions, respectively. With the additional regressor, the model can infer the confinement time and power variable from an observed state s_t. The observed state can be encoded either by the observational model or predicted, as before, by the forward model (Fig. 7).
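The subspace split and diagonal mapping above can be sketched as follows; all numeric values are illustrative stand-ins, not fitted parameters.

```python
import numpy as np

# Toy auxiliary regressor: split the 8-D state into s_c (2 dims, regressed)
# and s_/c (6 dims, free), and apply a diagonal linear map r to s_c only.
s_t = np.array([2.1, 0.05, -0.3, 1.2, 0.7, -0.9, 0.2, 0.4])
s_c, s_nc = s_t[:2], s_t[2:]                 # regressed / free subspaces

r_diag = np.array([0.5, 1.3])                # toy diagonal entries of r
a_hat = r_diag * s_c                         # (P_TOT/P_LH, tau_E) predictions
a_true = np.array([1.1, 0.07])               # true regressed variables

l1_aux = np.sum(np.abs(a_true - a_hat))      # additional L1 loss term L_r
```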
While the reconstructed value of τ_E tends to be higher than the true value, the reconstructed P_TOT/P_LH values are quite close to the true observations. The elevated τ_E predictions might be caused by biases originating from the beginning and end of the plasma discharges and will be investigated in future studies. The main message of this proof-of-principle work is to demonstrate the attachment of semantically meaningful information about the plasma state to the trained state representation with the auxiliary regression modules.
Since one dimension of s_c,t is linear with respect to P_TOT/P_LH, we obtain a simplified H-mode classifier without additional training. Assuming that P_TOT/P_LH is sufficiently accurate in quantifying the presence of H-mode, i.e., a plasma is in H-mode when P_TOT/P_LH ≥ 1, and that the auxiliary regressor has sufficient predictive quality, then via the linear mapping r there exists an equivalent threshold, s_c,t > H_thresh, within the relevant power dimension of s_c,t (Fig. 8).
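A toy version of this implicit classifier: assuming a positive diagonal coefficient mapping the power dimension of s_c,t to P_TOT/P_LH (the value here is illustrative; in Fig. 8 the relevant state range is negative), the condition P_TOT/P_LH ≥ 1 translates into an equivalent threshold in state space.

```python
import numpy as np

r_power = 0.5                        # toy diagonal entry of the linear map r
h_thresh = 1.0 / r_power             # state value where inferred P_TOT/P_LH = 1

def is_hmode(s_c1):
    # Classify directly in the latent space via the equivalent threshold.
    return s_c1 >= h_thresh

# Thresholding the state is equivalent to thresholding the regressed power:
s_vals = np.array([1.0, 2.0, 3.0])
equivalent = all(is_hmode(s) == (r_power * s >= 1.0) for s in s_vals)
```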
However, the additional objective comes at a cost, as we observed small penalties on the reconstructive quality of the forward and observational models (Fig. 9). This is likely because the two dimensions are no longer free to compress information pertaining only to profile reconstruction. It is likely that increasing the size of the state would resolve this.

D. Model Limitations
The forward model is limited by the type and distribution of the machine parameters that are provided. For example, as only the total heating power is provided, any discharge where the input power mixture is varied mid-shot is subject to relatively large prediction errors (Fig. 10). Another example is strong tungsten accumulation in the plasma, as observed in some of the actively cooled divertor experiments (Fig. 11). Since the actions selected in this work do not reflect the corresponding variations seen in these discharges, the forward model mispredicts the resulting profiles.

V. DISCUSSION
In this work, we have demonstrated the utility of state representation learning for learning the plasma state at ASDEX-Upgrade. Our proposed model is able to predict the electron density and temperature profiles from machine parameters only. Additionally, we demonstrate the functionality of learning a state representation by incorporating a simplified H-mode classifier into the model while retaining the ability to predict the time evolution of plasma profiles.

FIG. 5. AUG #34828 comparison of true (top) and forward model predicted (middle) electron density and temperature profiles, plotted along with time traces of the radial values at ρ = 0.0, 0.5 and 0.9 (bottom). All rows in the left column are associated with density, the right with temperature. The solid/dotted grid lines in the top/middle plots correspond to the solid/dotted traces in the bottom plot. The initial state s_0 is sampled from the encoder of the observational model given the profiles at t_0. The initial state is then propagated in time alongside actions and the previous state prediction via the forward model. The density prediction is worse at the beginning of the pulse, likely due to the fluctuating feedback of the gas flow rate, but eventually stabilizes to the true value. The resulting reconstructions of the predicted states from the forward model demonstrate the capability to handle sufficiently complex plasma scenarios. Respective errors of the density and temperature at ρ = 0.0, 0.5, 0.9, 1.0 are given in Table V.
The forward model developed in this work has a very limited capacity (Table II). It might be useful to improve the forward model's capacity, e.g., via a recurrent neural network as demonstrated in Ref. 2. An alternative approach was demonstrated in Refs. 16 and 17, where a deep-learning variation on Kalman filters is learned by sampling the transition between states with a VAE. Also, our approach of using a linear transform for time-stepping latent representations has some commonalities with the work in Ref. 18, where Koopman operator theory is used to guide autoencoders into learning Koopman eigenfunctions from data, i.e., the latent space has globally linear dynamics. A key difference is that we predict the mean and standard deviation of a distribution, which we sample at each time step. Here, perhaps, there is a connection to latent neural stochastic differential equation models 19 (NSDEs), where our autoregressive formulation can be considered a crude discretization of such an NSDE.
The cumulative error plot (Fig. 4) is in some sense troubling. We expected to see a step-wise accumulation of error, as with traditional forward model predictors. Interestingly, it appears that the observational model encodes information about the machine parameters, even though this information is only propagated through the forward model. As a result, when the plasma reaches flat-top, the forward model likely pulls the state prediction to a previously seen steady-state representation. We believe this has to do with verifiability 20, and it appears to be both a feature and a bug. An open question is then to what extent the latent dynamic model simply learns to propagate the state through a series of steady states rather than (ideally) predicting the temporal evolution of the plasma. Along this line, we hypothesise that one could learn a steady-state model within a DIVA/CCVAE-like framework 6,21, as in Ref. 4, treat all time slices as steady state, and ultimately forgo the forward model. Future studies will investigate steady-state and dynamical models within the context of verifiability.

FIG. 6. AUG #36022 comparison of true (top) and forward model predicted (middle) electron density and temperature profiles, plotted along with time traces of the radial values at ρ = 0.0, 0.5 and 0.9 (bottom). All rows in the left column are associated with density, the right with temperature. The solid/dotted grid lines in the top/middle plots correspond to the solid/dotted traces in the bottom plot. The initial state s_0 is sampled from the encoder of the observational model given the profiles at t_0, and is then propagated in time alongside actions and the previous state prediction via the forward model. In this discharge, the total power input is increased step-wise, leading to a step-wise increase in core electron temperature, which matches the reconstructions of the predicted state. Respective errors of the density and temperature at ρ = 0.0, 0.5, 0.9, 1.0 are given in Table V.
Take, for example, at a given time, observations of the wall and plasma-facing components in comparison to observations of the core. If we observe tungsten in the edge via spectroscopic measurements 22, this accumulation is not immediately seen in the core. Thus, an open question is how to reconcile the differing time scales of different-order phenomena in tokamaks.
A limitation of our model is that it is data-driven and generative. Outside of constraining the output of the decoder to R+, we do not enforce that the model predicts 'physically valid' plasmas. It is of interest, then, to examine how we may constrain the representation to be physically valid.
Future work will explore including MHD stability and instability information in the state. Such a model could provide time-of-flight information on whether a plasma crosses a stability threshold, and if so, possibly which instability may be triggered.

The authors would like to extend thanks to Professor Satoshi Hamaguchi for hosting the ICDDPS-4 conference, where this work was presented as an oral contribution. A.K. would like to thank Ms. Green for fruitful discussions and artistic inspiration throughout the progress of this work.
A.K. would like to thank Ivan Zaitsev and Kostantinos Papadakis for energetic conversations surrounding the development of this work.
The authors would also like to thank the reviewers for improving the content of the paper. Additionally, the authors would like to thank Jakub Tomczak for introducing the chilling concept of verifiability to us.

DATA AVAILABILITY STATEMENT
The experimental data used for training the deep learning models in this work is stored at the data storage facilities of ASDEX-Upgrade, and the authors do not have permission to make this data publicly available. However, the training data preparation routines will be provided on request, such that anyone with access to the data can regenerate the training dataset. The Python codes encompassing the deep learning algorithms are available on GitHub at https://github.com/DIGIfusion/

FIG. 10. In AUG #36150, the power starts as mainly NBI and ECR driven; however, at 4.5 seconds, NBI is rapidly cut off and supplemented with ICR. Since P_TOT/P_LH is the only power variable available to the forward model, it observes a steady stream of power, which does not induce major changes in the inferred plasma state. It is likely that including separate variables for each power parameter would increase the resilience of the model to similar discharges. Respective errors of the density and temperature at ρ = 0.0, 0.5, 0.9, 1.0 are given in Table V. Predictions obtained from the model without the auxiliary regressor.

FIG. 1 .
FIG. 1. A representation of the plasma state at ASDEX-Upgrade. On the left, a 2D cross section of the plasma with various flux surfaces labeled by their flux surface coordinate ρ. The confined region of the plasma spans from the core (ρ = 0.0) to the separatrix (ρ = 1.0). Machine parameters related to the shape of the plasma are labeled: the upper/lower triangularity, δ_u/l, and the major and minor radii, R, a. On the right, observations of the electron density and temperature are visualized. The main plasma kinetic profiles are typically remapped to the outer mid-plane (location corresponding to the colorbar on the left). Flux surfaces move during the course of the discharge, as does the magnitude of the electron profiles; we seek to model the dynamics of the kinetic profiles in this work.

FIG. 2 .
FIG. 2. Graphical representation of the full model. Electron profiles are encoded via the observational model to the state s_t. To predict s_{t+1}, the forward model takes s_t and actions a_t, here the plasma current I_P and gas puff rate Γ_D. The observational model can be used to decode the state to retrieve the kinetic profiles.

FIG. 3 .
FIG. 3. Observational model's reconstruction of AUG discharge #36150. The left and right figures show the density and temperature profile evolutions, respectively. The top and bottom plots show the true profiles and the model reconstruction, respectively. The x and y axes on all figures are the same. The reconstruction error is similar to that of the average over the test-set discharges, as the MAE of the density and temperature profiles averaged over this discharge are 0.32 ± 0.29 (10^19 m^-3) and 0.12 ± 0.12 (keV), respectively. Mean and standard deviation of the errors are calculated over 100 sample reconstructions. Respective errors at ρ = 0.0, 0.5, 0.9, 1.0 are given in Table IV.

FIG. 4 .
FIG. 4. The test-set forward model error (MAPE) as a function of time. The error per step is calculated as the average over the density and temperature profiles up to ρ ≤ 1.0 for that step. The reason for the radial cutoff is the very low values of temperature and density at ρ > 1.0. The spikes at the beginning and end are likely due to the discharge entering and exiting H-mode, where the density and temperature change rapidly and are therefore difficult to match precisely.

FIG. 7 .
FIG. 7. The predicted and true time traces of τ_E (top) and P_TOT/P_LH (bottom) from AUG #34814. The predictions of the observational model (Obs.) are obtained by encoding observations to s_t and applying the auxiliary mapping. The predictions of the forward model (Forw.) are obtained by encoding the first observation to an initial state, i.e., o_0 → s_0, then rolling out with the forward model until the last action.

FIG. 8 .
FIG. 8. Top: the time trace of state dimension s_c1,t as encoded by the forward model using the actions of AUG #34814. The horizontal line (vertical on the colorbar) marks the value of s_c1,t which corresponds to an inferred P_TOT/P_LH = 1. The coloring is found by applying the auxiliary regressor to the range s_c1,t ∈ [−10, 0]. Due to the linear capacity of the auxiliary mapping, the output of the mapping on s_c1,t over the interval [−10, 0] does not change in time, nor does it change with respect to any other state variable. Like Ref. 14, we arrive at a model that can predict different regimes, albeit in a very different fashion. Bottom: the time trace of P_TOT/P_LH for AUG #34814, with a horizontal line marking where P_TOT/P_LH = 1.

FIG. 9 .
FIG. 9. Even with the auxiliary regressor, the state representation and forward model are still able to capture the complex time evolution of AUG discharges. The true density and temperature time traces at various ρ are opaque to show the slight differences from the predicted time traces.
FIG. 11. AUG #36669: the top two plots are the true and forward model predicted temperature profiles, and the remaining plots are the corresponding actions for the pulse. The y-axis for the action figures gives the units, and the corresponding name is given within each plot. In AUG #36669, the device is configured to test actively cooled divertor plates, and accumulation of impurities leads to an increase in core radiated power 15, dropping the core temperature. The actions supplied to the model do not sufficiently encode this information, and the model does not predict the temperature decrease. It is likely that including additional actions that are more correlated with detached/attached plasmas would increase the resilience of the model, such as those used in Ref. 5. Respective errors of the density and temperature at ρ = 0.0, 0.5, 0.9, 1.0 are given in Table V. Predictions obtained from the model without the auxiliary regressor.

TABLE I .
Observational model architecture. The observations are 1D profiles with two channels, the density and temperature. The 1D convolution and 1D transposed convolution layers are denoted as Conv. and Transp. Conv., respectively. The parameters of the convolution are denoted as (in channels=i, out channels=o, kernel width=k, stride=s), i.e., a convolution layer with 2 in channels, 4 out channels, a kernel width of 4 and a stride of 2 would be denoted as Conv.(2, 4, 4, 2). The Encoder to State layer has two components, denoting the mean and standard deviation of the latent variable.

TABLE II .
Forward model architecture. The initial layer of the forward model has size 8 (state size) + 9 (action size) = 17.

TABLE III .
Objective weights and training parameters used for the SRL model. All weights are scalar multipliers applied to their corresponding objective value per mini-batch update. KL_obs is applied to the KL term in L_obs. KL_f is applied to the KL term in L_f. L1_ot is applied to the L1 term of L_obs. L1_pt is applied to the L1 pressure penalty.

TABLE IV .
The MAE of the observational model's reconstructions for various ρ. The MAE for AUG #36150 (Fig. 3) is provided to compare with the average over the test-set discharges. Standard deviations for all values are calculated over 100 sample reconstructions, given the injection of noise in the VAE.

TABLE V .
The MAE of the forward model's reconstructions for various ρ. The MAE of the AUG discharges visualized in this work is provided for comparison with the average over the test-set discharges. Standard deviations for all values are calculated over 100 sample reconstructions.

TABLE VI .
The mean absolute percentage error (MAPE) of the forward model's reconstructions for various ρ. The MAPE is calculated as the L1 difference between the predicted and true values, divided by the true value. The MAPE of the AUG discharges visualized in this work is provided for comparison with the average over the test-set discharges. Standard deviations for all values are calculated over 100 sample reconstructions. Large deviations (MAPE > 100%) in the edge are expected, as the temperature and density tend to be relatively low there (n_e < 10^18 m^-3, T_e < 140 eV).
We believe an important question is what information, and at what frequency, is needed to predict future plasma states. Additional questions arise: assuming the true plasma state is Markovian, as we suspect, what observables are necessary to capture it? Also, if we can approximate the true state sufficiently well in a low-dimensional representation, then a) what observations are used to learn such a state, and b) what actions are needed to (accurately) propagate that state in time?

ACKNOWLEDGMENTS
This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No. 101052200, EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This work made use of the Finnish CSC computing infrastructure under project #2005083. The work of A.E.J. and A.K. was partially supported by the Research Council of Finland, grant no. 355460.