Aligning pedestal models and the associated magnetohydrodynamic codes with experimental data is an important challenge for generating predictions for future devices, e.g., ITER. Previous efforts to calibrate unknown model parameters have largely been manual. In this paper, we construct a framework for the automatic calibration of JOREK. More formally, we reformulate the calibration problem as a black-box optimization task by defining a measure of the discrepancy between an experiment and a reference quantity. As this discrepancy relies on JOREK simulations, the objective is computationally intensive and, hence, we resort to batch Bayesian optimization methodology to allow for efficient, gradient-free optimization. We apply this methodology to two different test cases with different discrepancies and show that the calibration is achievable.
I. INTRODUCTION
Scientific research to demonstrate the viability of fusion energy as a source of electricity at industrial scales is progressing with the construction of the international device ITER^{1} as well as domestic programs like STEP in the UK.^{2} To maximize the performance of fusion devices, the loss of energy from the core plasma must be minimized. Edge localized modes (ELMs) are a type of instability occurring at the edge transport barrier of the plasma in tokamak devices.^{3} ELMs are a cyclic instability that releases energy from the edge in the form of plasma filaments aligned to the magnetic field.^{4} This energy travels along magnetic field lines, outside the confined plasma, toward wall components of the device. Experimental evidence and theoretical work support the idea that these events can be characterized using magnetohydrodynamic (MHD) equations and, more specifically, viewed as a peeling–ballooning instability.^{5} These equations serve as a basis for modeling tokamak plasma instabilities. In recent years, various nonlinear codes^{6–11} for solving these equations have been developed; among these, we focus on JOREK.^{7} These codes are able to replicate complex physical dynamics, such as two-fluid diamagnetic rotation at low resistivity and viscosity levels, as well as at high poloidal–toroidal resolutions.^{12,13}
Efforts have been devoted to calibrating JOREK simulations to be as close as possible to reality, with the aim of generating outputs that are quantitatively consistent with experimental observations.^{12,14} Energy losses, ELM intensity and duration, pressure in the pedestal region, divertor heat fluxes, and kinetic energy are some of the quantities under scrutiny when comparing simulations and real experiments. As expected, calibrating simulations to be experimentally relevant depends strongly on the judicious selection of simulation parameters, which are not always directly observed and, therefore, have to be inferred from experiments. Prior to calibration, parameter ranges must first be identified. These are typically selected to reflect physical feasibility based on theory and prior knowledge elicited from previous experiments.^{14} However, parameter ranges might also be restricted because some parameter values may lead to numerical instabilities when performing the simulation. This often gives rise to a challenging trade-off, as the most experimentally relevant values of certain parameters, such as resistivity or viscosity, lie close to or within the region of numerical instability. This trade-off can be mitigated by allocating more computational budget to allow simulations to be run at a higher fidelity.
A. Goals and contributions of this work

Running a JOREK simulation is an extremely computationally intensive process. In addition to computational cost, simulations might fail and need to be restarted,^{14} and gradients with respect to input parameters are not available. These characteristics are, therefore, inherited by the discrepancy, as its computation directly involves the evaluation of JOREK.

Due to the complexity of the simulation underpinning JOREK, gradients of the discrepancy are not readily computed. Since direct evaluation of gradients is not possible, derivatives would need to be estimated numerically. This operation would involve running multiple JOREK simulations per estimate, making any standard gradient-based minimization strategy infeasible for calibration.
It is, therefore, necessary to rely on gradient-free methods, such as Bayesian optimization (BO). For this calibration problem, a novel batch BO (BBO) method^{15} is deployed and generalized by introducing a probabilistic extension of the acquisition function and using particle gradient flows (PGFs). Sections II and III present more details on the JOREK simulations and subsequently on the BO methodology used. We then present results from testing the calibration on different JOREK models, describing the construction of the discrepancy function to which the BBO framework was applied.
II. JOREK
JOREK is a nonlinear MHD code developed as a versatile tool to study instabilities of tokamak plasmas and their control.^{6,7,16} More specifically, it was created to gain more knowledge on ELMs, and in recent years qualitative and quantitative comparisons between JOREK simulations and experiments have been explored.^{6,12,13}
The model used in the work presented here^{17} focuses on five variables: the poloidal magnetic flux ψ, the electric potential $\Phi$, the parallel velocity $v_\parallel$, the mass density ρ, and the temperature T. It is a so-called reduced MHD model, as it assumes that the perpendicular velocity lies in the poloidal plane and that the toroidal magnetic field is constant in time. The code allows for a grid that realistically represents the tokamak, with a geometry that includes an X-point and the divertor and a particular focus on the scrape-off layer and other areas of interest for ELMs. For a more complete description of the reduced MHD equations and numerical schemes of the JOREK code, the reader may refer to the main JOREK reference.^{6}
In the use cases presented here, two reduced models are considered, in which the parallel velocity $v_\parallel$ is either included or excluded. The first use case focuses on examples of tearing mode instabilities without parallel velocity, while the second addresses simulations of ELMs in X-point geometry, using a representative equilibrium and including the parallel velocity. Section III describes the BBO framework used to calibrate JOREK simulations.
III. METHODOLOGY
Bayesian optimization (BO) methods^{18} are a class of general-purpose optimization algorithms aimed at optimizing black-box functions, i.e., functions that are computationally expensive to evaluate and for which ancillary information, such as gradients, is not readily available.
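To illustrate the basic BO loop described above, the sketch below fits a zero-mean Gaussian process surrogate to past evaluations of a toy objective and selects the next point with a lower-confidence-bound rule on a grid. The kernel, acquisition rule, and objective here are our own minimal illustrative choices, not the ones used for JOREK.

```python
# Minimal single-point BO sketch: GP surrogate + lower-confidence-bound acquisition.
# Everything here (RBF kernel, LCB rule, toy objective) is illustrative only.
import numpy as np

def rbf_kernel(a, b, ell=0.3):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at x_query."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return mean, np.maximum(var, 1e-12)

def next_point(x_train, y_train, grid, beta=2.0):
    """Lower-confidence-bound acquisition: minimize mean - beta * std."""
    mean, var = gp_posterior(x_train, y_train, grid)
    return grid[np.argmin(mean - beta * np.sqrt(var))]

objective = lambda x: (x - 0.6) ** 2           # toy stand-in for the discrepancy
x = np.array([0.1, 0.5, 0.9])
y = objective(x)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(5):                              # sequential BO iterations
    x_new = next_point(x, y, grid)
    x, y = np.append(x, x_new), np.append(y, objective(x_new))
best = x[np.argmin(y)]
```

The expensive JOREK discrepancy plays the role of `objective` in the real setting; the batch variant discussed below replaces the single `next_point` call with the proposal of N points at once.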
Bayesian methods have been applied to fusion plasma physics for model calibration before. For example, this framework has been used in conjunction with approximate Bayesian computation (ABC) for the calibration of the fluid-kinetic model DREAM,^{21} with a specific focus on runaway electrons,^{22} another leading cause of concern for the integrity and duty cycle of future fusion reactors. In addition, we refer the reader to another approach, Validation via Iterative Training of Active Learning Surrogates (VITALS), which the authors used to study L-mode plasmas in the Alcator C-Mod tokamak and for the validation of the quasilinear transport models TGLF-SAT1 and TGLF-SAT0.^{23}
A. Batch BO through particle gradient flows
Batch Bayesian optimization (BBO) methods propose a batch of multiple evaluation points at each step. This is advantageous in settings where evaluations can be run in parallel across several processors. While this enables the acquisition of a larger amount of information per step, it also increases the complexity of the inner optimization loop, as the acquisition function must be adjusted for this setting. At each step of a batch BO method, we seek a batch of N points at which to evaluate the optimization target; ideally, these function evaluations are run in parallel to be as efficient as possible. More formally, assume we have already made n evaluations of the objective function f, collected in the dataset $D_n$. Differently from the standard BO approach, the batch BO framework selects the next batch of evaluation points $\{x_{n+1}, \dots, x_{n+N}\}$.
In this framework, the batch consists of a collection of non-independent samples from the distribution $\mu_\alpha^*$, which is obtained by using particle gradient flows, a set of methods that define strategies to propagate an initial density so that it converges asymptotically to a distribution of interest. Two specific gradient flows for (8) are derived in the original paper,^{15} one based on the Stein geometry^{28} and another using the Wasserstein geometry.^{29}
Here, ε represents a step size parameter for the optimization and α is the regularization parameter introduced in Eq. (8). The quantity $\Phi : \mathbb{R}^{N \times d} \to \mathbb{R}^{N \times d}$ is an update step, which can be defined according to either the Stein geometry or the Wasserstein geometry; for the full derivation, we refer to the original paper.^{15} The set of particles $x_T \in \mathbb{R}^{N \times d}$, where N indicates the ensemble size used to approximate the optimal distribution $\mu_\alpha^*$ in Eq. (8), is the batch that can be evaluated in parallel.
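To give a concrete sense of how a particle gradient flow propagates an ensemble, the following is a minimal Stein variational gradient descent sketch in one dimension, targeting a standard Gaussian. This is only an analogue of the Stein-geometry flow discussed above: the actual method targets the acquisition-derived distribution $\mu_\alpha^*$ of Eq. (8) with the update $\Phi$ of the original paper, neither of which is reproduced here.

```python
# Minimal SVGD sketch: each particle is pulled toward high density by a
# kernel-weighted score term and pushed apart by a repulsive kernel-gradient
# term. Target, kernel bandwidth, and step size are illustrative choices.
import numpy as np

def svgd_step(x, grad_log_p, eps=0.05, h=0.5):
    """One SVGD update for a 1-D particle ensemble x."""
    diff = x[:, None] - x[None, :]              # diff[j, i] = x_j - x_i
    k = np.exp(-0.5 * (diff / h) ** 2)          # RBF kernel matrix k(x_j, x_i)
    grad_k = -diff / h ** 2 * k                 # gradient of k in its first slot
    phi = (k * grad_log_p(x)[:, None] + grad_k).mean(axis=0)
    return x + eps * phi

rng = np.random.default_rng(0)
x = rng.uniform(3.0, 5.0, size=50)              # ensemble starts far from target
for _ in range(500):
    x = svgd_step(x, lambda z: -z)              # standard-normal score: -z
```

After the iterations, the ensemble approximates the target: the particles drift toward its mode while the repulsive term keeps them spread out rather than collapsing onto a single point.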
All the components needed to perform the calibration of JOREK are now available. The steps for the calibration are listed in Algorithm 1. The iterative procedure is straightforward: given some information on the discrepancy $d_{x_o}$, the BO algorithm selects a set of parameters $\theta_1, \dots, \theta_N$, which are then used to run JOREK simulations. Once a simulation finishes and the related data are produced, our knowledge of the discrepancy function is updated. This cycle is repeated until a satisfactory match between reference and simulation is obtained. In general, different stopping criteria for the outer loop can be implemented; here, a fixed integer I sets the number of iterations of the BO selection process.
Algorithm 1. Automatic calibration of JOREK via batch BO.

Input: dataset $D_n = \{\theta_i, d_{x_o}(J(\theta_i))\}_{i=1}^{n}$, batch size N, and budget of BO iterations I.
Output: updated dataset $D_{n+I \cdot N}$.
1: for $i = 1, \dots, I$ do
2:   Fit a GP to $D_n$;
3:   Run BBO via PGF conditioned on $D_n$ using Eq. (9), obtaining a set $\{\theta'_1, \dots, \theta'_N\}$ of locations minimizing $d_{x_o}$;
4:   Evaluate JOREK in parallel, $J(\theta'_j)$ for $j = 1, \dots, N$;
5:   $D_n \leftarrow D_n \cup \{\theta'_j, d_{x_o}(J(\theta'_j))\}_{j=1}^{N}$;
6: end for
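The loop structure of Algorithm 1 can be sketched as a runnable skeleton in which the expensive pieces are mocked: `jorek` stands in for the simulator J, `discrepancy` for $d_{x_o}$, and `propose_batch` replaces the BBO-via-PGF step with uniform random sampling. Every name, the mock model, and all numbers are illustrative, not part of the actual framework.

```python
# Skeleton of Algorithm 1 with cheap stand-ins for the expensive components.
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1e-6, 1e-7])                    # "ground-truth" parameters

def jorek(theta):
    """Mock simulator J: returns a tiny synthetic time series."""
    t = np.linspace(0.0, 1.0, 50)
    return np.exp(-t * 1e-6 / theta[0]) * theta[1] * 1e7

def discrepancy(output, reference):
    """Mock d_xo: L2 distance between simulated and reference outputs."""
    return float(np.linalg.norm(output - reference))

def propose_batch(data, bounds, n_batch):
    """Placeholder for the PGF-based batch proposal (step 3): uniform sampling."""
    lo, hi = bounds
    return rng.uniform(lo, hi, size=(n_batch, len(lo)))

reference = jorek(theta_true)
bounds = (np.array([1e-7, 1e-8]), np.array([2e-6, 2e-7]))
data = []                                              # dataset D_n of (theta, d) pairs
for i in range(10):                                    # I outer BO iterations
    batch = propose_batch(data, bounds, n_batch=5)     # step 3 (mocked)
    for theta in batch:                                # step 4: parallel in practice
        data.append((theta, discrepancy(jorek(theta), reference)))  # step 5
best_theta, best_d = min(data, key=lambda td: td[1])
```

In the real framework, the inner `for theta in batch` loop is dispatched as parallel JOREK runs on an HPC cluster, and `propose_batch` is replaced by the GP fit plus the particle-gradient-flow optimization of the acquisition.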
IV. EXPERIMENTAL RESULTS AND TESTS
This section presents the two JOREK use cases run with the reduced MHD model, namely, the tearing mode instability without parallel velocity and the ELM simulation with parallel velocity. Both examples have been presented in previous JOREK studies.^{6} A discrepancy is defined for the optimization, representing the difference between the reference data and the simulated data, which needs to be reduced as much as possible to make the two outputs match. For all examples, a reference experiment is generated from JOREK itself, and the BO algorithm is then used to trace back the exact parameters that generated it. For these examples, the focus was on the following input parameters:

η: Resistivity at plasma center.

μ: Viscosity at plasma center.
For the tearing mode use case, we consider the problem of calibrating the above two parameters. We additionally demonstrate a more complex case in which we calibrate two additional input parameters: $D_\perp$ (the perpendicular particle diffusion) and $\kappa_\perp$ (the perpendicular heat diffusion).
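The discrepancies used below are defined per use case; as a generic, hedged example of the kind of measure involved, a relative L2 distance between a simulated and a reference trace could look as follows (the traces below are synthetic placeholders, not JOREK output).

```python
# Illustrative relative-L2 discrepancy between two traces. The exact
# discrepancies used in the paper (Eqs. (11) and (12)) are use-case specific.
import numpy as np

def l2_discrepancy(sim, ref):
    """Relative L2 distance between a simulated and a reference trace."""
    sim, ref = np.asarray(sim, float), np.asarray(ref, float)
    return float(np.linalg.norm(sim - ref) / np.linalg.norm(ref))

t = np.linspace(0.0, 1.0, 100)
reference = np.exp(-t)            # placeholder reference trace
perturbed = np.exp(-1.1 * t)      # trace from a slightly-off parameter
```

A discrepancy of zero corresponds to a perfect match, and a simulation with parameters farther from the reference set produces a larger value, which is what the BO machinery minimizes.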
For a more detailed overview of the BBO method we use to calibrate JOREK, and the initial setup of each of the examples, we refer the reader to Sec. IV C. More specifically, we specify the initial exploratory simulations performed in Table II and provide more details on the parameters used for the BBO routine in Table III.
A. Tearing mode use case
As a first test case, the automatic tuning of a tearing mode in a large aspect-ratio circular plasma is considered. More information on this tokamak core plasma instability, along with an example, can be found in Hoelzl et al.^{6} (e.g., Fig. 18 of that paper). To test the approach, reference data from a JOREK simulation are used for a specific set of "ground-truth" parameters. The objective is to recover these parameters through the calibration process. The JOREK simulations are run for a total of 150 000 normalized time units, which corresponds to approximately 75 ms. Each simulation takes around 4 min to complete. At each BO step, a batch of ten JOREK simulations is evaluated in parallel over a distributed HPC cluster. Two calibration problems are performed: a two-variable problem over $\theta = (\eta, \mu)$ (example 1) and a more complex one over four parameters, $\theta = (\eta, \mu, D_\perp, \kappa_\perp)$ (example 2).
Table I. Reference parameters, calibrated parameters, and number of BO iterations I for each example.

          | Reference: η | μ | $D_\perp$ | $\kappa_\perp$ | Calibrated: η | μ | $D_\perp$ | $\kappa_\perp$ | I
Example 1 | 1 × 10^{−6} | 1 × 10^{−7} | ⋯ | ⋯ | 10.01 × 10^{−7} | 9.59 × 10^{−8} | ⋯ | ⋯ | 5
Example 2 | 1 × 10^{−6} | 1 × 10^{−7} | 1 × 10^{−7} | 1 × 10^{−7} | 9.90 × 10^{−7} | 8.86 × 10^{−8} | 6.93 × 10^{−8} | 17.51 × 10^{−8} | 30
Example 3 | 1 × 10^{−6} | 1 × 10^{−6} | ⋯ | ⋯ | 10.08 × 10^{−7} | 10.16 × 10^{−7} | ⋯ | ⋯ | 11
Example 4 | 1 × 10^{−6} | 1 × 10^{−6} | ⋯ | ⋯ | 10.64 × 10^{−7} | 10.29 × 10^{−7} | ⋯ | ⋯ | 10
B. ELM use case
To demonstrate the efficacy of the approach, the problem of calibrating an ELM simulation is addressed. For more information regarding this example, please refer to Hoelzl et al.^{6} (e.g., Sec. 5 of the paper). The reduced MHD model with parallel velocity used here is able to capture the peeling–ballooning dynamics of ELMs with finite toroidal mode numbers, and provide a measure of the heat fluxes incident on the divertor targets [at the bottom of the domain as shown in Fig. 3(d)] as a result of the ELM crash.
As before, a reference experiment with a fixed set of parameters η and μ is run and compared with simulations. These ELM simulations must first evolve the toroidally axisymmetric equilibrium before continuing with a finite toroidal mode number. A total of 325 steps with varying time lengths are run, with each simulation taking between 1 and 2 h to complete. As above, at each step of the BO procedure, a batch of ten values of θ is selected for use in new simulations. The input parameters η and μ are calibrated using the time series of the kinetic energy of the ballooning modes. The results of this calibration are shown in Fig. 4, with more details on the optimization available in Table I. The figures clearly demonstrate that the batch BO algorithm successfully calibrated the parameters.
Using the heat fluxes in this way makes the discrepancy more sensitive to the nonlinear phase and to the bursts of energy onto the divertor itself. This is clear from Fig. 3(e): the onset of the peeling–ballooning instability has a larger magnitude than the linear phase, which was not as distinguishable when using the kinetic and magnetic energies. In addition, this is directly relevant to routine research on ELM simulations with the JOREK code, since quantitative comparison against infrared measurements of the divertor heat fluxes is often used as a means to validate simulations and the choice of unknown input parameters, such as μ.^{12} Results of the calibration are available in Fig. 5, and more details are presented in Table I (example 4).
C. Details on experiments
In this section, we present more details on the BO algorithm deployed. Before starting the BBO procedure, we need to collect some information on the discrepancy; more formally, we need to run initial exploratory JOREK simulations. Table II lists the parameter values of the initial simulations for each example. Using these starting simulations, we can proceed with the BO procedure. First, we implement a Gaussian process prior using a Matérn-5/2 kernel, with the code implemented using the jax library.^{30} As explained in Sec. III A, the pivotal part of BO is the inner optimization loop. The initial particle locations, namely $x_0$, are sampled randomly from a domain whose bounds are specified for each experiment in Table IV. Finally, we choose T by controlling the norm of $\Phi$: we let the chain run until the particles no longer move. For each of the four examples, we specify which gradient flow we use and for how many steps T we run the inner-loop optimization, alongside ε and α; these quantities are reported in Table III.
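The Matérn-5/2 kernel used for the GP prior has a standard closed form; the NumPy sketch below reproduces it (the actual implementation uses the jax library, and the hyperparameter defaults here are illustrative only).

```python
# Standard Matern-5/2 covariance:
#   k(r) = variance * (1 + s + s^2/3) * exp(-s),  s = sqrt(5) * r / lengthscale.
import numpy as np

def matern52(x, y, lengthscale=1.0, variance=1.0):
    """Matern-5/2 covariance between point sets of shape (n, d) and (m, d)."""
    r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) / lengthscale
    s = np.sqrt(5.0) * r
    return variance * (1.0 + s + s ** 2 / 3.0) * np.exp(-s)
```

This kernel yields twice-differentiable GP sample paths, a common compromise between the very smooth squared-exponential kernel and rougher Matérn variants when modeling simulation-driven objectives.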
Table II. Parameter values of the initial exploratory JOREK simulations for each example.

          | η | μ | $D_\perp$ | $\kappa_\perp$
Example 1 | 1 × 10^{−7} | 1 × 10^{−8} | ⋯ | ⋯
          | 1 × 10^{−7} | 20 × 10^{−8} | ⋯ | ⋯
Example 2 | 8.352 × 10^{−7} | 4.765 × 10^{−8} | 4.838 × 10^{−8} | 12.160 × 10^{−8}
          | 14.116 × 10^{−7} | 8.269 × 10^{−8} | 16.058 × 10^{−8} | 8.447 × 10^{−8}
          | 2.336 × 10^{−7} | 13.990 × 10^{−8} | 6.185 × 10^{−8} | 11.314 × 10^{−8}
          | 10.835 × 10^{−7} | 11.247 × 10^{−8} | 13.615 × 10^{−8} | 18.687 × 10^{−8}
          | 13.748 × 10^{−7} | 15.517 × 10^{−8} | 0.371 × 10^{−8} | 17.731 × 10^{−8}
Example 3 | 10 × 10^{−7} | 60 × 10^{−7} | ⋯ | ⋯
Example 4 | 10 × 10^{−7} | 60 × 10^{−7} | ⋯ | ⋯
Table III. Settings of the BBO routine for each example: outer BO iterations, step size ε, regularization α, inner-loop steps T, and particle gradient flow (PGF) used.

          | Iterations | ε | α | T | PGF
Example 1 | 1 | 0.01 | 0.001 | 3000 | Stein
          | 2–5 | | | 5000 |
Example 2 | 1–10 | 0.01 | 5 × 10^{−4} | 10 000 | Wasserstein
          | 10–30 | | 1 × 10^{−4} | |
Example 3 | 1 | 0.01 | 0.001 | 10 000 | Wasserstein
          | 2–11 | | | 15 000 |
Example 4 | 1, 2 | 0.01 | 0.01 | 12 500 | Wasserstein
          | 3 | | 0.001 | |
          | 4–10 | | 5 × 10^{−4} | |
The most influential parameter controlling the precision of the BBO algorithm is the regularization α. This term controls how spread out the batch produced by the inner-loop maximization is, i.e., the actual parameters used for the JOREK simulations: the lower it is, the less diverse the particles are. Generally, we fix it to a relatively high value at the beginning and let it decrease as the calibration progresses. We would like the BBO method to explore more at the beginning of the experiment, in order to retrieve a general approximation of the discrepancy. Once we have a general landscape for the underlying discrepancy, we can exploit the collected information more aggressively by letting points get close to each other. This strategy was used in the more difficult examples, i.e., examples 2 and 4, whereas for the simpler cases it was not necessary.

Another way to enforce exploitation is to restrict the bounds over which we sample the starting points $x_0$ for the chain. The bounds over which we perform the optimization can be found in Table IV. This strategy was used for example 4, where at iteration 8 we restricted the domain to $(5, 40) \times (0, 70) \times 10^{-7}$, as we observed local minima only in that area. Similarly, for example 3, we changed the domain to $(5, 20) \times (5, 20) \times 10^{-7}$ at iteration 9. Conversely, in order to promote exploration of the space, we add a white noise kernel to the GP kernel. In this way, especially when the function to optimize presents a plateau, particles are allowed to move more freely in an area of low variation thanks to the added noise. We use this strategy in example 3: the discrepancy (11) in this case presents a large area that produces simulations similar to the reference one.
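The iteration-dependent α values in Table III amount to simple step schedules over the outer BO iterations. For instance, example 4's schedule (α = 0.01 for iterations 1–2, 0.001 for iteration 3, and 5 × 10^{−4} afterward) can be written as a small step function; the function name is ours.

```python
def alpha_schedule_example4(i):
    """Regularization alpha at outer BO iteration i (1-indexed), per Table III."""
    if i <= 2:
        return 1e-2      # explore: high alpha keeps the batch diverse
    if i == 3:
        return 1e-3      # begin tightening around promising regions
    return 5e-4          # exploit: low alpha lets particles cluster
```

Encoding the schedule this way makes the explore-then-exploit trade-off explicit and reproducible across runs.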
Table IV. Bounds of the optimization domain for each example.

          | η | μ | $D_\perp$ | $\kappa_\perp$
Example 1 | (0, 20) × 10^{−7} | (0, 20) × 10^{−8} | ⋯ | ⋯
Example 2 | (0, 20) × 10^{−7} | (0, 20) × 10^{−8} | (0, 20) × 10^{−8} | (0, 20) × 10^{−8}
Example 3 | (0, 40) × 10^{−7} | (0, 40) × 10^{−7} | ⋯ | ⋯
Example 4 | (0, 80) × 10^{−7} | (0, 80) × 10^{−7} | ⋯ | ⋯
V. DISCUSSION ON CALIBRATION
The test cases presented, while relying on JOREK simulations rather than real data, offer interesting insights into the feasibility of automatically calibrating JOREK to experimental data. While the tearing mode use case is straightforward in the two-variable setting, the example over four variables required more JOREK evaluations to reach satisfactory results. As shown in Table I, 30 BO iterations were required for the higher-dimensional example to reach a satisfactory result, whereas only 5 BO iterations sufficed in the two-variable setting to replicate the reference data. As the full parameter space for JOREK is substantially larger than the one explored in this demonstrative study, a full parametric calibration of ELM simulations is expected to require substantial computational resources. As demonstrated in the last example, this can be mitigated by carefully constructing loss functions tailored to the QoIs under consideration. Additionally, leveraging alternative surrogate models that scale better to high-dimensional problems would also help address this challenge. The examples presented here also showed that most parameters do not affect the solution in the same way and that the loss function is highly nonconvex, with many areas that can generate JOREK solutions with characteristics similar to the reference. This issue of identifiability and uniqueness of the optima is exacerbated in the ELM use case, especially when we consider the new discrepancy that makes use of the energy profiles, defined in Eq. (12). This is particularly evident in Fig. 6, where the contour of the discrepancy is plotted by interpolating the points selected by the calibration framework. The algorithm not only correctly identifies the area where the minimum lies but also explores a larger local minimum, which provides suboptimal solutions that are nevertheless close to the reference experiment.
Once again, the construction of more sophisticated loss functions can play a big role in addressing this issue.
VI. CONCLUSION AND FUTURE WORK
In conclusion, in this paper, we have defined a framework for the automatic tuning of the expensive nonlinear MHD code JOREK. We have highlighted difficulties and challenges related to this task through four test cases, in which we used simulations from JOREK itself as reference experiments. BO approaches are a natural way of dealing with expensive models for the purpose of calibration, as they replace the unobserved and possibly noisy evaluations with a surrogate model that is easy to work with. We leave the tuning with respect to real experimental data for future work, as we expect the difficulties to be exacerbated in this extension. Experimental errors and the quality of the data could significantly affect the outcome of the proposed optimization procedure, and the quantification of these terms in relation to our methodology is left as a subject of further study. Despite this, the surrogate model approach underlying the BO framework provides a flexible way to incorporate any possible experimental error in the data. Gaussian processes can account for uncertainties quantitatively: by modifying the kernel function of choice, one can encode various assumptions about the available data. In addition, more realistic cases will require the deployment of more complex and expensive simulation models, while focusing on a much larger parameter space. We believe that extending our BO approach to more sophisticated surrogate models and carefully designing more informed discrepancy functions can help address all these concerns, which will require a separate and extensive study. Finally, while the BO method allows us to find a point estimate for the calibration problem, investigating alternative methods that faithfully reconstruct the whole discrepancy could help address the problem of the uniqueness of the solution.
As an example, reframing the calibration of JOREK as a Bayesian inverse problem targeting a full posterior distribution over parameters, quantifying the probability that a given parameter set can replicate an observed output, would circumvent some of the problems related to suboptimal solutions while allowing for better interpretability of the role of each input parameter.
ACKNOWLEDGMENTS
This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No. 101052200—EUROfusion). The views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This work was partially carried out using EUROfusion High Performance Computer MarconiFusion through EUROfusion funding. A.D. was supported by Wave 1 of The UKRI Strategic Priorities Fund under EPSRC Grant No. EP/T001569/1 and EPSRC Grant No. EP/W006022/1, particularly the “Ecosystems of Digital Twins” theme within those grants and The Alan Turing Institute.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
E. Crovini: Conceptualization (lead); Writing – original draft (lead). S. J. P. Pamela: Conceptualization (supporting); Supervision (equal); Writing – review & editing (equal). A. B. Duncan: Conceptualization (supporting); Supervision (equal); Writing – review & editing (equal).
DATA AVAILABILITY
The data that support the findings of this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.10391738, Ref. 31.