This paper provides a concise overview of the mini-conference on Digital Twins for Fusion Research held during the American Physical Society Division of Plasma Physics conference in Atlanta, GA, from October 7 to October 11, 2024, which convened experts from national laboratories, universities, and industry. The mini-conference focused on the promises and challenges of developing a digital twin for fusion to stimulate discussion and share ideas that can accelerate the pace of scientific discovery. Presentations showcased a rapidly growing ecosystem of physics-based simulations, data assimilation strategies, and artificial intelligence (AI) methods that together are moving digital twins from concept to operational reality. Participants emphasized the growing need for robust uncertainty quantification, standardized data formats, and open interfaces that merge legacy codes, large-scale simulations, and real-time diagnostics. Panel discussions converged on a common vision of digital twins as living platforms, continuously ingesting new experimental data, refining predictive accuracy, and guiding both present operations and future device designs. While significant technical challenges remain, intelligent digital twins represent a paradigm shift in how the community approaches fusion research, operations, and engineering. By unifying high-performance computing, AI, integrated modeling, data assimilation, and advanced control, digital twins are set to become indispensable tools in achieving the long-sought goal of commercial fusion energy.

Fusion energy stands at the forefront of humanity's quest for sustainable, clean power, offering a potential solution to critical challenges of climate change and global energy demand. Recent breakthroughs in both inertial and magnetic confinement fusion have reignited enthusiasm for this transformative technology. The National Ignition Facility's (NIF's) demonstration of fusion ignition,1 and the sustained high-performance conditions in magnetic confinement systems,2 mark significant milestones in the field. These advancements have catalyzed unprecedented growth in the private fusion industry,3 with more than $7B invested in numerous startups racing to commercialize fusion energy.

Yet the complexity of fusion plasma physics and engineering integration presents formidable barriers to commercialization. The National Academies of Sciences, Engineering, and Medicine (NASEM) study on burning plasma science4 highlights several critical challenges in achieving commercially viable fusion energy. These include maintaining plasma confinement and stability, managing heat and particle exhaust, developing fusion-compatible materials, and integrating complex systems for continuous operation. Notably, the report emphasizes the need for accelerated learning cycles and improved predictive capabilities. Experimentation alone is slow and expensive, and even the most advanced integrated simulations often remain too slow or too inflexible for real-time or interactive use in scenario design and control.

Digital twins offer a transformative approach for merging the digital and physical realms, turning complex, real-time data into actionable insights that enhance decision-making, mitigate risks, and drive rapid innovation in fusion. Originating in the aerospace, automotive, and manufacturing sectors, digital twins have proven their value in supporting simulation-first design, predictive maintenance, and optimized performance.5–8 However, the dramatic advances in artificial intelligence (AI) have transformed the traditional passive digital twin into what today is described as an active intelligent digital twin (iDT).9 An iDT integrates AI and machine learning (ML) into traditional twin frameworks with unprecedented potential to accelerate scientific discovery and technological innovation. By creating high-fidelity virtual replicas of complex physical systems, iDTs can enable rapid iteration, optimization, and testing of ideas in a low-risk virtual environment, allowing researchers to explore vast multi-dimensional parameter spaces, identify complex interdependencies, and make data-driven decisions at a pace and scale not possible with traditional methods alone.

Translated into the fusion domain, iDTs have the potential to dramatically accelerate the development of commercial fusion energy by revolutionizing the design, testing, and optimization processes. They enable rapid, cost-effective iteration of reactor designs in a low-risk environment while integrating operations, maintenance, and control into the design process, potentially reducing cost and risk in developing and operating physical systems. These virtual environments allow for the exploration of vast parameter spaces and parallel testing of multiple design variants, thereby accelerating development cycles and increasing the number of design iterations. By integrating virtual operation into the design process, the iDT also enables early identification of potential design and operational issues. On existing fusion facilities, iDTs can open up new possibilities for AI-driven discoveries of novel operating conditions and fusion configurations, which can translate to improved reactor concepts. Conversely, existing fusion facilities are indispensable proving grounds for the development and validation of iDTs. They provide critical real-world data and operational feedback that allow digital twin models to be calibrated, refined, and benchmarked against actual performance. This in situ validation is essential to ensure that iDTs can accurately simulate complex plasma behaviors, integrate multi-physics phenomena, and ultimately serve as reliable platforms for guiding the design and operation of future fusion reactors. Collectively, these advantages could compress decades of traditional fusion development into years, significantly reducing overall costs and increasing the probability of success. This acceleration could be highly enabling for the commercial development of fusion energy and the realization of the ambitious timeline set forth in the United States government's bold decadal vision.10

The workshop was organized into three topical sections: Intelligent digital twins for fusion energy (Sec. II), High-fidelity integrated modeling frameworks (Sec. III), and AI/ML surrogates for speed and scalability (Sec. IV). Conclusions and an outlook for iDT development are presented in Sec. V.

Intelligent digital twins are more than advanced simulations. They represent bidirectional, always-on frameworks that continuously assimilate experimental data into physics-based and data-driven models, thereby maintaining a consistent, predictive representation of the physical system's state. Two essential characteristics define iDTs: (1) real-time (or near-real-time) synchronization with the physical asset via diagnostics and control systems and (2) the ability to forecast system evolution, identify emergent behaviors before they manifest physically, and recommend control actions or design modifications. Intelligent digital twins are not static tools but are living infrastructures that adapt to evolving technology, accumulate collective knowledge, and become invaluable assets in the fusion commercialization pathway. Indeed, a useful way to differentiate iDTs from traditional integrated modeling is that the iDT operates autonomously or semi-autonomously by learning from new data and updating the digital asset while providing recommendations that inform design and operational decisions.

In fusion, iDTs must reconcile multiple layers of complexity:

  • Multi-physics integration: The plasma core is governed by nonlinear magnetohydrodynamics (MHD), turbulent transport, energetic particle dynamics, and reaction kinetics. Further out, the scrape-off layer (SOL) and divertor region involve plasma–neutral interactions, impurity seeding, and plasma–wall interactions. Coupled to these are engineering systems, for example, superconducting magnets, blankets, heating and current drive systems, and pumping and fueling lines, each with its own control logic and constraints. The seamless integration of core-transport codes with SOL and divertor codes, impurity models, and advanced material response simulations to predict operational windows that ensure safe heat exhaust while maintaining performance is critical for a fully realized iDT.

  • Multi-scale complexity: The relevant spatial scales range from sub-millimeter boundary layers on the divertor surface to entire vacuum vessels. Temporal scales vary from microseconds for fast instabilities to seconds or longer for global equilibrium and heat balance times. Intelligent digital twins must handle this diversity of scales efficiently.

  • Predictive modeling and control: Intelligent digital twins must serve as copilots for operators, not just offline analysis tools. They should assist in experimental planning, scenario optimization, and disruption avoidance, and incorporate uncertainties in diagnostics and actuators. On longer time scales, the iDT must inform planning for experimental campaigns, upgrades, and new facilities.

  • Balance-of-plant integration: Coupling with coolant loops, tritium breeding blankets, and power conversion systems is needed to understand full-plant dynamics. The iDT could anticipate maintenance schedules, test new component designs, and optimize plant availability.

  • Continuous evolution and self-improvement: As more experimental data accumulates, and as high-performance computing (HPC) and ML techniques advance, the iDT continually refines its models, reduces uncertainties, and expands its applicability. With Leadership Computing-scale AI initiatives like FASST,11 models can draw on massive curated data resources to build foundation models for fusion systems, akin to large-language models but for fusion plasma physics and engineering.

  • Ecosystem-level integration: Fleet-level iDTs could learn collectively from multiple fusion facilities. Shared AI models trained on aggregated operational data would allow each reactor's twin to benefit from insights gained at others, accelerating learning and reducing downtime across the entire research landscape.

D. P. Schissel (General Atomics) presented a talk entitled “Introduction—APS DPP Mini-Conference: Digital Twins for Fusion Research.” This presentation gave a brief summary of iDTs and reviewed the goals and organization of the workshop. Speakers' presentations and panel discussions are summarized in this and the following two sections.

A. Davis (United Kingdom Atomic Energy Authority—UKAEA) presented a talk entitled “High Fidelity Digital Twins of Fusion Plant.” This presentation provided a brief overview of ongoing and planned iDT activities at UKAEA. These activities are driven by an urgent need to transition away from a test-based design to a simulation-first approach to engineering. Extreme-scale simulation and newly emerging AI methods offer a timely opportunity to address the challenges of designing the world's first fusion powerplants. It is now possible to perform high-fidelity simulations of complex, strongly coupled multi-physics systems-of-systems problems such as the tokamak. The advent of exascale computing offers the possibility that simulations may finally be powerful and actionable enough to preempt emergent phenomena that are typically only discovered once a plant becomes operational. At scale, by combining detailed computer-aided design (CAD) models that closely resemble real plants with fully coupled multi-physics models and access to supercomputers, a full-plant digital twin can be envisioned.

Several UKAEA facilities are planning iDTs. For example, presently under construction is the Combined Heating and Magnetic Research Apparatus (CHIMERA) facility that will perform integrated testing of liquid-metal breeding blanket concepts under electro- and thermomechanical loading. With CHIMERA in mind, high-fidelity direct numerical simulations of fluid flow and heat transfer have been undertaken on Frontier, in collaboration with Argonne National Laboratory, to stress-test turbulence models and to examine when approximations and low-order methods are adequate.

A MAST-U virtual tokamak is presently under construction based on the complete CAD datasets, which represent a geometry of ∼160 × 10⁶ elements. Some initial work was presented that included studying deflections under gravity as well as mechanical mode analysis based on vibrations.12 Utilizing Omniverse from Nvidia, an initial digital twin of a manufacturing cell has also been completed. These are all being used as test cases to assess the utility of high-fidelity digital twins and their promise for accelerating and de-risking commercial fusion. A key ingredient will be emulators such as generative adversarial networks, physics-informed neural networks, and other AI methods that can replace, augment, or accelerate the more traditional simulation tools.

T. Looby (Commonwealth Fusion Systems) presented a talk entitled “Digital Twin Development for SPARC and ARC at CFS.” This presentation showcased how CFS is using digital twins to rapidly iterate on SPARC and ARC designs, bypassing the need for extensive physical prototyping and thereby accelerating the timeline toward fusion energy.

Although much digital twin work is going on for SPARC, the presentation focused on digital twin workflows for the plasma-facing components (PFCs).13 Digital twin prototypes were shown for the first wall tile designs, divertor tiles, port plugs, and even an entire tokamak for nuclear heating. For all of these and more, entire assemblies of CAD components can be connected to plasma models to examine plasma material interaction and, more importantly, allow rapid iteration of the design. Digital twin prototypes need to be connected to an actual physical instance, and the design cycle (digital twin to physical object) was presented for a tungsten heavy alloy PFC. To catch discrepancies between ideal models and as-manufactured components, laser metrology is used to feed back into the concept CAD model so the digital twin has an as-built version.

Beyond design, digital twin prototypes are being used to examine component lifetime via operational budgets. The example shown was for hot-rolled tungsten, which, over its lifetime, can degrade via recrystallization. A lifetime recrystallization budget can be assigned, and a multi-physics simulation of heat loading on the component can determine if the component design will meet operational requirements using empirical SPARC tungsten data. Once SPARC is operational, thermocouple and IR camera measurements will yield temperature data, allowing for a calculation of recrystallization after each pulse. Thus, the digital twin becomes an operational tool as well.
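For concreteness, the budget bookkeeping can be sketched in a few lines. The sketch below assumes Johnson–Mehl–Avrami–Kolmogorov (JMAK) kinetics with an Arrhenius rate, one common way to model recrystallization; the actual CFS model was not given in the talk, and all parameter values here are purely illustrative.

```python
import numpy as np

# Hypothetical per-pulse recrystallization accounting (JMAK kinetics with an
# Arrhenius rate). A, Q, and n are placeholders, not SPARC tungsten data.
A = 1.0e10   # rate prefactor [1/s] (illustrative)
Q = 5.0e5    # activation energy [J/mol] (illustrative)
R = 8.314    # gas constant [J/(mol K)]
n = 2.0      # JMAK exponent (illustrative)

def advance_recrystallization(X, T_history, dt):
    """Advance the recrystallized fraction X through one pulse.

    T_history: surface temperatures [K] sampled every dt seconds, e.g., from
    thermocouple/IR data or a heat-loading simulation.
    """
    for T in T_history:
        k = A * np.exp(-Q / (R * T))                       # Arrhenius rate
        # Equivalent JMAK time for the current fraction, then add dt.
        t_eq = (-np.log(max(1.0 - X, 1e-12))) ** (1.0 / n) / k
        X = 1.0 - np.exp(-((k * (t_eq + dt)) ** n))
    return X

# Example: accumulate the fraction over pulses and compare to a budget.
X = 0.0
for pulse in range(100):
    T_pulse = 1400.0 + 200.0 * np.random.rand(50)          # mock trace [K]
    X = advance_recrystallization(X, T_pulse, dt=0.1)
print(f"recrystallized fraction after 100 pulses: {X:.3e} (budget: 0.20)")
```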

Later SPARC campaigns will be partially devoted to understanding the minimum set of diagnostics needed for ARC. The presentation showed a thermocouple example in which a digital twin with a 3D CAD model that simulates power exhaust, thermal diffusion, and heat transmission to the thermocouple can provide data when the thermocouple is turned off. Finally, looking into the future when many ARCs are running in parallel, a digital twin aggregate could result in substantial cost savings through the prediction of component failure and by examining performance trends to optimize efficiency.

R. M. Churchill (Princeton Plasma Physics Laboratory) presented a talk entitled “A Vision for Simulation-based, Multi-Fidelity Digital Twins in Fusion Energy.” This presentation emphasized how layered workflows, from fast, low-fidelity surrogates to high-fidelity HPC codes, can be seamlessly integrated and continuously updated with data, aligning with the iDT definition and laying the crucial groundwork for advanced frameworks and methods.

The presentation summarized initial work on a framework for building digital models of stellarators for design optimization and design verification. This activity focuses on a hierarchy of simulations, from low to high fidelity, combined in a workflow that models the plasma state consistently and includes full turbulence transport, neoclassical transport, fast ion transport, edge dynamics, and full MHD equilibrium solvers. The workflow then self-consistently examines how the plasma affects the engineering components for design optimization.

The issue highlighted was that design optimization requires many workflow runs, yet even some medium-fidelity transport simulations for Wendelstein 7-X could take 24 h. This is too long for an intensive optimization exercise and points to the importance of AI surrogate models for the most time-consuming parts of these workflows. The example presented was a surrogate model of a GX turbulence simulation that could accelerate the design process.14 Creating a fast digital twin will require linking experimental data to HPC resources, and it was highlighted that the DOE/SC Integrated Research Infrastructure (IRI) could be critical for this undertaking.

The technique called simulation-based inference (SBI) was introduced as a method to take digital models and effectively update them with the real experimental conditions. An example was presented that took thousands of UEDGE simulations of the plasma edge and trained a normalizing flow SBI neural network that could use experimental data as inputs and correctly predict the anomalous transport coefficients.15 The SBI technique can also be used in diagnostic interpretation and was successfully demonstrated with the DIII-D LLAMA diagnostic. The SBI methodology based on AI models enables fast Bayesian inference that can form the backbone for digital twins for integrating simulation and experiment, including uncertainty quantification.
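This pattern can be illustrated with the open-source sbi package (a minimal sketch, not the authors' code); the transport coefficients, diagnostic dimensions, and data below are placeholders for the UEDGE training set.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# theta: anomalous transport coefficients sampled for each UEDGE run;
# x: the corresponding synthetic edge measurements from those runs.
# Both tensors here are random stand-ins for the real simulation database.
n_runs, n_params, n_meas = 5000, 2, 16
theta = torch.rand(n_runs, n_params)     # e.g., normalized (D_anom, chi_anom)
x = torch.randn(n_runs, n_meas)          # mock synthetic diagnostics

prior = BoxUniform(low=torch.zeros(n_params), high=torch.ones(n_params))
inference = SNPE(prior=prior)            # trains a normalizing-flow posterior
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# At run time: condition on measured data to obtain a calibrated posterior
# over the transport coefficients, with uncertainty quantification built in.
x_experiment = torch.randn(n_meas)       # placeholder for measured profiles
samples = posterior.sample((1000,), x=x_experiment)
print(samples.mean(dim=0), samples.std(dim=0))
```

Once trained, the amortized posterior evaluates in milliseconds, which is what makes SBI attractive as a backbone for integrating simulation and experiment in a digital twin.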

M. Kostuk (General Atomics) presented a talk entitled “Leveraging a Digital Twin (DT) to Enhance DIII-D Facility Operations.” This presentation highlighted how a DT framework can improve operational efficiency, minimize experimental risk, and help avoid scenarios that may damage the facility. The digital twin is being developed to capture expertise, automate it, accelerate it, and make it more accessible, with the goal of creating a virtual ecosystem that surrounds the physical asset. A selection of recent experiments at the DIII-D tokamak was digitally recreated in high fidelity using as-built device data, demonstrating the impact on experimental decision-making using a digital twin.

The first investigation involved plasma pulses with neutral beam heating and high carbon impurities, utilizing a DT to identify the beam(s) creating the impurity source and avoiding the use of experimental run time to do so. The IonOrb code16 was used to simulate the re-ionization of injected neutral beam particles, tracking their trajectory and wall impact location using a GPU-based Boris pusher algorithm. The issue of CAD data vs as-built measurements (sub-millimeter laser scan data) was presented in the context of mixing data modalities to improve fidelity while maintaining completeness. Under certain conditions, particles can significantly heat interior hardware components, such as during a neutral beam injection without plasma for diagnostic calibration. It was found that up to 15–20 MW/m² could be deposited on the lower hybrid current drive antenna during such calibration shots. This is a significant result, as such heat loads are typically seen in the divertor. Additionally, a simulation of heating due to particle impact within a beam drift duct (located beyond the toroidal field coils) found good agreement with an observed hotspot exceeding nominal limits. To prevent future events, these results inform the development of improved operation limits and prediction of potential hazards.
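The particle-advance kernel at the heart of such trackers is compact. Below is a minimal NumPy version of the Boris push (IonOrb's implementation is GPU-based; the field and particle values here are illustrative):

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """One step of the Boris algorithm for a charged particle.

    Half electric kick, magnetic rotation, half electric kick; the rotation
    form conserves kinetic energy in a pure magnetic field, which makes the
    scheme robust for long trajectory integrations.
    """
    qm_dt2 = 0.5 * q * dt / m
    v_minus = v + qm_dt2 * E                  # first half electric kick
    t = qm_dt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # magnetic rotation
    v_new = v_plus + qm_dt2 * E               # second half electric kick
    return x + v_new * dt, v_new

# Illustrative usage: a deuterium ion gyrating in a uniform 2 T field.
q, m = 1.602e-19, 3.344e-27
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 2.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q, m, dt=1e-9)
print(np.linalg.norm(v))                      # speed conserved to round-off
```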

Present work centers around the DT to inform decision-making in the control room and for long-term vessel modifications to avoid hazardous operations. Collaboration with Argonne National Laboratory as part of the DIII-D IRI Pathfinder project17 involves running large, time-critical simulations on leadership-class computing resources. The broader success of the DIII-D DT requires predicting plasma equilibria with high accuracy and seamless integration of experimental and simulated data, both areas of active research.

Y. Morishita (Kyoto University) presented a talk entitled “Digital Twin Control for LHD Plasmas Based on Data Assimilation System ASTI.” This presentation illustrated how real-time diagnostic data can be assimilated to correct forecasts, guide control strategies, and reduce uncertainty, defining a key advantage of intelligent digital twins over conventional simulations. Furthermore, the operation of future fusion reactors requires nonlinear and multivariate control of fusion plasma behavior under conditions of limited measurement. To address this challenge, Y. Morishita presented an adaptive model predictive control system based on a data assimilation (DA) approach that has been developed and implemented in the Large Helical Device (LHD). The core of the DA-based control system, ASTI,18,19 is a numerical system based on the DA framework DACS and a digital twin of the LHD plasma provided by an integrated simulation code.

DA is a statistical method that estimates the state vector, which consists of the variables in a digital twin, from observation data, making the behavior of the digital twin track that of the actual system. The DA framework used in ASTI incorporates digital twin adaptation with real-time measurements and control estimation that is robust to model uncertainties in the digital twin and observations. ASTI computes many simulations in real time to predict the probability distribution of future plasma states. Based on Bayes' theorem applied to the predicted distribution, ASTI estimates the optimal control input and the actual plasma state.
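The talk did not spell out ASTI's update equations; as a minimal sketch of this class of ensemble Bayesian update, a stochastic ensemble Kalman filter analysis step (one common implementation, which may differ from ASTI's) is shown below, with all dimensions illustrative.

```python
import numpy as np

def enkf_analysis(X_f, y_obs, H, R):
    """Stochastic ensemble Kalman filter analysis step.

    X_f   : (n_state, n_ens) forecast ensemble from parallel simulations
    y_obs : (n_obs,) real-time measurement (e.g., Thomson scattering)
    H     : (n_obs, n_state) observation operator
    R     : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X_f.shape[1]
    anomalies = X_f - X_f.mean(axis=1, keepdims=True)
    P_HT = anomalies @ (H @ anomalies).T / (n_ens - 1)   # P H^T from ensemble
    S = H @ P_HT + R                                     # innovation covariance
    K = P_HT @ np.linalg.inv(S)                          # Kalman gain
    # Perturb observations so the analysis ensemble keeps the correct spread.
    Y = y_obs[:, None] + np.random.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n_ens).T
    return X_f + K @ (Y - H @ X_f)                       # analysis ensemble

# Toy usage: 100-member ensemble of a 10-variable state, 3 observed channels.
X_f = np.random.randn(10, 100)
H = np.zeros((3, 10)); H[0, 0] = H[1, 4] = H[2, 9] = 1.0
X_a = enkf_analysis(X_f, np.array([0.5, -0.2, 1.0]), H, 0.1 * np.eye(3))
```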

The DA-based control system implemented in LHD has been successfully applied to some control problems, such as central electron temperature control,20 radial profile control of electron temperature, simultaneous control of electron density and temperature, and simultaneous control of electron temperature and ion temperature. In the control experiment, the real-time Thomson scattering measurements were employed as the real-time observation, and the electron cyclotron heating, neutral beam injection heating, and gas-puff fueling were employed as the actuators. These experimental results demonstrate the effectiveness of the DA-based control, which compensates for the imperfections of the digital twin using real-time observations and addresses complex multivariate control problems involving unobserved variables. The DA-based control system allows for the harmonious connection of measurement, heating, fueling, and simulation and can provide a flexible platform for digital twin control of future fusion reactors.

J. M. Kwon (Korea Institute of Fusion Energy) presented a talk entitled “Virtual KSTAR (V-KSTAR): A Digital Twin for Real-Time Monitoring, 3D Visualization, and Diagnostic Simulation.” This presentation showcased how V-KSTAR integrates machine operation monitoring, high-fidelity simulation analysis, and advanced diagnostic modeling into a cohesive digital twin framework.21,22 This digital twin is composed of a data processing server, a framework and interfaces to integrate fusion simulations, visualization clients based on both Unity and Unreal Engine, and various libraries to handle 3D meshes and data. A wide variety of data, such as EPICS or MDSplus, can be ingested and converted to either IMAS (ITER's Integrated Modelling & Analysis Suite) or HDF5 format. Furthermore, an interface into large-scale simulations on HPC systems is also provided.
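As a toy illustration of such ingestion (the actual V-KSTAR schema was not presented), a fetched signal can be written into a simple HDF5 layout as follows; the names and structure are hypothetical.

```python
import h5py
import numpy as np

# Hypothetical ingestion step: a signal fetched from, e.g., MDSplus or EPICS
# written into an HDF5 layout for the twin's data server.
signal = {"name": "ip", "time": np.linspace(0.0, 10.0, 1000),
          "data": 1.0e6 * np.ones(1000), "units": "A"}
with h5py.File("shot_12345.h5", "w") as f:
    grp = f.create_group(f"magnetics/{signal['name']}")
    grp.create_dataset("time", data=signal["time"])
    grp.create_dataset("data", data=signal["data"])
    grp.attrs["units"] = signal["units"]
```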

Examples of real-time monitoring included tokamak operation with detailed examination of specific components such as the central solenoid stack and the first wall. Various operational information, such as temperature or plasma equilibrium reconstruction, is presented in an interactive visualization that can be controlled as required. Complex high-fidelity fusion simulations can be integrated into this digital twin through its ability to handle unstructured meshes suitable for plasma simulation based on a CAD model. A rich library for 3D data analyses such as ray tracing and collision detection is included,23 and the digital twin is designed with custom interfaces that allow the simulation to be run within the twin framework.

The utilization of digital twins to assist in the development of 3D simulation codes was also presented. Specific examples included the upgrade of the quasi-2D Monte Carlo NBI simulation code NuBDeC to a fully three-dimensional version that can utilize a CAD-based first wall for detailed machine structure simulation. The resonant magnetic field line tracing code POCA was similarly upgraded to a full 3D capability. These new 3D codes can then be integrated into the virtual tokamak, which includes connections to a dedicated supercomputer for higher-fidelity simulations.

Looking toward the future, there is an ongoing activity to redesign the entire software structure to remove KSTAR-specific components to make the software platform very general and therefore applicable to any other existing tokamak. Additionally, the Open CASCADE Technology24 is being adopted for more systematic handling of CAD data. Finally, the introduction of Bayesian integrated data analysis (IDA) into the virtual tokamak platform has begun to result in more systematic and automated experimental data analysis.

M. de Baar (Dutch Institute for Fundamental Energy Research—DIFFER) presented a talk entitled “Inherent Challenges in Controlling Nuclear Fusion Plasmas.” This presentation offered a comprehensive exploration of the complexities and constraints faced when devising control strategies for high-performance fusion reactors. Achieving optimal and safe performance in fusion reactors requires managing a vast network of functions, from heating and current drive to magnetic configuration, burn dynamics, transport, and plasma–wall interactions. The system's complexity arises from the intricate interplay of continuous plasma processes and discrete events that are tightly coupled through a dense network of sensors and actuators.

To address this complexity challenge for robust control, a novel graph-based modeling framework was proposed. Central to this approach is the use of a Dependency Structure Matrix (DSM) that maps and analyzes the couplings among various components of the control system.25 By applying the framework to ITER, it was demonstrated how the overall system can be decomposed into distinct, relatively independent, weakly connected domains. Based on this insight, the core hypothesis is that a cooperative control strategy, where the cost functions of individual Model Predictive Controllers (MPCs) are extended to account for interdomain coordination, will outperform conventional, decentralized, independently developed controllers.
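The decomposition step can be illustrated with a small sketch: represent the DSM as a directed adjacency matrix and extract its weakly connected components. The functions and couplings below are hypothetical and are not the ITER DSM from the talk.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Illustrative DSM: entry (i, j) = 1 if function i influences function j.
functions = ["density", "current", "temperature", "exhaust", "shape", "vertical"]
dsm = np.array([
    [0, 0, 1, 0, 0, 0],   # density -> temperature
    [0, 0, 1, 0, 0, 0],   # current -> temperature
    [1, 0, 0, 1, 0, 0],   # temperature -> density, exhaust
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1],   # shape -> vertical stability
    [0, 0, 0, 0, 1, 0],   # vertical stability -> shape
])
n_domains, labels = connected_components(csr_matrix(dsm), directed=True,
                                          connection="weak")
for d in range(n_domains):
    print(f"domain {d}:", [f for f, l in zip(functions, labels) if l == d])
# Each weakly connected domain gets its own MPC; coupling terms between
# domains are then added to the MPC cost functions for coordination.
```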

The research approach involved deriving tailored MPCs for different domains, such as density, current density, temperature, and exhaust control,26–28 and then assessing how interactions across these domains affect overall system performance. The key finding was that a cooperative control strategy, wherein the cost functions of individual MPCs are extended to include interdomain coordination, can achieve a Pareto optimal balance in multi-objective optimization. Rather than optimizing each control domain independently, integrating them can lead to superior performance across the entire reactor system.

Synthesis-based engineering of a supervisory controller was discussed. The discharge supervision decides what the controllers should do, subject to the (discrete) plasma state, while the continuous controllers decide how to do it. An example was presented in which a hybrid mode was controlled by a deliberately poor controller. If the hybrid mode is lost, the current density peaks, and uncontrolled sawtoothing emerges. The sawteeth can trigger neoclassical tearing modes (NTMs), whose island width and rotation frequency evolution are governed by the Rutherford–La Haye equation. An MPC with two inputs (electron cyclotron heating and neutral beam injection) can be used to affect these parameters, and the prediction of the discretized NTM state is made available to the supervisor. A model of 10¹² discrete states was synthesized, and the supervisor was, at least in a modeled environment, shown to react to the discrete events of mode growth and mode locking.
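For reference, a commonly quoted schematic form of the modified Rutherford equation for the island width w is shown below; coefficients and the companion rotation-frequency equation vary across references, and the specific form used in the talk was not given.

```latex
% Schematic modified Rutherford equation (illustrative form only):
\frac{\tau_R}{r_s^{2}}\,\frac{dw}{dt} \;=\; r_s\,\Delta'(w)
  \;+\; a_{bs}\,\sqrt{\epsilon}\,\beta_p\,\frac{L_q}{L_p}\,\frac{w}{w^{2}+w_d^{2}}
```

Here τ_R is the resistive time, r_s the rational surface radius, Δ′ the classical tearing stability index, β_p the poloidal beta, L_q and L_p the safety-factor and pressure gradient scale lengths, and w_d a small-island threshold width.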

O. Meneghini (General Atomics) presented a talk entitled “FUSE: Digital Twin Framework for Tokamak Fusion Power Plant Design and Operations.” This presentation introduced how the platform—originally developed for designing a fusion pilot plant (FPP)—is now being extended into a more general solution. FUSE29,30 employs a genetic algorithm to identify viable machine designs that optimize performance, cost, and risk objectives while adhering to physics and engineering constraints as well as stakeholder requirements. Beyond design, FUSE is now extending its applicability by targeting both pulse design and data analysis. All three of these rely on the same theory-based models, same actuators, and same diagnostic models, with the end goal of being machine-agnostic. These also use the same data structures and aim to be as fast as possible. The design goal of the FUSE framework is to enable such a unified digital twin vision.

To realize this vision, FUSE has been written from the ground up in the Julia high-level programming language, which is similar to Python but as fast as C since it is just-in-time compiled. All of FUSE is built around the ITER IMAS data ontology, which allows maximum interoperability with the rest of the fusion community. The framework does support a wide physics fidelity spectrum, trying to balance speed against fidelity, and ML is utilized when it will speed up real computational bottlenecks.

FUSE models span from the plasma core to the site boundary. Models include the plasma current evolution and a fixed-boundary equilibrium solver, and the present work includes creating a free-boundary equilibrium solver that can couple with the coils and the passive conductive structures. From the transport point of view, FUSE can do flux matching using the latest transport models, and these core models are then coupled to the pedestal. From the design optimization and construction standpoint, FUSE can generate 3D CAD using Open CASCADE and even perform costing, which is critical when optimizing a machine to reduce total cost. FUSE utilizes a multi-objective constrained optimization workflow, and an example was shown that compared positive and negative triangularity. Each full machine design takes on the order of minutes to run, so that >10 000 distinct machine designs can be run in a few hours on a small cluster, allowing the creation of the optimal target design.
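The optimization pattern can be conveyed with a toy stand-in (FUSE itself is written in Julia and uses a genetic algorithm; the Python sketch below simply Pareto-filters randomly sampled designs, and the cost, gain, and constraint models are entirely mock):

```python
import numpy as np

rng = np.random.default_rng(0)
designs = rng.random((10_000, 3))              # e.g., normalized (R0, B0, delta)
cost = designs @ np.array([1.0, 2.0, 0.1])     # mock capital-cost model
gain = designs @ np.array([0.5, 1.5, 0.3])     # mock fusion-gain model
feasible = designs[:, 1] < 0.9                 # mock engineering constraint

objs = np.column_stack([cost, -gain])[feasible]  # minimize cost, maximize gain

def pareto_mask(objs):
    """Boolean mask of non-dominated points (all objectives minimized)."""
    keep = np.ones(len(objs), dtype=bool)
    for i, p in enumerate(objs):
        if keep[i]:
            dominated = np.all(objs >= p, axis=1) & np.any(objs > p, axis=1)
            keep &= ~dominated
    return keep

front = objs[pareto_mask(objs)]
print(f"{len(front)} non-dominated designs out of {feasible.sum()} feasible")
```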

A. Zhurba (Next Step Fusion) presented a talk entitled “Fusion Twin Platform: A SaaS Model for Deploying Digital Twins.” This presentation introduced how NSFsim31 and related components can be delivered as a software service, making digital twin development and accessibility more efficient for the fusion community. NSFsim is a free-boundary Grad–Shafranov and 1D transport solver for advanced tokamak simulations, plasma scenario development, device optimization, and ML applications. It simulates plasma evolution coupled with coil currents, induced currents in passive structures, and synthetic signals from magnetic diagnostics. NSFsim enables full scenario development, from breakdown to shutdown, predicting plasma parameter evolution, magnetic control requirements, and stability margins. It includes inverse solvers for fitting coil currents to desired plasma shapes and kinetic profiles, as well as linear response models for vertical stability and density control. Each simulation relies on a digital replica built from the magnetic system's geometry and electrical properties, including poloidal coils, vacuum vessel, and limiter, all represented on customizable meshes. NSFsim is extensively used for both current tokamak operations and future tokamak design studies.32 The Fusion Twin Platform (FTP—https://fusiontwin.io/) is a free web-based tool that democratizes access to advanced tokamak simulations with NSFsim. It provides researchers, educators, and students with pre-built tokamak digital replicas for precise simulations, as well as fusion data visualization, sharing, and collaboration tools.

In preparation for future digital twins, the Next Step Fusion Toolkit was presented, built around NSFsim and to be expanded with TGLF,33 TRAVIS,34 and UEDGE for advanced transport, RF heating, current drive, and edge plasma simulations. Another piece under development is Plasma RL, the reinforcement learning-based plasma control toolkit for real-time control of plasma shape and position by processing raw magnetic diagnostics data and outputting commands to adjust currents in the tokamak's magnetic coils. Presently, this is machine- and pulse-specific, but ongoing work aims to extend the capability to be more machine-agnostic and to cover a wider range of operational regimes. One more component is Plasma Mind, a toolkit for training device-specific surrogate ML models based on historical datasets and generated synthetic data.

A new modern plasma control system for tokamaks, stellarators, and future power plants was presented that is built on the plasma state concept, a set of key parameters defining plasma behavior for real-time control. This system combines conventional and ML-based control methods within a simple yet flexible architecture with a clear separation of control layers to ensure reliability and efficiency.

Panel 1: The first panel on digital twins brought together speakers to explore the practical challenges and future prospects of digital twin technologies in fusion research. The panelists included Davis, Looby, Churchill, Kostuk, and Gibbs (Nvidia), with Nazikian (General Atomics) acting as moderator. The panel discussion underscored that digital twins represent both an essential evolution and a significant challenge in fusion research. Panelists agreed that digital twin frameworks must bridge the gap between high-fidelity, multi-physics simulations and the operational realities of fusion devices. A central need is to integrate disparate data sources—including CAD models, real-time sensor data, and advanced simulation outputs—into cohesive platforms that can inform and optimize facility design, operations, predictive maintenance, and real-time control.

Key challenges in realizing value from digital twin development were discussed. These include the scalability of models across the broad range of spatial and temporal scales inherent in fusion systems, as well as the integration of heterogeneous data types. Speakers highlighted that while exascale computing and AI/ML techniques (e.g., surrogate models, normalizing flows, and neural PDE operators) offer promising avenues to accelerate simulations, the tradeoffs between computational speed and model fidelity remain a critical issue for use in operational settings and in optimizing design. The need for robust uncertainty quantification in model validation was also emphasized as essential for achieving digital twin reliability, especially given the sparse measurement environments expected in future devices, which will require greater use of synthetic measurements than in present experiments.

Another important discussion point was the need for standardization—both in data formats and in interfacing protocols—to facilitate interoperability among national and international research teams and between public and private stakeholders. Panelists emphasized that collaborative efforts are required to develop open-source tools, data standards, and common interface protocols, including public–private partnerships, for collectively advancing digital twin capabilities.

The panel emphasized that the prospects for digital twin deployment in fusion are promising, with the potential to revolutionize reactor design, operational safety, and economic viability. In the long term, digital twins will enable “simulation-first” approaches that preemptively address emergent phenomena and reduce the risk of costly design and operational failures. However, to fully realize this potential, the fusion community must overcome significant technical hurdles related to multi-scale integration, real-time performance, and data management. Overall, the panel painted an optimistic picture—provided that focused investments and collaborative frameworks are established to address these challenges.

Panel 2: The second panel on digital twins built upon the morning's presentations and the first panel discussion to further explore the challenges and opportunities for digital twin technologies in fusion research. The panelists included Morishita, Kwon, de Baar, Meneghini, and Zhurba, with Nazikian acting as moderator. They discussed how digital twin technology can be integrated with advanced plasma control systems in fusion research to tackle both immediate and long-term operational challenges, including issues associated with control in the potentially data-poor environment of future power plants. Additionally, the panel highlighted the vital importance of uncertainty quantification in digital twin development to clarify the reliability of model predictions, enabling researchers and operators to manage risk effectively and make informed design and operational decisions.

According to the panelists, several converging factors have made simulation environments sufficiently mature to serve as reliable test beds for advanced controller design and refinement. First, computing resources have expanded dramatically, enabling researchers to run large-scale, high-fidelity simulations not feasible even a few years ago. Second, the underlying physics models have evolved to capture an increasingly wide range of multi-physics phenomena, from core transport to boundary plasma interactions, giving researchers more confidence that simulated scenarios approximate real device behavior. Third, advances in data assimilation and machine learning now allow for fast, flexible surrogate models. These models can reproduce key plasma dynamics at a fraction of the computational cost, making iterative design and optimization of controllers both faster and more thorough.

One key discussion point was the anticipated scarcity of data in a next-step physical device such as a fusion pilot plant. This scarcity can arise from a smaller number of deployable diagnostics along with reduced temporal or spatial resolution compared to today's experimental machines. Machine learning techniques will be critical to overcoming it by generating reliable synthetic data, built on integrating all available data from existing and future experiments into these models. Sensor failure can be another contributor to the reduction in available data, and any modern control system must be able to anticipate and deal with this loss. One method is to produce models that can substitute for a failed sensor; again, machine learning will play a critical role. How this is handled will also inform the overall feedback design. It will become important to use simulations to deduce what key measurements must be included in any future machine design. Care must be taken to run simulations that cover the full accessible phase space and to include noise in the simulation inputs, which will, in the end, allow the control system to better handle uncertainties in the measured data, including sensor degradation and failure. Clearly, in areas where models or simulations lack capability, more emphasis will need to be placed on obtaining those measurements. The intelligent digital twin will be critical for this overall design methodology.

High-fidelity integrated modeling is emerging as a critical pillar for building and sustaining digital twins in fusion energy. By capturing multi-physics and multi-scale phenomena with ever-increasing fidelity, these frameworks provide the predictive power that a digital twin demands. Advances in data science, such as surrogate modeling, allow these AI/ML models to be incorporated into real-time or near-real-time workflows. Incorporated into iDTs, these integrated modeling frameworks move beyond standalone codes; they become core components of an evolving digital twin ecosystem, one that continuously assimilates data, refines models, and enables predictive control for next-generation fusion devices.

J. Citrin (Google DeepMind) presented a talk entitled “TORAX: A Fast and Differentiable Tokamak Core Transport Simulator in JAX.”35 This presentation introduced an open-source simulator designed to deliver fast, accurate core-transport modeling for pulse planning and optimization, thereby unlocking broad capabilities for controller design and advanced surrogate physics. TORAX is written in Python using JAX,36 and solves coupled time-dependent 1D partial differential equations (PDEs) for tokamak core ion and electron heat transport, particle transport, and current diffusion. JAX's just-in-time (JIT) compilation provides fast computation while maintaining Python's ease of use and extensibility. JAX auto-differentiability enables gradient-based trajectory sensitivity analysis for scenario optimization and controller design without time-consuming manual Jacobian calculations. JAX's inherent support for neural network development and inference facilitates coupling ML surrogates of constituent physics models, key for fast and accurate simulation. TORAX's Python-based modular design offers flexibility in incorporating new physics models and ML surrogates, another hallmark of iDTs, which must continually evolve as new data and models become available.

TORAX provides two main solver methods. A Newton–Raphson solver, utilizing JAX auto-diff capabilities, iteratively solves the nonlinear PDE through root finding of the PDE residual to a desired tolerance, with adaptive time-stepping for robustness. Additionally, a predictor–corrector method is available, which does not require auto-diff and carries out fixed-point iterations. The predictor–corrector option enables TORAX to also operate as a traditional integrated modeling code, with flexible coupling to standard physics models. This allows TORAX to validate ML surrogates against their ground truth within the same framework and also to generate ML surrogate training sets based on these ground truth models, increasing the velocity of model development and verification. Numerous ML surrogate and analytical physics models are available, including current, heat, and particle sources and sinks, Ohmic power, bootstrap current, and ECRH/ECCD/ICRH, as well as flexible coupling to pre-calculated series of various equilibrium code outputs.
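The Newton–Raphson pattern is easy to illustrate with JAX (this is a minimal sketch, not TORAX code): solve one backward-Euler step of a 1D heat diffusion equation by root-finding the residual, with the Jacobian supplied by auto-differentiation rather than hand coding.

```python
import jax
import jax.numpy as jnp

nx, dx, dt, chi = 64, 1.0 / 64, 1e-3, 1.0

def residual(T_new, T_old):
    """Backward-Euler residual for dT/dt = chi * d2T/dx2 on a uniform grid."""
    lap = (jnp.roll(T_new, -1) - 2 * T_new + jnp.roll(T_new, 1)) / dx**2
    r = T_new - T_old - dt * chi * lap
    # Pin boundary values (e.g., fixed core and edge temperatures).
    r = r.at[0].set(T_new[0] - 1.0)
    r = r.at[-1].set(T_new[-1] - 0.1)
    return r

@jax.jit
def newton_step(T_new, T_old):
    J = jax.jacfwd(residual)(T_new, T_old)   # exact Jacobian via auto-diff
    return T_new - jnp.linalg.solve(J, residual(T_new, T_old))

T_old = jnp.linspace(1.0, 0.1, nx)
T_new = T_old
for _ in range(5):                           # Newton iterations to tolerance
    T_new = newton_step(T_new, T_old)
print(float(jnp.max(jnp.abs(residual(T_new, T_old)))))   # residual norm
```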

Code verification was obtained by comparison with the established RAPTOR code37 on ITER-like and SPARC scenarios, with good kinetic profile agreement achieved (<4%) even when applying strongly nonlinear transport models. A small degree of disagreement is expected due to differences in discretization methodology. Owing to the JIT compilation, TORAX runs approximately 5× faster than RAPTOR for the cases compared.

Looking forward, TORAX development aims to extend physics capabilities and begin supporting applications for pulse design and optimization workflows, such as in the SPARC Pulse Planner, among others. TORAX is a natural framework for integrating multi-physics ML surrogates, e.g., for turbulence, neoclassical effects, sources, pedestal, and edge physics. Furthermore, coupling to IMAS data structures is planned, enhancing portability and modularity with various workflows. TORAX is an open-source tool and aims to be a foundational component of broader workflows built by the community for future tokamak integrated simulations.

J. M. Park (Oak Ridge National Laboratory) presented a talk entitled “IPS-FASTRAN: An HPC-driven Integrated Plasma Simulator for End-to-End Pulse Simulations.” This presentation highlighted the framework's capability to model the entire plasma discharge—from ramp-up to termination—offering a robust platform for advanced scenario development and fusion pilot plant design. By leveraging HPC resources, IPS-FASTRAN38 can run large ensembles of scenario simulations, perform uncertainty quantification, and embed AI surrogate models for a more rapid exploration of design options. It supports multi-level integration, from core transport through pedestal and SOL modeling, enabling self-consistent scenario optimization that is a step closer to the digital twin's real-time predictive and control-oriented capability.

The presentation included the utilization of IPS-FASTRAN for extending DIII-D Advanced Tokamak (AT) scenarios to KSTAR, guiding the DIII-D heating and current drive upgrade, and extrapolation of the DIII-D steady-state scenario to ITER. Theory-based reactor optimization was also presented, including a promising path to a compact advanced tokamak that can generate net electricity, the importance of a high magnetic field and broadening of current profile to increase fusion gain in K-DEMO, and an optimized configuration for EXCITE.

Three main areas were presented in which IPS-FASTRAN serves as a unified framework, connecting high-fidelity integrated modeling with large ensembles of simulations for new applications of integrated modeling. The first was TokSimulator, a high-fidelity plasma pulse simulator that successfully reproduced DIII-D pulses from current ramp-up to pulse termination. AI/ML acceleration via surrogate models of TGLF and EPED39 was critical to this success. The second was the modeling of the core, edge pedestal, and scrape-off layer. Results from DIII-D modeling matched experimental data (ne, Te, Ti) reasonably well from the core to the divertor, and this will enable discharges to be optimized self-consistently for core performance and heat exhaust. The time to solution for this workflow was significantly accelerated by constructing surrogate models for key components. The third area was the theory-based design of a fusion reactor (TokDesigner) utilizing IPS-FASTRAN. This approach utilized a large ensemble of high-fidelity IPS-FASTRAN simulations to create a global parameter surrogate model as a function of design and operation parameters such as size, plasma shape, injection power, and density. These surrogate models have been utilized for device optimization over different scales, including the DIII-D heating and current drive upgrade, scenario development for ITER, DIII-D, and KSTAR, as well as reactor design for FNSF and EXCITE.

J. Merson (Rensselaer Polytechnic Institute) presented a talk entitled “Geometry-Aware Coupling of Multimodel Simulations and Data for Digital Twins through the Parallel Coupler for Multimodel Simulations (PCMS).” This presentation highlighted an approach for bridging data from diverse analysis codes in HPC environments, complete with coordinate transformations and parallel control utilities, to enable advanced digital twin workflows. One of the challenges associated with digital twins is that they must be able to support the full range of domain descriptions, from physics geometry, to parametric design CAD, to fully detailed manufacturing CAD of complex structures such as divertors and RF antennas. Additionally, these representations should be flexible enough to incorporate feedback from as-built or post-shot 3D measurements. To support these needs, the iDT will need an abstract representation of the domain information that can support in situ geometry interaction and modification and the ability to combine and modify the range of domains in a single geometric model. J. Merson presented tools that can fulfill these needs through a topology-based domain infrastructure that can unite the various geometric domains. By making use of topological information, procedures have been developed that can automatically eliminate small features that are not relevant for simulations. Furthermore, geometry and meshing tools have been produced as well as the ability to fix CAD model errors such that automatic meshing tools can be applied as required.40 Key features include working directly with physics and engineering geometry (e.g., GEQDSK, VMEC, and Parasolid) and special purpose meshing algorithms to support unique fusion requirements such as field following meshes.

The software suite Parallel Coupler for Multimodel Simulations (PCMS) has also been developed, which allows for parallel control and field transfer between applications that have different parallel partitioning schemes.41 This can be used to support coupling with no modifications of applications' data structures or numerical methods. Such a capability is extremely appealing for fusion simulation tools that have had considerable development by highly specialized computational physicists, where rewriting codes from scratch within a unified framework is not feasible. PCMS also provides field transfer operations that can satisfy physics requirements such as the conservation of physical quantities. These operations are being extended to support five- and six-dimensional data, which are common in fusion applications.
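The conservation requirement can be illustrated with a toy one-dimensional analog of such a transfer (PCMS performs this in parallel on distributed meshes; the sketch below is serial and illustrative only):

```python
import numpy as np

def conservative_transfer(src_edges, src_vals, tgt_edges):
    """Piecewise-constant field transfer between non-matching 1D meshes.

    Each target cell receives the overlap-weighted average of source cells,
    so the integral of the field is preserved exactly.
    """
    tgt_vals = np.zeros(len(tgt_edges) - 1)
    for j in range(len(tgt_edges) - 1):
        lo, hi = tgt_edges[j], tgt_edges[j + 1]
        total = 0.0
        for i in range(len(src_edges) - 1):
            overlap = max(0.0, min(hi, src_edges[i + 1]) - max(lo, src_edges[i]))
            total += src_vals[i] * overlap
        tgt_vals[j] = total / (hi - lo)
    return tgt_vals

src_edges = np.linspace(0, 1, 33)             # 32 source cells
tgt_edges = np.linspace(0, 1, 21)             # 20 target cells
src_vals = np.sin(0.5 * np.pi * (src_edges[:-1] + src_edges[1:]))
tgt_vals = conservative_transfer(src_edges, src_vals, tgt_edges)
# The two integrals agree to round-off: the transfer is conservative.
print(np.sum(src_vals * np.diff(src_edges)), np.sum(tgt_vals * np.diff(tgt_edges)))
```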

Effort has been put forth to integrate PCMS into a variety of software libraries commonly used within the fusion community. One example is the integration into Fusion-IO, a set of libraries that provide a common physics-oriented interface to data generated by various fusion codes. To support in-memory coupling, PCMS makes use of the Adaptable Input/Output System version 2 (ADIOS2) for rapid parallel data transport. By using this method, PCMS has shown scaling on Leadership Computing-scale systems such as Frontier and Perlmutter. Finally, PCMS is developing tools to integrate simulation data with experimental diagnostics through a unified interface. This effort relies on a common schema, under development by ORNL, built on top of IMAS, OMAS (Ordered Multidimensional Array Structures), and ADIOS2.

M. Zhao (Lawrence Livermore National Laboratory) presented a talk entitled “2D UEDGE Database for KSTAR Detachment Control.” This presentation highlighted the development of a comprehensive 2D UEDGE simulation database that explores key control parameters for KSTAR detachment control and lays the groundwork for machine learning surrogate models to enhance real-time plasma performance. The current detachment control algorithm relies on controlling the attachment fraction,42 defined as the ratio of the ion saturation current measured on the outer target plates to the current calculated by the two-point model. However, the algorithm provides no information beyond the degree of detachment, owing to the limitations of the two-point model. The goal is to replace the two-point model with an ML surrogate model trained on UEDGE simulations.
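In code, the control quantity itself is a simple ratio; a minimal sketch follows, where the two-point-model prediction is taken as an input (in the algorithm of Ref. 42 it is computed in real time from upstream quantities, and the numbers below are illustrative):

```python
import numpy as np

def attachment_fraction(j_sat_measured, j_sat_two_point):
    """Ratio of measured outer-target ion saturation current to the
    two-point-model prediction, integrated along the target."""
    return np.sum(j_sat_measured) / np.sum(j_sat_two_point)

# Fully attached plasmas give a fraction near 1; detachment drives it down,
# and the controller feeds back (e.g., on gas puffing) to hold a setpoint.
frac = attachment_fraction(np.array([0.4, 0.8, 0.5]), np.array([1.0, 1.2, 0.9]))
print(f"attachment fraction: {frac:.2f}")
```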

The UEDGE database was built using a base case that captures the essential physics of detachment, with a balance between accuracy and computational efficiency. The base case was constructed using a real equilibrium from the KSTAR campaign with a carbon divertor, and the grid mesh resolution was optimized (poloidal 64, radial 20) to achieve a reasonable computational time. Cross-field drifts were also found to play an important role in detachment modeling and were included in the UEDGE modeling. For impurity modeling, it was found that the multi-charged model took ∼20× longer than the fixed-fraction model yet yielded similar results. Thus, the fixed-fraction model was used.

The database was built by varying five control parameters, including equilibrium, input power, core boundary density, carbon fraction, and anomalous diffusivity. The resulting database consists of approximately 78 000 converged cases, which is deemed sufficient for building a surrogate model. The resulting surrogate model outputs target profiles of electron density, electron/ion temperature, saturation current, 2D radiation profiles, and heat deposition along both target plates.

The presentation highlighted some key features, including the scaling of outer mid-plane electron temperature with power inputs and the concentration of detachment onset cases around a specific outer strike point temperature of ∼3 eV. In addition, the results demonstrated that the radiation front moves away from the outer target plate as the plasma detaches. The results demonstrate the potential of the database to provide valuable insights into the physics of detachment and to support the development of improved control algorithms.43 Future work includes utilizing the framework to build a database and associated surrogate models for the current KSTAR campaign with a tungsten divertor.

The panel on high-fidelity integrated modeling frameworks allowed for a more detailed discussion on the role and challenges of utilizing HPC in digital twin technologies in fusion research. The panelists included Citrin, Park, Merson, and Zhao, with Gibbs acting as moderator. They discussed issues associated with the complexity and difficulty of coupling multi-physics from disparate codes into a unified framework that could be built into an iDT.

Over the last decade, there has been significant effort within individual simulation codes to find speed gains as Moore's law nears its end. However, the future depends on coupling applications to build a better overall understanding of plasma behavior. The difficulty arises because most simulation software written over the past several decades was not built to be coupled to external applications.

One approach is, of course, to rewrite all legacy simulation software in modern languages, architected from the start for coupling to external software, whether other simulation codes or databases. Unfortunately, this is a herculean effort requiring resources that do not exist. Certainly, new software should be written with this mindset, but that will only get the community partway there. So, the question remains: how to reuse older codes in a coupled multi-physics environment.

It is critical to understand a code's existing data structures and coordinate systems, not to change them, but to know how to translate them for the required software coupling. This understanding comes from documentation, whether within the software or in some external text. A modest documentation effort by software authors can empower many more people to use their existing software in a coupled multi-physics problem. This activity must be accompanied by a community consensus on shared terminology with very specific meanings. It sounds simple, but whether a coordinate system is right- or left-handed makes a very big difference. Thus, some time spent on terminology definition and documentation could go a long way as a force multiplier as the community looks to create the modeling and simulation environment necessary for an iDT.

While traditional integrated modeling codes remain indispensable, their computational cost and complexity often prohibit real-time or ensemble-based workflows. Surrogate models, particularly those constructed using AI and ML techniques, offer a strategy to compress complex physics into fast evaluation times.

K. Humbird for B. Spears (Lawrence Livermore National Laboratory) opened the AI/ML session with a talk entitled “Frontier AI for Science, Security, and Technology (FASST): Transformative AI for the Fusion Community and the Nation.” This presentation outlined how FASST will harness exascale computing, advanced simulations, and a continuous stream of high-quality data to build digital twins and accelerate inertial confinement fusion (ICF) research. The broad aim is to unify exascale computing, high-fidelity simulation tools, and curated scientific data resources to build new frontier-scale AI models and systems. By continuously coupling large-scale simulation outputs with real-world experiments, FASST will enable adaptive digital twins that refine themselves in real time. A program of this magnitude would significantly advance the creation of intelligent digital twins for both inertial and magnetic confinement.

The presentation laid out a technical roadmap built on four core DOE capabilities that are critical for transforming ICF research: exascale computing (exemplified by the forthcoming El Capitan system with over 30 000 GPUs), advanced high-fidelity simulation tools that capture the complex physics, cutting-edge machine learning frameworks for integrating diverse data sources, and access to unique experimental facilities like the National Ignition Facility (NIF), which provide essential high-quality data.

An example was presented of a digital twin for advanced manufacturing, specifically 3D printers. This twin is used to optimize printing strategies to create materials and components with very specific and complex properties. A major element of this work is the investigation of data logistics, including how data are captured, stored, reduced, and transferred between an HPC resource and the physical laboratory. At the intermediate scale, the ELI laser facility44 has successfully deployed ML models to optimize experimental operation. As more experimental data are collected, the ML model improves and drives the results toward highly optimized solutions. A similar result was presented for the GALADRIEL laser facility.45 Finally, at a much larger scale, the presentation discussed the creation of a digital twin for the Scorpius pulsed power accelerator46 with the end goal that the DT becomes sophisticated enough to be used to optimize experiments.

To develop meaningful digital twins as presented, the U.S. requires a national AI program with contributions from a broad consortium of partners. FASST is aiming to provide the umbrella under which to do this work.

Y. Ghai (Oak Ridge National Laboratory) followed with a talk entitled “Surrogate Model of Energetic Particle Transport in Reactor-Relevant Fusion Devices.” This work focuses on building a fast, accurate surrogate model for alpha-particle transport, crucial for predicting performance in next-generation fusion devices. Traditional high-fidelity simulation codes capture energetic particle phenomena but are computationally prohibitive for iterative reactor design and real-time control. For an ITER steady-state scenario, the FAR3D gyrofluid code47 was used to simulate the nonlinear transport of 1 MeV post-slowing-down energetic alpha particles and ∼300 keV deuterium neutral beam ions in the presence of unstable Alfvén eigenmodes. These modes can induce significant particle losses, potentially damaging the first wall.48 The resulting simulations of energetic alpha-particle transport provided the necessary data for surrogate model development.

The simulations tracked multiple toroidal mode number Alfvén eigenmodes in the ITER scenario that initially grew linearly before saturating due to the influence of zonal flows and nonlinear toroidal mode couplings. From the saturated phase, the team extracted data at multiple time slices and radial points, amassing approximately 6000 data points. Detailed correlation analyses revealed that the deuterium beam ion and alpha-particle density gradients were the dominant factors affecting the energetic ion particle fluxes. These simulations were used to train a surrogate model based on a feedforward neural network. The team experimented with different architectures, including networks with four to five hidden layers and hundreds of neurons per layer. The resulting surrogate model successfully reproduced the nonlinear transport flux trends observed in FAR3D simulations with high R2 scores, all while reducing computation times by orders of magnitude. This work paves the way for real-time control applications and iterative design processes in fusion reactor development.
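As a concrete illustration of this type of workflow (and not the actual FAR3D surrogate), the sketch below trains a feedforward network on synthetic stand-in data with a comparable sample count and layer structure; the feature names and data are assumptions for illustration only.

    # Minimal sketch of a feedforward surrogate mapping local plasma
    # quantities to an energetic-particle flux; all data are synthetic.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 6000  # comparable to the ~6000 time-slice/radius samples described

    # Stand-in features: [alpha density gradient, beam ion density gradient,
    # temperature gradient, radial location]; stand-in target: particle flux.
    X = rng.normal(size=(n, 4))
    y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * np.tanh(X[:, 2])
         + 0.05 * rng.normal(size=n))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    scaler = StandardScaler().fit(X_train)

    # Four hidden layers with hundreds of neurons, echoing the architectures
    # the team reported trying.
    model = MLPRegressor(hidden_layer_sizes=(256, 256, 256, 256),
                         max_iter=500, random_state=0)
    model.fit(scaler.transform(X_train), y_train)
    print("R^2 on held-out data:", model.score(scaler.transform(X_test), y_test))

Once trained, each prediction is a single forward pass, which is what delivers the orders-of-magnitude reduction in evaluation time relative to the originating simulations.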

Future work plans to broaden the surrogate's parameter space by incorporating more ITER‐like regimes and further validating the model against extensive simulation scans. The ultimate goal is to integrate the model into comprehensive frameworks like IPS-FASTRAN,38 enabling self-consistent burn performance calculations and supporting digital twin strategies for next-generation fusion pilot plants.

E. Kolemen (Princeton University) presented a talk entitled “AI for Fusion Diagnostics, Control, and Science Discovery.” This presentation highlighted how advanced machine learning methods can robustly integrate experimental measurements with simulation data to drive fusion reactor performance. Drawing on advanced neural network architectures, the talk demonstrated how AI can serve as the critical tool to synthesize diverse and often imperfect measurements into accurate digital representations of plasma states, an essential capability for real‐time control in harsh fusion environments.

A central theme was the development of robust reconstruction algorithms that overcome sensor failures and noisy data. The presentation highlighted a neural-network-based equilibrium reconstruction called rt-CAKENN, developed using DIII-D data, that reliably recovers plasma shape and kinetic profiles even when individual diagnostics, like magnetic pickup coils or Thomson scattering, underperform. By teaching the models to learn from both high-quality and degraded inputs, the approach not only ensures accurate state estimation but also provides the resilience critical for maintaining plasma control in fusion environments.
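A common way to obtain this kind of resilience, sketched below under the simple assumption of random channel dropout (this is generic augmentation code, not the rt-CAKENN training pipeline), is to degrade the training inputs at random so the network learns to tolerate failed or miscalibrated sensors.

    # Generic sketch: augment training inputs with simulated sensor failures
    # and calibration jitter so a reconstruction network learns robustness.
    import numpy as np

    def degrade_diagnostics(x: np.ndarray, rng: np.random.Generator,
                            p_drop: float = 0.1, noise: float = 0.02) -> np.ndarray:
        """Simulate failed channels (zeroed) and calibration noise.

        x: (n_samples, n_channels) array of diagnostic signals.
        """
        mask = rng.random(x.shape) > p_drop          # drop ~10% of channels
        jitter = 1.0 + noise * rng.normal(size=x.shape)
        return x * mask * jitter

    rng = np.random.default_rng(1)
    batch = rng.normal(size=(32, 120))               # e.g., 120 magnetic probes
    augmented = degrade_diagnostics(batch, rng)
    # Train the reconstruction network on (augmented input, clean target)
    # pairs so it learns to infer the plasma state when sensors fail.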

The presentation further showcased surrogate models that combine experimental and simulation data, enabling rapid predictions of plasma evolution. These fast inference models, capable of saving precious milliseconds in control cycles, were demonstrated to successfully suppress disruptive phenomena (such as edge localized modes) in experiments on platforms like KSTAR and DIII-D.49,50 On DIII‐D, a reinforcement learning-based algorithm was implemented to autonomously optimize actuator settings (such as heating power), effectively maximizing fusion performance while staying below disruption thresholds.51 By merging data‐driven methods with established physics codes, these hybrid control strategies successfully threaded the needle between achieving high fusion gain and avoiding instabilities, demonstrating a clear pathway toward robust, real‐time control of advanced fusion experiments.

Looking ahead, the presentation emphasized a clear path forward: integrating these AI-driven surrogate models into comprehensive digital twin frameworks that continuously refine themselves through closed-loop feedback from both simulations and real-time diagnostic data. Furthermore, the presentation stressed the importance of cross-validating multiple diagnostic sources and enhancing model interpretability to ensure that digital twins remain adaptive and reliable under evolving experimental conditions, thus paving the way for autonomous, high-performance fusion energy.

A. Jalalvand (Princeton University) presented a talk entitled “Enhancing Temporal Resolution in Fusion Diagnostics through Multimodal Neural Networks.” This presentation described an innovative approach to overcoming the inherent limitations of fusion diagnostics by leveraging advanced machine learning. The presentation highlighted that key diagnostics like Thomson scattering provide high-quality measurements yet suffer from low temporal resolution, often resulting in critical transient plasma events being missed. To address this challenge, the presentation introduced a strategy to enhance these sparse signals by reconstructing them from higher-frequency data provided by complementary diagnostics.

At the heart of this approach lies a multimodal neural network framework named Diag2Diag.52 This system is designed to ingest data from a diverse set of sources, including electron cyclotron emission, interferometry, magnetic probes, and charge exchange recombination, and to exploit the inherent correlations among these measurements. By training on datasets where complete Thomson scattering information is available, the network learns to generate synthetic signals at resolutions reaching up to 500 kHz. In doing so, it effectively fills in the gaps where direct measurements are incomplete or absent.
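The sketch below illustrates this training setup in miniature with synthetic stand-in data: a regressor is fit from fast-sampled diagnostics to the target signal at the sparse times where it exists and is then evaluated on the full fast timeline. The channel counts, rates, and model are illustrative assumptions, not the Diag2Diag architecture.

    # Illustrative sketch of the sparse-to-dense training setup: learn a map
    # from fast diagnostics to a Thomson-like signal at the times where
    # Thomson exists, then evaluate on the full fast-sampled timeline.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    t_fast = np.arange(0, 1.0, 1.0 / 500_000)[:50_000]  # 500 kHz fast timeline
    fast = rng.normal(size=(t_fast.size, 8))            # e.g., ECE + magnetics

    thomson_stride = 2500                               # sparse target, ~200 Hz
    idx = np.arange(0, t_fast.size, thomson_stride)
    thomson = fast[idx, :2].sum(axis=1) + 0.05 * rng.normal(size=idx.size)

    model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300,
                         random_state=0)
    model.fit(fast[idx], thomson)                       # train at Thomson times
    synthetic_thomson = model.predict(fast)             # infer at full 500 kHz

The design choice worth noting is that the network never needs dense ground truth: it exploits the physical correlations between diagnostics, learned at the sparse matched times, to fill in the rest of the timeline.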

The synthetic, super-resolved outputs not only closely matched the available real Thomson measurements but also revealed fine-scale plasma phenomena such as edge localized modes and resonant magnetic perturbation-induced island effects, details that are typically obscured by lower sampling rates and thus previously accessible only through simulation. These enhanced diagnostic insights offer a deeper understanding of the rapid transient events governing plasma behavior, which is essential for developing more robust plasma control strategies.

Looking ahead, the presentation outlined several promising directions for future study. One potential extension is to apply similar techniques to improve spatial resolution, thereby complementing the temporal enhancements already achieved. Furthermore, integrating these advanced synthetic diagnostic reconstructions into comprehensive digital twin frameworks could enable real-time feedback and control mechanisms in fusion reactors. Such integration holds the potential not only to optimize reactor performance but also to pave the way for a new era of predictive, AI-driven fusion science, ultimately accelerating the transition toward autonomous operation in next-generation fusion devices.

S. Wiesen (Dutch Institute for Fundamental Energy Research—DIFFER) presented a talk entitled “Towards Validated Higher-Fidelity AI Models in Fusion Exhaust.” This presentation showcased the development and application of machine learning methods to create surrogate model predictors for power and plasma exhaust, as well as time-dependent surrogate models for exhaust in tokamaks. The motivation for investigating ML methods is the need to reduce simulation times, which can reach weeks or months for traditional codes like SOLPS-ITER.53 Specifically, neural networks were used to create surrogate models that can predict plasma behavior in a fraction of the time.

Two main approaches were presented: a low-fidelity model SOLPS-NN54 based on SOLPS-ITER that utilizes a deep learning neural network approach and a higher-fidelity model using transfer learning to improve the accuracy of the predictions. The low-fidelity model was able to capture the general trends of the plasma behavior but with some errors. The higher-fidelity model, on the other hand, showed improved accuracy, particularly when compared to the ITER design divertor database.

The presentation also touched on the development of a 1D dynamic model, DIV1D-NN,55 which uses a neural network to predict the time evolution of the plasma profile. The presented work included tests of various partial differential equation-neural network (PDE-NN) approaches to advance snapshots of profiles forward in time. The results showed that the neural network was able to accurately capture the behavior of the plasma, including hysteresis effects, and predicted a 2 ms time evolution of the 1D profile faster than real time. It therefore has the potential to be used for real-time control and prediction of plasma behavior, with the goal of creating a digital twin for fusion exhaust.
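The essence of such neural PDE surrogates is an autoregressive rollout, sketched below with a toy placeholder for the learned update (the real DIV1D-NN operator is a trained network; the dynamics, step size, and quantities here are illustrative assumptions only).

    # Sketch of the autoregressive rollout used by neural PDE surrogates:
    # a learned operator maps the current 1D profile (plus boundary inputs)
    # to the profile one step ahead, and is applied repeatedly.
    import numpy as np

    def step(profile: np.ndarray, upstream_density: float) -> np.ndarray:
        """Placeholder for a trained profile -> profile update (dt ~ 1 us)."""
        return profile + 0.01 * (upstream_density - profile)  # toy relaxation

    profile = np.linspace(5.0, 1.0, 200)   # e.g., a field along the divertor leg
    trajectory = [profile]
    for _ in range(2000):                  # roll out a ~2 ms window
        profile = step(profile, upstream_density=3.0)
        trajectory.append(profile)

    # Because each learned step is a single forward pass, the full rollout
    # can run faster than real time, which enables control-oriented use.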

Looking to future work, the aim is to increase the physics fidelity of SOLPS-NN and DIV1D-NN, including testing active learning techniques. The creation of a fast learned dynamical SOL model will be investigated by combining dynamical 1D and spatial 2D information. Experimental data from ASDEX-U will be used to both validate and improve the models. The ultimate goal is to create a model predictive control scheme that can be used to optimize plasma performance and achieve better confinement. The presentation demonstrated useful applications of ML and AI in fusion exhaust modeling, enabling an unprecedented combination of speed and accuracy in simulation. The applied methods and developed tools like SOLPS-NN will also be used for fast predictions of the operational space for SPARC.

Z. Li (California Institute of Technology) presented a talk entitled “Plasma Surrogate Modelling Using Fourier Neural Operators.” This presentation described results demonstrating accurate predictions of plasma evolution for both simulations and experiments using Fourier neural operators (FNOs). Modeling plasma evolution with traditional numerical solvers is often computationally expensive, which motivated this study of machine learning-based surrogate models. Traditional neural networks are not well suited to scientific computing problems, where inputs are continuous functions rather than discrete vectors. The FNO addresses this limitation by using linear integral operators, which can be implemented with the fast Fourier transform. It learns efficient spatial and spectral representations, which can be scaled to high-resolution chaotic systems.56

The use of FNOs to model plasma MHD was investigated by using the JOREK nonlinear MHD simulation code to create a training dataset. The FNO model was trained on a dataset of 2000 simulations with varying initial conditions, each composed of 50 time steps. The density, electric potential, and temperature fields were all modeled using the FNO. Overall, the FNO model showed a significant speedup over JOREK while accurately capturing the temporal evolution. These FNO-based machine learning models are very fast, able to make an inference in ∼10 ms, and can therefore also be used for real-time plasma control.

Another key advantage of the FNO approach is its ability to handle different resolutions and domain sizes. The model can be trained on a coarse grid and deployed on a finer grid (super-resolution), making it particularly useful for working with mixed datasets. The example presented demonstrated this by training the model on a 100 × 100 resolution grid and deploying it on a 500 × 500 resolution grid, with comparable results.
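This resolution independence follows from the learned weights living on Fourier modes rather than grid points. The sketch below shows a minimal 1D spectral-convolution layer in this spirit (an illustrative sketch, not the authors' implementation): the same layer evaluates on a 100-point and a 500-point grid without retraining.

    # Minimal 1D spectral-convolution layer in the spirit of the FNO:
    # transform to Fourier space, apply a learned linear operator to the
    # lowest `modes` frequencies, and transform back to the grid.
    import torch

    class SpectralConv1d(torch.nn.Module):
        def __init__(self, channels: int, modes: int):
            super().__init__()
            self.modes = modes
            scale = 1.0 / channels
            self.weight = torch.nn.Parameter(
                scale * torch.randn(channels, channels, modes,
                                    dtype=torch.cfloat))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, grid)
            x_ft = torch.fft.rfft(x)                      # to Fourier space
            out_ft = torch.zeros_like(x_ft)
            k = min(self.modes, x_ft.shape[-1])
            out_ft[:, :, :k] = torch.einsum(
                "bim,iom->bom", x_ft[:, :, :k], self.weight[:, :, :k])
            return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to grid

    layer = SpectralConv1d(channels=4, modes=16)
    coarse = torch.randn(8, 4, 100)   # train-time resolution
    fine = torch.randn(8, 4, 500)     # same layer, zero-shot finer grid
    print(layer(coarse).shape, layer(fine).shape)

Because only the retained Fourier modes carry parameters, the layer is discretization-agnostic: evaluating on the finer grid simply supplies more frequencies, of which the same learned subset is transformed.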

The model was further tested on experimental camera data from the MAST device, with both a radial main plasma field of view and a view of the divertor region.57 Real-time forecasting of the fast camera images accurately predicted the plasma evolution and the L–H transition. Ongoing work includes the development of methodologies for complex geometries, scale consistency, and attention-based neural operators. The FNO offers a viable alternative for surrogate modeling: it is quick to train and infer, requires fewer data points, and can perform zero-shot super-resolution while delivering high-fidelity solutions.

The panel on AI/ML provided an opportunity for the speakers to discuss in greater detail the challenges and opportunities for AI/ML techniques to benefit digital twin technologies in fusion research. The panelists included Spears, Kolemen, Jalalvand, Wiesen, and Li, with Gibbs acting as moderator. They discussed how advanced computational methods, such as high-fidelity simulations and AI-driven surrogate models, could be used by digital twins to provide actionable insights, operational support, and real-time control solutions.

The panel emphasized that a digital twin must unify large-scale simulation results with real-time diagnostic data to furnish a statistical characterization of both the system's current state and its dynamic evolution. For example, when experimental measurements differ from simulation predictions, the digital twin should be capable of exploring multiple explanations or outcomes rather than forcing a single best guess. This requirement underscores the importance of probabilistic reasoning and uncertainty quantification, enabling the digital twin to refine its predictions in real time, handle unexpected variations in measured data, and guide operational decisions with greater confidence.
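One simple route to such probabilistic reasoning, sketched below with toy stand-ins for trained surrogates, is an ensemble: the members' mean is the twin's best estimate and their spread is the reported uncertainty, so discrepancies with measurements can be flagged rather than hidden behind a single best guess.

    # Minimal sketch of ensemble-based uncertainty quantification. Each
    # member is a stand-in for an independently trained surrogate; the
    # spread of their predictions is the uncertainty the twin reports.
    import numpy as np

    def make_member(seed: int):
        """Stand-in for training one surrogate on resampled data."""
        w = 1.0 + 0.05 * np.random.default_rng(seed).normal()
        return lambda x: w * np.sin(x)    # toy prediction function

    ensemble = [make_member(s) for s in range(16)]
    x = np.linspace(0.0, 3.0, 50)
    preds = np.stack([m(x) for m in ensemble])

    mean = preds.mean(axis=0)             # twin's best estimate
    std = preds.std(axis=0)               # reported uncertainty

    # If a new measurement falls many standard deviations from `mean`,
    # the twin can flag the discrepancy and explore alternative
    # explanations instead of committing to a single interpretation.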

The discussion also delved into the practical challenges of sensor failure and noisy data. Several speakers stressed that in a data-poor environment typical of next-generation fusion devices, robust surrogate models must be developed to reconstruct missing or degraded diagnostics. The panel pointed out that machine learning approaches, such as the use of multimodal neural networks, were demonstrating efficacy in reconstructing critical measurements, even when key diagnostics failed.

The panel also discussed the challenge of balancing computational speed with the physical accuracy needed for predictive control. One promising approach, based on Fourier neural operators (FNOs), demonstrated the capability to accelerate the modeling of complex system dynamics by orders of magnitude while preserving physics fidelity. The development of neural operators is seen as a critical tool in the AI/ML toolbox for realizing predictive control. The panelists concluded that the complexity and multi-scale nature of fusion phenomena demand a hybrid framework for control, one that leverages both data-driven and physics-model approaches.

Overall, the panel painted an optimistic yet measured picture. While fully autonomous AI-controlled fusion devices may still be on the horizon, the roadmap is clear: strengthen the linkage between high-fidelity simulations and experimental data, incorporate robust uncertainty quantification, and foster collaborative partnerships across national labs, industry, and academia to develop these advanced control capabilities. This multi-pronged approach is viewed as essential for advancing digital twins that can effectively navigate the complexities of fusion science for design, operations, and control.

The mini-conference presentations collectively show that the fusion community is well on its way to realizing intelligent digital twins. The community has advanced integrated modeling frameworks, HPC capabilities for large-ensemble simulation and uncertainty quantification, ML-driven surrogates for real-time inference, data assimilation for control, and geometry-aware coupling tools that unify disparate models into coherent workflows.

While the community is converging on powerful solutions, notable challenges persist:

  • Computational scale and latency: Running full 3D nonlinear MHD or gyrokinetic turbulence codes at millisecond time scales for real-time operations is still out of reach. Surrogates and hierarchical modeling partly solve this, but further algorithmic advances, GPU acceleration, and specialized hardware might be needed.

  • Legacy code integration: Many fusion codes were developed incrementally over decades, lacking modular Application Programming Interfaces (APIs) or standard data structures. Tools like PCMS help, but a cultural shift is needed: code authors should expose consistent APIs, unify coordinate definitions, and provide test cases for coupling.

  • Robustness to diagnostic and actuator failures: Intelligent digital twins must handle real-world conditions, including failed diagnostics, drifting calibration, or actuator limits. AI can partially infer missing signals, and uncertainty estimates can indicate when trust in predictions is low.

  • Comprehensive validation campaigns: Achieving community trust requires systematic validation campaigns, cross-machine benchmarks, and publicly available validation datasets. Collaborative efforts involving industry, national laboratories, and universities spanning national and international organizations are needed, with standardized validation steps and metrics.

  • Evolving physics models and upgrades: As technology and science progress, new materials, diagnostics, and actuators become available. The iDT must quickly incorporate updated physics models or refine surrogates. Continuous integration and DevOps-like workflows for scientific codes can streamline this.

Despite these challenges, the opportunities are immense. Intelligent digital twins can serve as the cognitive scaffold on which future fusion operations are built. By partnering HPC/AI experts with physicists and engineers, establishing community standards, and continuously refining models through data assimilation, digital twins can transform fusion development timelines and learning rates. As fusion enters the burning plasma era, the complexity of device operations, control challenges, and engineering constraints escalates. Intelligent digital twins offer a way to cope with and even leverage this complexity. Instead of painstakingly planning each plasma pulse, operators can rely on an iDT to provide scenario recommendations, predict potential problems, and propose control actions. Instead of long, empirical design cycles for future reactors, engineers can run optimization loops on iDT platforms, drastically shortening the path to cost-effective, high-performing solutions.

What became self-evident from the broad range of mini-conference presentations is that no one group or organization possesses the skills required to create an iDT on their own. The successful creation of fusion iDTs requires a collaborative effort to accelerate deployment and validation that spans industry, national laboratories, and universities.

The path forward involves substantial research and coordination:

  • Standardization of interfaces, data formats, and coordinate conventions: So that adding new physics modules, codes, or surrogates is straightforward.

  • Comprehensive validation efforts: Establish benchmarks, share open datasets, and run multi-code comparison studies to build community confidence.

  • Robust uncertainty quantification and trust-building: Integrate Bayesian inference, simulation-based inference, and active learning workflows so that every prediction comes with an uncertainty estimate.

  • Close integration with control and experiment planning tools: Digital twins should not be restricted to offline scenarios or design studies; they must be integrated into real-time control loops and experiment planning systems that support operations.

  • Embrace large-scale AI initiatives: Leverage programs like FASST and HPC-based AI platforms to develop massive foundation models for fusion, enabling transfer learning across different devices and scenarios.

In conclusion, intelligent digital twins represent a paradigm shift in how we approach fusion research, operations, and engineering. By unifying HPC, AI, integrated modeling, data assimilation, and advanced control, iDTs are set to become indispensable tools as we move inexorably toward the long-sought goal of commercial fusion energy. The urgency to accelerate the development and deployment of intelligent digital twins in fusion research cannot be overstated. As fusion technology reaches new milestones and the capabilities of AI and high-performance computing rapidly evolve, any delay in advancing iDT frameworks risks falling behind global innovation trajectories and missing critical opportunities for reactor optimization and risk mitigation. Rapid development is imperative to ensure that experimental insights from existing facilities are quickly translated into design and operational improvements, ultimately driving the transition to commercial fusion energy.

The authors would like to thank the speakers for their thoughtful and valuable contributions: R. M. Churchill (Princeton Plasma Physics Laboratory), J. Citrin (Google DeepMind), A. Davis (United Kingdom Atomic Energy Authority—UKAEA), M. de Baar (Dutch Institute for Fundamental Energy Research—DIFFER), Y. Ghai (Oak Ridge National Laboratory), K. Humbird (Lawrence Livermore National Laboratory), A. Jalalvand (Princeton University), E. Kolemen (Princeton University), M. Kostuk (General Atomics), J. M. Kwon (Korea Institute of Fusion Energy), Z. Li (California Institute of Technology), T. Looby (Commonwealth Fusion Systems), O. Meneghini (General Atomics), J. Merson (Rensselaer Polytechnic Institute), Y. Morishita (Kyoto University), J. M. Park (Oak Ridge National Laboratory), S. Wiesen (Dutch Institute for Fundamental Energy Research—DIFFER), M. Zhao (Lawrence Livermore National Laboratory), and A. Zhurba (Next Step Fusion). The authors would also like to thank the mini-conference attendees for contributing to stimulating discussions and the APS DPP for allowing the use of recordings to prepare this overview. The authors acknowledge support from various national laboratories, research institutions, and industries pioneering the development of digital twins. Finally, the first two authors thank General Atomics for their financial support in preparing this manuscript.

The authors have no conflicts to disclose.

D. P. Schissel: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal). R. M. Nazikian: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal). T. Gibbs: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. A. B. Zylstra, O. A. Hurricane, D. A. Callahan, A. L. Kritcher, J. E. Ralph, H. F. Robey, J. S. Ross, C. V. Young, K. L. Baker, D. T. Casey et al., “Burning plasma achieved in inertial fusion,” Nature 601, 542–548 (2022).
2. K. Tischler, see https://euro-fusion.org/eurofusion-news/dte3record/ for “Breaking new ground: JET Tokamak's latest fusion energy record shows mastery of fusion processes” (EUROfusion, February 8, 2024).
3. United States Government Accountability Office, see https://www.gao.gov/assets/gao-23-105813.pdf for “Technology assessment fusion energy” (2022).
4. Committee on a Strategic Plan for U.S. Burning Plasma Research, see https://nap.nationalacademies.org/download/25331 for “Final report” (2019).
5. M. Grieves and J. Vickers, “Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems,” in Transdisciplinary Perspectives on Complex Systems (Springer, 2017).
6. S. Boschert and R. Rosen, “Digital twin—The simulation aspect,” in Mechatronic Futures (Springer, 2016), pp. 59–74.
7. E. J. Tuegel, A. R. Ingraffea, T. G. Eason, and S. M. Spottswood, “Reengineering aircraft structural life prediction using a digital twin,” Int. J. Aerosp. Eng. 2011, 154798.
8. A. Fuller, Z. Fan, C. Day, and C. Barlow, “Digital twin: Enabling technologies, challenges and open research,” IEEE Access 8, 108952–108971 (2020).
9. N. Crespi, A. T. Drobot, and R. Minerva, The Digital Twin (Springer International Publishing, 2023).
10. See https://www.energy.gov/articles/doe-announces-new-decadal-fusion-energy-strategy for “Developing a bold vision for commercial fusion energy.”
11. See https://www.energy.gov/fasst for “Frontiers in Artificial Intelligence for Science, Security and Technology (FASST).”
12. W. M. E. Ellis, L. Reali, A. Davis, H. M. Brooks, I. Katramados, A. J. Thornton, R. J. Akers, and S. L. Dudarev, “Mechanical model for a full fusion tokamak enabled by supercomputing,” Nucl. Fusion 65, 036033 (2025).
13. T. Looby, M. Reinke, A. Wingen, J. Menard, S. Gerhardt, T. Gray, D. Donovan, E. Unterberg, J. Klabacha, and M. Messineo, “A software package for plasma-facing component analysis and design: HEAT,” Fusion Sci. Technol. 78(1), 10–27 (2022).
14. M. Landreman, J. Y. Choi, C. Alves, P. Balaprakash, R. M. Churchill, R. Conlin, and G. Roberg-Clark, “How does ion temperature gradient turbulence depend on magnetic geometry: Insights from data and machine learning,” arXiv:2502.11657 (2025).
15. C. S. Furia and R. M. Churchill, “Normalizing flows for likelihood-free inference with fusion simulations,” Plasma Phys. Controlled Fusion 64, 104003 (2022).
16. D. C. Pace, M. A. Van Zeeland, B. Fishler, and C. Murphy, “Consideration of neutral beam loss in the design of a tokamak helicon antenna,” Fusion Eng. Des. 112, 14–20 (2016).
17. T. B. Amara, S. P. Smith, Z. A. Xing, S. S. Denk, A. Deshpande, A. O. Nelson, C. Simpson, E. W. DeShazer, T. F. Neiser, O. Antepara et al., “Accelerating discoveries at DIII-D with the integrated research infrastructure,” Front. Phys. 12, 1524041 (2025).
18. Y. Morishita, S. Murakami, M. Yokoyama, and G. Ueno, “ASTI: Data assimilation system for particle and heat transport in toroidal plasmas,” Comput. Phys. Commun. 274, 108287 (2022).
19. Y. Morishita, S. Murakami, M. Yokoyama, and G. Ueno, “Data assimilation and control system for adaptive model predictive control,” J. Comput. Sci. 72, 102079 (2023).
20. Y. Morishita, S. Murakami, N. Kenmochi, H. Funaba, I. Yamada, Y. Mizuno, K. Nagahara, H. Nuga, R. Seki, M. Yokoyama et al., “First application of data assimilation-based control to fusion plasma,” Sci. Rep. 14, 137 (2024).
21. J. M. Kwon, H. Choi, J. S. Ki, S. Y. Park, S. H. Park, Y. J. Kim, H. Cho, S. Kim, H. S. Chae, K. S. Lee et al., “Development of a virtual tokamak platform,” Fusion Eng. Des. 184, 113281 (2022).
22. J. M. Kwon, C. Lee, T. Rhee, M. Woo, J. Park, C. Lee, D. Kim, J. Ki, H. Choi, C. Park et al., “Progress in digital twin development of virtual tokamak platform,” IEEE Trans. Plasma Sci. 52, 3910 (2024).
23. T. Moon, T. Rhee, J. M. Kwon, and E. Yoon, “Development of novel collision detection algorithms for the estimation of fast ion losses in tokamak fusion device,” Comput. Phys. Commun. 309, 109490 (2025).
24. See https://www.opencascade.com/ for information on the Open Cascade Technology.
25. T. F. Beernaert, M. R. de Baar, L. F. P. Etman, I. G. J. Classen, and M. de Backet, “Managing the complexity of plasma physics in control system engineering,” Fusion Eng. Des. 203, 114436 (2024).
26. T. O. S. J. Bosman, F. Koechi, A. Ho, M. R. de Baar, D. Krishnamoorthy, and M. van Berkel, “Integrated model control simulations of the electron density profile and the implications of using multiple discrete pellet injectors for control,” Nucl. Fusion 63, 126047 (2023).
27. C. A. Orrico, M. van Berkel, T. O. S. J. Bosman, W. P. M. H. Heemels, and D. Krishnamoorthy, “Mixed-integer MPC strategies for fueling and density control in fusion tokamaks,” IEEE Control Syst. Lett. 7, 1897–1902 (2023).
28. E. Maljaars, F. Felici, M. R. de Baar, J. van Dongen, G. M. D. Hogeweij, P. J. M. Geelen, and M. Steinbuch, “Control of the tokamak safety factor profile with time-varying constraints using MPC,” Nucl. Fusion 55, 023001 (2015).
29. O. Meneghini, T. Slendebroek, B. C. Lyons, K. McLaughlin, J. McClenaghan, L. Stagner, J. Harvey, T. F. Neiser, A. Chiozzi, G. Dose et al., “FUSE (Fusion Synthesis Engine): A next generation framework for integrated design of fusion pilot plants,” arXiv:2409.05894 (2024).
30. See https://fuse.help for information on the FUSE software toolkit.
31. R. Clark, M. Nurgaliev, E. Khairutdinov, G. Subbotin, A. Welander, and D. Orlov, “Validation of NSFsim as a Grad-Shafranov equilibrium solver at DIII-D,” Fusion Eng. Des. 211, 114765 (2025).
32. S. Guizzo, M. Drabinskiy, C. Hansen, A. G. Kachkin, E. N. Khairutdinov, A. O. Nelson, M. R. Nurgaliev, M. Pharr, G. Subbotin, and C. Paz-Soldan, “Electromagnetic system conceptual design for a negative triangularity tokamak,” arXiv:2501.14682 (2025).
33. G. M. Staebler, J. E. Kinsey, and R. E. Waltz, “Gyro-Landau fluid equations for trapped and passing particles,” Phys. Plasmas 12, 102508 (2005).
34. N. B. Marushchenko, Y. Turkin, and H. Maassberg, “Ray-tracing code TRAVIS for ECR heating, EC current drive, and ECE diagnostic,” Comput. Phys. Commun. 185, 165 (2014).
35. J. Citrin, I. Goodfellow, A. Raju, J. Chen, J. Degrave, C. Donner, F. Felici, P. Hamel, A. Huber, D. Nikulin et al., “TORAX: A fast and differentiable tokamak transport simulator in JAX,” arXiv:2406.06718 (2024).
36. J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang, see http://github.com/google/jax/ for “JAX: Composable transformations of Python+NumPy programs” (2018).
37. F. Felici and O. Sauter, “Non-linear model-based optimization of actuator trajectories for tokamak plasma profile control,” Plasma Phys. Controlled Fusion 54(2), 025002 (2012).
38. J. M. Park, J. R. Ferron, C. Holcomb, R. Buttery, W. M. Solomon, D. B. Batchelor, W. Elwasif, D. L. Green, K. Kim, O. Meneghini et al., “Integrated modeling of high βN steady state scenario on DIII-D,” Phys. Plasmas 25, 012506 (2018).
39. P. B. Snyder, R. J. Groebner, J. W. Hughes, T. H. Osborne, M. Beurskens, A. W. Leonard, H. R. Wilson, and X. Q. Xu, “A first-principles predictive model of the pedestal height and width: Development, testing, and ITER optimization with the EPED model,” Nucl. Fusion 51, 103016 (2011).
40. M. S. Shephard, J. Merson, O. Sahni, A. E. Castillo, A. Y. Joshi, D. D. Nath, U. Riaz, E. S. Seol, C. W. Smith, C. Zhang et al., “Unstructured mesh tools for magnetically confined fusion system simulations,” Eng. Comput. 40, 3319–3336 (2024).
41. See https://github.com/SCOREC/pcms for information on the Parallel Coupler for Multimodel Systems.
42. D. Eldon, H. Anand, J. G. Bak, J. Barr, S. H. Hahn, J. H. Jeong, H. S. Kim, H. H. Lee, A. W. Leonard, and B. Sammuli, “Enhancement of detachment control with simplified real-time modeling on the KSTAR tokamak,” Plasma Phys. Controlled Fusion 64, 075002 (2022).
43. B. Zhu, M. Zhao, X. Q. Xu, A. Gupta, K. B. Kwon, X. Ma, and D. Eldon, “Latent space mapping: Revolutionizing predictive models for divertor plasma detachment control,” arXiv:2502.19654 (2025).
44. B. Rus, F. Batysta, J. Cap, M. Divoky, M. Fibrich, M. Griffiths, R. Haley, T. Havlicek, M. Hlavac, J. Hrebicek et al., “Outline of the ELI-Beamlines facility,” in Proceedings SPIE 8080 (SPIE, 2011).
45. G. W. Collins, C. McGuffey, M. Jaris, D. Vollmer, A. Dautt-Silva, E. Linsenmayer, A. Keller, J. C. Ramirez, B. Sammuli, M. Margo, and M. J.-E. Manuel, “GALADRIEL: A facility for advancing engineering science relevant to rep-rated high energy density physics and inertial fusion energy experiments,” Rev. Sci. Instrum. 95, 113501 (2024).
46. M. Crawford and J. Barraza, “Scorpius: The development of a new multi-pulse radiographic system,” in IEEE 21st International Conference on Pulsed Power (PPC) (IEEE, 2017), pp. 1–6.
47. D. A. Spong, M. A. Van Zeeland, W. W. Heidbrink, X. Du, J. Varela, L. Garcia, and Y. Ghai, “Nonlinear dynamics and transport driven by energetic particle instabilities using a gyro-Landau closure model,” Nucl. Fusion 61, 116061 (2021).
48. D. S. Darrow, S. J. Zweben, Z. Chang, C. Z. Cheng, M. D. Diesso, E. D. Fredrickson, E. Mazzucato, R. Nazikian, C. K. Phillips, and S. Popovichev, “Observations of neutral beam and ICRF tail ion losses due to Alfven modes in TFTR,” Nucl. Fusion 37, 939 (1997).
49. R. Shousha, J. Seo, K. Erickson, Z. Xing, S. K. Kim, J. Abbate, and E. Kolemen, “Machine learning-based real-time kinetic profile reconstruction in DIII-D,” Nucl. Fusion 64, 026006 (2024).
50. S. K. Kim, R. Shousha, S. M. Yang, Q. Hu, S. H. Hahn, A. Jalalvand, J. K. Park, N. C. Logan, A. O. Nelson, Y. S. Na et al., “Highest fusion performance without harmful edge energy bursts in tokamaks,” Nat. Commun. 15, 3990 (2024).
51. J. Seo, S. K. Kim, A. Jalalvand, R. Conlin, A. Rothstein, J. Abbate, K. Erickson, J. Wai, R. Shousha, and E. Kolemen, “Avoiding fusion plasma tearing instability with deep reinforcement learning,” Nature 626, 746–751 (2024).
52. A. Jalalvand, S. K. Kim, J. Seo, Q. Hu, M. Curie, P. Steiner, A. O. Nelson, Y. S. Na, and E. Kolemen, “Multimodal super-resolution: Discovering hidden physics and its application to fusion plasmas,” arXiv:2405.05908 (2024).
53. S. Wiesen, D. Reiter, V. Kotov, M. Baelmans, W. Dekeyser, A. S. Kukushkin, S. W. Lisgo, R. A. Pitts, V. Rozhansky, G. Saibene et al., “The new SOLPS-ITER code package,” J. Nucl. Mater. 463, 480–484 (2015).
54. S. Dasbach and S. Wiesen, “Towards fast surrogate models for interpolation of tokamak edge plasmas,” Nucl. Mater. Energy 34, 101396 (2023).
55. Y. Poels, G. Derks, E. Westerhof, K. Minartz, S. Wiesen, and V. Menkovski, “Fast dynamic 1D simulation of divertor plasmas with neural PDE surrogates,” Nucl. Fusion 63, 126012 (2023).
56. Z. Li, N. B. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar, “Fourier neural operators for parametric partial differential equations,” in 9th International Conference on Learning Representations (2021), https://openreview.net/forum?id=c8P9NQVtmnO.
57. V. Gopakumar, S. Pamela, L. Zanisi, Z. Li, A. Gray, D. Brennand, N. Bhatia, G. Stathopoulos, M. Kusner, M. P. Deisenroth et al., “Plasma surrogate modeling using Fourier neural operator,” Nucl. Fusion 64, 056025 (2024).