Following the successful demonstration of machine learning (ML) models for laser-induced breakdown spectroscopy (LIBS) in fusion reactor fuel retention monitoring using synthetic data [Gąsior et al., Spectrochim. Acta, Part B 199, 106576 (2023)], this study focuses on extending operability to experimental data. To achieve this, simulated experimental spectra (SES) are generated and used to validate a chemical composition estimation model trained on dimensionally reduced synthetic spectral data (DRSSD). Principal component analysis is employed for dimensionality reduction of both SES and synthetic spectral data. To simulate real experimental conditions, the synthetic data, generated by a dedicated tool [M. Kastek (2022), "SimulatedLIBS," Zenodo. http://dx.doi.org/10.5281/zenodo.7369805], are processed through the transmission function of a real spectroscopy setup at IPPLM. Separate, individually optimized artificial neural network models are implemented for conversion and for chemical composition estimation. The conversion model takes dimensionally reduced SES (DR-SES) as features and DRSSD as targets. Validation using converted SES data demonstrates chemical composition predictions comparable to those from synthetic data, with the highest relative uncertainty increase below 40% and a normalized root-mean-square error of prediction below 7%. This work represents a significant step toward adapting ML-based LIBS for fuel and impurity retention monitoring in the walls of next-generation fusion devices.

Research on harnessing thermonuclear fusion for the development of a fusion power plant capable of providing clean, safe, and abundant energy is one of the most ambitious aims of science.1 The scale of the challenge is increased by the interdisciplinary nature of the effort, which combines research on plasma physics with many other fields, such as materials engineering, cryogenics, superconductivity, electronics, and chemistry. It also needs to be remembered that the present and next-step devices are still experimental facilities that require numerous diagnostic systems, often providing enormous amounts of data, which are difficult to analyze with classical measurement and computing systems.

Advantageously, recent years have brought remarkable developments in artificial intelligence based on machine learning (ML), which has permeated, and in many cases dominated, numerous scientific and non-scientific applications. A great example of this phenomenon is image recognition, which is now largely mastered by artificial neural networks (ANNs). Therefore, it is not surprising that ML has drawn the attention of scientists working on thermonuclear fusion technology.

One of the fields in which ML may be especially beneficial is the monitoring of fuel retention and of the surface chemical composition of plasma-facing components, with LIBS (laser-induced breakdown spectroscopy)2 being one of the core topics in plasma-wall interaction (PWI) research.3 Although LIBS is a mature analytical technique ubiquitous in diverse applications4–8 and much effort has been put into its adaptation for fusion technology,9–14 it faces serious challenges in thermonuclear conditions, mainly because of the harsh and hazardous environment as well as inherent difficulties with the elements under investigation [e.g., helium, which imposes very hard requirements on obtaining the local thermal equilibrium necessary for the application of calibration-free (CF) LIBS15]. Taking this into account, even though CF LIBS is at a relatively high level of development,10,14,16 the application of ML methods may be very beneficial for securing LIBS reliability in the next-step fusion reactor, ITER. This statement is further bolstered by the large amounts of LIBS data generated during measurements, a feature which is an obstacle for classical data analysis but an advantage for data-oriented ML models. The success of various ML techniques, particularly deep neural networks (DNNs), in diverse LIBS applications17–22 across various fields further strengthens the case for their investigation in the context of fusion technology.

Building upon previous work,23 which demonstrated the viability of various machine learning (ML) techniques for qualitative and quantitative LIBS characterization of fusion-relevant materials using synthetic data, this study further investigates the performance of deep neural networks (DNNs) on quasi-experimental data. These data were generated based on the real transmission function of the spectroscopy setup operated at the Institute of Plasma Physics and Laser Microfusion (IPPLM).

At the present level of development, it was decided to exploit quasi-experimental data because the proof of concept for the conversion model needed to be confirmed without the risk of failure caused by potential data inconsistency, given the scarcity of spectra measured under stable and well-settled conditions for sufficiently characterized samples. Such an approach offers the possibility of being well prepared when an abundance of real experimental data arrives with the results of the upcoming LIBS experiments at VTT and JET, which are expected to provide uniform and consistent results.

Similar to previous work, the investigation described here concerns spectra consisting of components from hydrogen, tungsten, and beryllium, which may seem not fully relevant after the exclusion of beryllium from the ITER materials and the resulting need for boronization instead. Still, it needs to be stressed that the investigation presented here must remain consistent with the EUROfusion projects in which it is ingrained, which were focused on the former ITER composition. Additionally, including beryllium brings verification against real experimental data closer, since the experiments at JET are scheduled for summer 2024 and will provide results for the beryllium–tungsten mix, whereas data for boron are still unavailable. However, the necessity of expanding the investigation to boron-containing mixes is obvious, and it will be addressed in upcoming research. With the flexibility of the SimulatedLIBS tool, it will be easy to acquire synthetic and even quasi-experimental data; for real experimental data, however, new experiments are required, which have not been scheduled yet.

The methodology section details the adopted approach, including the data preparation and preprocessing procedures. Subsequently, the results for both synthetic and quasi-experimental data are presented. Finally, the study is summarized, and potential future research directions are discussed.

Developing a reliable laser-induced breakdown spectroscopy (LIBS) method for monitoring wall chemical composition in next-step thermonuclear reactors faces two main challenges: hazardous in-vessel conditions and the scarcity of samples relevant to reactor deposits. While the withdrawal of beryllium has alleviated concerns about its toxicity and the restriction of investigations to certified labs, manufacturing samples with precise and arbitrary hydrogen isotope concentrations remains difficult. Therefore, the ability to train machine learning (ML) models on readily generated synthetic data offers significant advantages. Previous work has demonstrated this possibility, but verifying whether models trained on synthetic data can predict real experimental data remains an ongoing challenge, addressed in this manuscript.

Figure 1 highlights the core concept: the conversion model. This model takes dimensionally reduced quasi-experimental spectra (SES, simulated experimental spectra) as features and dimensionally reduced synthetic spectral data (DRSSD) as targets. Both data types are initially generated using the SimulatedLIBS package. However, to synthesize SES, the raw data, after preprocessing, undergo calibration with the transmission function of the spectroscopic measurement system at IPPLM. Both uncalibrated and calibrated data are subjected to dimensionality reduction using principal component analysis (PCA) to form DRSSD and dimensionally reduced SES, respectively. While DRSSD serve as targets for the conversion model, they also act as features for the model estimating chemical composition, where the targets are the elemental concentrations associated with the SimulatedLIBS spectra. The chemical composition estimation model is trained on DRSSD but validated using SES converted by the conversion model, which is the study's primary objective.

FIG. 1.

Principle of operation.


Both the conversion and chemical composition estimation models are implemented as deep neural networks (DNNs) in Python using TensorFlow/Keras. The model for estimating chemical composition and electron temperature comprises three dense layers with 1024 neurons each. The first two layers employ the tanh activation function, while the last layer uses relu. All model parameters are summarized in Table I.

TABLE I.

Settings for the model for the estimation of chemical composition.

Model setting  Value
Activation function  tanh for the first and second layers, relu for the third layer
Regularization  Dropout, rate = 0.3
Optimizer  Adam, lr = 0.00015, decay = 0.00001
Batch size  60
Loss function  MSE (mean squared error)
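Under the settings in Table I, the composition/Te estimation network can be sketched in Keras as follows. The four-output linear head, the placement of the dropout layers, and the use of an inverse-time learning-rate schedule standing in for the Adam decay setting are assumptions, since the manuscript does not specify these details:

```python
import tensorflow as tf

def build_composition_model(n_inputs=13, n_outputs=4):
    """Sketch of the chemical composition/Te estimation DNN (Table I).

    n_inputs = 13 principal components; n_outputs = H, W, Be, Te.
    """
    inputs = tf.keras.Input(shape=(n_inputs,))
    # Three dense layers of 1024 neurons: tanh, tanh, relu (per Table I).
    x = tf.keras.layers.Dense(1024, activation="tanh")(inputs)
    x = tf.keras.layers.Dropout(0.3)(x)          # dropout regularization, rate 0.3
    x = tf.keras.layers.Dense(1024, activation="tanh")(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    x = tf.keras.layers.Dense(1024, activation="relu")(x)
    # Assumed linear regression head producing the four target quantities.
    outputs = tf.keras.layers.Dense(n_outputs)(x)
    model = tf.keras.Model(inputs, outputs)

    # The legacy Keras "decay" option corresponds to lr0 / (1 + decay * step),
    # which InverseTimeDecay reproduces.
    lr = tf.keras.optimizers.schedules.InverseTimeDecay(
        initial_learning_rate=1.5e-4, decay_steps=1, decay_rate=1e-5)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="mse")
    return model
```

Training would then proceed with `model.fit(features, targets, batch_size=60)` as stated in Table I.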

The model utilizes the Adam optimizer,24 a stochastic gradient descent method with adaptive learning rate adjustments based on first- and second-order moment estimates, to minimize the mean squared error (MSE) cost function. Table II details the configuration of the PCA conversion model, which employs four dense layers with 1024 neurons each.

TABLE II.

Settings for the model for PCA conversion.

Model setting  Value
Activation function  tanh for all four layers
Regularization  L2, lambda = 0.005
Optimizer  Adam, lr = 0.0002, decay = 0.0001
Batch size  100
Loss function  MSE (mean squared error)

The parameters shown in the tables were tuned with a standard cross-validation procedure run in loops over candidate settings.
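The tuning loop can be sketched generically as follows; the K-fold scheme and the `build_and_train` interface are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(build_and_train, X, y, n_splits=5):
    """Mean validation MSE of a model-building function over K folds.

    build_and_train(X_train, y_train) must return a callable model,
    model(X_val) -> predictions. Running this in a loop over candidate
    hyperparameter settings selects the best configuration.
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = []
    for train_idx, val_idx in kf.split(X):
        model = build_and_train(X[train_idx], y[train_idx])
        pred = model(X[val_idx])
        scores.append(np.mean((pred - y[val_idx]) ** 2))  # fold MSE
    return float(np.mean(scores))
```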

Synthetic spectra are generated with the use of the SimulatedLIBS package and consist of 1000 samples with hydrogen contents in the range of 0%–10% and Be and W contents summing up to 100%. For each spectrum, the electron temperature and density were randomly selected from the ranges of 0.8–2 eV and 0.6–2.0 × 10¹⁷ cm⁻³, respectively. The contents were chosen as relevant for the formerly anticipated ITER materials; in future works, however, beryllium will be substituted by boron, and residual gases and contaminations will be added. The histograms with concentrations and plasma parameters are shown in Fig. 2.
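The sampling of compositions and plasma parameters can be sketched as below; uniform distributions are an assumption, since the text only states the ranges:

```python
import numpy as np

# Hypothetical sketch of the sampling scheme for the 1000 synthetic samples.
rng = np.random.default_rng(42)
n_samples = 1000

h = rng.uniform(0.0, 10.0, n_samples)                 # H content, 0-10 at. %
be = (100.0 - h) * rng.uniform(0.0, 1.0, n_samples)   # Be takes a random share
w = 100.0 - h - be                                    # W fills the rest to 100%

te = rng.uniform(0.8, 2.0, n_samples)                 # electron temperature, eV
ne = rng.uniform(0.6e17, 2.0e17, n_samples)           # electron density, cm^-3
```

Each (h, be, w, te, ne) tuple would then be passed to SimulatedLIBS to generate one spectrum.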

FIG. 2.

Distribution of chemical components and plasma parameters in synthetic spectra and comparison of the measured and calibrated spectra of the broadband calibration source (bottom-right).


The spectra are recorded in .csv files and processed in the code as Pandas data frames. To prepare DRSSD, raw spectra are first subjected to dynamics reduction with the use of a logarithmic function and then to dimensionality reduction with the use of the Scikit-Learn PCA (principal component analysis) function; 13 components are required to restore 99.9% of the information. Similarly, SES are raw spectra with reduced dynamics; however, before dimensionality reduction, they are multiplied by the transmission function of a real spectroscopic setup operating at the IPPLM. The transmission function has been recorded with the use of a spectrometer fiber-coupled to a broadband deuterium–tungsten optical source. A detailed description of the calibration process is presented in Ref. 25. The spectrum of the broadband source before and after the calibration is shown in Fig. 2 (bottom-right). After the calibration, the spectrum correctly corresponds to the spectrum provided by the lamp manufacturer.
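The preprocessing steps can be sketched as follows; the use of log1p for the dynamics reduction and the exact ordering of the transmission step are assumptions about the implementation:

```python
import numpy as np
from sklearn.decomposition import PCA

def preprocess(spectra, transmission=None, variance=0.999):
    """Dynamics reduction (log), optional transmission calibration, then PCA.

    spectra: (n_samples, n_wavelengths) array of raw intensities.
    transmission: per-wavelength transmission of the setup (SES branch),
                  or None for the purely synthetic DRSSD branch.
    """
    x = np.log1p(np.asarray(spectra, dtype=float))     # compress dynamic range
    if transmission is not None:                       # SES branch: apply the
        x = x * np.asarray(transmission, dtype=float)  # setup's transmission
    # Keep as many principal components as needed for 99.9% of the variance.
    pca = PCA(n_components=variance)
    return pca.fit_transform(x), pca
```

Calling `preprocess(raw)` yields DRSSD-like scores, while `preprocess(raw, transmission)` yields the dimensionally reduced SES.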

Before verifying the efficiency of the conversion model, it was necessary to verify the performance of the chemical composition estimation model to ascertain whether further validation of the conversion model was possible. Although a similar model was confirmed to operate correctly in previous studies, different tools were used in the present study; thus, validation was needed again. After applying the PCA algorithm to a training set of data (700 random samples), it was determined that 13 principal components covered 99.9% of the variance. This number of components was used in the subsequent steps. The remaining 300 spectra were expressed in the basis corresponding to the PCA vectors and used for cross-validation and validation. After the optimization described in Sec. II A, predictions for the contents of H, W, and Be and for Te were obtained for the validation dataset. The results are shown in Fig. 3. The uncertainties accompanying the estimation of the aforementioned quantities are gathered in Table III.

FIG. 3.

Predictions of the model for chemical composition estimation for synthetic data.

TABLE III.

Prediction uncertainties for the performance of the model for the estimation of chemical composition operating on DRSSD.

Quantity  R2_score  RMSEP, at. %, eVa  NRMSEP (%)
H  0.992  0.262  5.15
W  0.997  1.451  2.91
Be  0.997  1.424  3.16
Te  0.987  0.046a  3.39
a

For Te.

The values in the table are consistent with those obtained in former research, which confirms that the model is ready for validation against quasi-experimental data.
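The figures of merit reported in Tables III and IV can be computed as below; normalizing the RMSEP by the data range to obtain the NRMSEP is an assumption, since the manuscript does not state the normalization explicitly:

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """R^2, RMSEP, and range-normalized RMSEP (in %) for one quantity."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    rmsep = float(np.sqrt(np.mean(resid ** 2)))        # root-mean-square error
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                         # coefficient of determination
    nrmsep = 100.0 * rmsep / float(np.ptp(y_true))     # assumed: normalized by range
    return r2, rmsep, nrmsep
```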

Initially, the model for chemical composition estimation was applied to dimensionally reduced quasi-experimental data. While the predicted values did not match the real data precisely, as expected, the predicted ranges were remarkably similar to the actual data spread. This was observed for tungsten, beryllium, and Te, where the predicted ranges nearly perfectly matched the real data. For hydrogen, however, the upper limit of the predicted range was approximately 40% higher than the real data. Sample predictions are presented in Fig. 4.

FIG. 4.

Incorrect predictions of a model before conversion.


To rectify these discrepancies, the quasi-experimental spectra were processed through the conversion model, which had been previously trained on dimensionally reduced quasi-experimental data paired with their synthetic representations as targets. Notably, the quasi-experimental spectra required more components (26 compared to 13 in synthetic spectra) to capture the same level of variance. The predictions for the validation set of quasi-experimental spectra are shown in Fig. 5, with associated uncertainties presented in Table IV.
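The resulting two-stage inference can be sketched as follows; the function and argument names are hypothetical, standing in for the trained Keras models:

```python
import numpy as np

def estimate_composition(dr_ses, conversion_model, composition_model):
    """Two-stage inference sketch: convert, then estimate.

    dr_ses: dimensionally reduced quasi-experimental spectra (26 components).
    conversion_model: maps DR-SES into the 13-component DRSSD space.
    composition_model: maps DRSSD-like vectors to [H, W, Be, Te] predictions.
    """
    drssd_like = conversion_model(dr_ses)     # 26 SES components -> 13 synthetic PCs
    return composition_model(drssd_like)      # -> one [H, W, Be, Te] row per spectrum
```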

FIG. 5.

Predictions of the model after conversion.

TABLE IV.

Prediction uncertainties for the performance of the model for the estimation of chemical composition operating on converted, dimensionally reduced SES.

Quantity  R2_score  RMSEP, at. %, eVa  NRMSEP (%)
H  0.984  0.351  6.99
W  0.996  1.703  3.43
Be  0.996  1.642  3.62
Te  0.994  0.027a  1.93
a

For Te.

The results presented in Figs. 4 and 5 and Table IV confirm the effectiveness of the conversion model in correcting the initial predictions from the chemical composition estimation model. While the predicted hydrogen concentration exhibits a higher relative error compared to synthetic data (still less than 2 percentage points; see Tables III and IV), the overall normalized root-mean-square error of prediction (NRMSEP) remains below 7% (corresponding to an RMSEP of 0.351 at. %). Such behavior may be explained by the lower representation of H in the data, which may be indirectly attributed to physics: the hydrogen lines are weaker than the Be lines and not as numerous as the W lines, which leads to a weaker influence on the PCA results and, thus, a lower influence on the training process in the DNN. Nevertheless, this level of accuracy is considered satisfactory for fusion applications.

Interestingly, the uncertainties associated with beryllium and tungsten concentration predictions increased by only ∼0.5 percentage points compared to synthetic data, likely due to the larger variation in their content during training. However, the decrease in uncertainty for electron temperature estimation requires further investigation to determine whether it represents a systematic trend or a statistical anomaly.

This study presents the first application of a conversion model for dimensionally reduced quasi-experimental data relevant to LIBS-based monitoring of fuel retention and wall composition in fusion devices. Beyond its satisfactory performance in estimating chemical composition, the model also enabled accurate plasma electron temperature estimation. This capability holds particular value for testing calibration-free LIBS as a supplementary verification method, as previously documented in the literature.

The successful development of the conversion model paves the way for future endeavors utilizing real experimental data. Such data will soon become available through the LIBS for JET experiment and preparatory experiments at VTT, Finland. Admittedly, in comparison with quasi-experimental spectra, real experimental spectra are more complicated and contain more spectral features, most often resulting from the non-ideality of the measurement equipment as well as from contaminations not present in the model. This is especially relevant for spectrometers employing echelle gratings, for which transmission is a discontinuous function of the wavelength. However, there are a number of data science algorithms, such as filtering, outlier detection, feature selection, and bootstrapping, which are commonly used for similar problems in machine learning. Although it is popularly assumed that deep learning does not require preprocessing, one should remember that a model needs to be trained on data that contain the relevant information and are not affected by unwanted influences (e.g., in image recognition, there have been cases where a model learned to fit based on the file signature instead of the data).

Additionally, further work is necessary on synthetic data preparation due to the elimination of beryllium from the ITER materials and the inclusion of boron as a critical constituent, given the anticipated boronization of the tungsten wall in next-step devices. As mentioned in the introduction, taking boron into account for synthetic/quasi-experimental data is rather straightforward and is within the scope of the nearest research; expanding the investigation to experimental results will require additional experimental work.

This scientific paper has been published as part of the international project co-financed by the Polish Ministry of Science and Higher Education within the program called “PMW” for 2023. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200—EUROfusion). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.

The authors have no conflicts to disclose.

P. Gąsior: Conceptualization (lead); Data curation (equal); Formal analysis (lead); Funding acquisition (lead); Investigation (lead); Methodology (lead); Project administration (lead); Resources (equal); Software (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – original draft (lead); Writing – review & editing (lead). M. Kastek: Conceptualization (supporting); Data curation (equal); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Software (equal); Writing – original draft (supporting); Writing – review & editing (supporting). M. Ladygina: Investigation (supporting); Validation (supporting); Writing – original draft (supporting); Writing – review & editing (supporting). D. Sokulski: Conceptualization (supporting); Data curation (equal); Investigation (equal); Methodology (supporting); Resources (supporting); Software (equal); Validation (supporting); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (supporting).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

This appendix provides a concise overview of the key tools and algorithms used in the manuscript. Due to space limitations, the focus is on a practical understanding rather than a formal mathematical treatment. Readers seeking in-depth mathematical details are encouraged to consult the cited references or explore the wealth of online resources available on these topics.

1. Simulated LIBS package

SimulatedLIBS provides a Python class which allows one to simulate, store, and visualize LIBS spectra based on the NIST LIBS database. In principle, it serves as a tool to generate sets of LIBS data with arbitrary chemical composition, plasma parameters, and observation resolution.

After installation in a manner standard in Python (pip install SimulatedLIBS), one can simulate a spectrum with given parameters by instantiating the simulation.SimulatedLIBS class with the following parameters:

  • Te: electron temperature [eV]

  • Ne: electron density [cm⁻³]

  • elements: list of elements

  • percentages: list of element concentrations

  • resolution

  • wavelength range: low_w, upper_w

  • maximal ion charge: max_ion_charge

  • webscraping: “static” or “dynamic.”

An example of the command for spectra generation is the following:

libs = simulation.SimulatedLIBS(Te=1.0,
                                Ne=10**17,
                                elements=['W', 'Fe', 'Mo'],
                                percentages=[50, 25, 25],
                                resolution=1000,
                                low_w=200,
                                upper_w=1000,
                                max_ion_charge=3,
                                webscraping="static")

In dynamic webscraping, the results for the individual charge states of individual elements may be accessed separately through a Pandas data frame named ion_spectra and plotted with the libs.plot_ion_spectra() command [libs here is the object created by the simulation.SimulatedLIBS(…) call above]. If webscraping is set to "static", only the integrated contribution of the elements is available.

Moreover, it is also possible to generate spectra for a set of samples with chemical compositions provided in a .csv file and with a random distribution of plasma parameters given by ranges of Te and Ne. This and other capabilities of SimulatedLIBS are described in detail in its documentation at PyPI.26

2. Deep artificial neural networks (Deep ANNs)
Deep ANNs are a powerful tool in machine learning, inspired by the structure and function of the human brain. To understand deep ANNs, a simpler concept such as linear regression may be used as the starting point. Linear regression is a well-known statistical method for modeling a relationship between a dependent variable (y) and one or more independent variables (x). It assumes a linear relationship between x and y, represented by the following equation:
y = mx + b,
where m is the slope and b is the y-intercept. Linear regression is a good starting point, but it cannot capture complex relationships between variables, especially nonlinear dependencies.
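As a concrete illustration, the slope and intercept of this equation can be recovered by least squares; the data values here are arbitrary examples:

```python
import numpy as np

# Fit y = m*x + b by least squares on noiseless example data (m = 2, b = 1).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
m, b = np.polyfit(x, y, 1)   # degree-1 polynomial fit returns [slope, intercept]
```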

To deal with these issues, an ANN is a network of interconnected artificial neurons (see Fig. 6), loosely mimicking biological neurons. Each neuron receives input signals, processes them using an activation function, and outputs a signal. The activation function introduces nonlinearity, allowing the network to model more complex relationships.

FIG. 6.

Topology of a DNN with two hidden layers, with inputs i1, i2, …, in and outputs o1, o2, …, on. Arrows correspond to weights, which are optimized during training; input signals multiplied by weights are summed up, subjected to the activation function φ, and propagated to the next layer.


An ANN typically consists of three types of layers:

  • Input Layer: Receives input data.

  • Hidden Layers: There can be one or more hidden layers, each containing multiple artificial neurons. These layers perform the main computations.

  • Output Layer: Produces the final output, like a prediction or classification.

The connections between neurons (blue arrows in Fig. 6) have weights that determine the strength of the signal. These weights are adjusted during the training process, which employs the backpropagation algorithm. Training a deep ANN involves feeding it with data and adjusting the weights to minimize the error between the predicted and actual outputs; backpropagation is the critical algorithm for this process. It works by calculating the error at the output layer and propagating it backward through the network, adjusting the weights of each neuron along the way. This iterative process allows the network to learn complex patterns from the data.
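The forward pass, backward error propagation, and weight updates described above can be sketched for a single hidden layer in plain NumPy; the toy task and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: predict sin of the sum of three inputs.
X = rng.normal(size=(64, 3))
y = np.sin(X.sum(axis=1, keepdims=True))

# One hidden layer with tanh activation, linear output.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

def loss():
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

initial_loss = loss()
for _ in range(500):
    # Forward pass: weighted sums followed by the activation function.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = (pred - y) / len(X)              # error signal at the output layer
    # Backward pass: propagate the error from the output layer inward.
    gW2 = h.T @ err; gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dh; gb1 = dh.sum(axis=0)
    # Gradient-descent weight update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

final_loss = loss()
```

Each iteration lowers the mean squared error, which is exactly the loss-minimization behavior described above.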

Advantages of Deep ANNs:

  • High Capacity: Deep ANNs, with their multiple hidden layers, can learn complex relationships in data that traditional models struggle with.

  • Feature Extraction: Deep ANNs can automatically learn features from raw data, eliminating the need for manual feature engineering.

  • Versatility: Deep ANNs can be applied to various tasks, including image recognition, natural language processing, and time series forecasting.

Drawbacks of Deep ANNs:

  • Complexity: Deep ANNs can be complex to design and train, requiring significant computational power and expertise.

  • Data Hunger: Deep ANNs often require large amounts of data to train effectively.

  • Black Box Problem: Deep ANNs can be difficult to interpret, making it challenging to understand how they arrive at their outputs.

Presently, thanks to the increase in the computational capabilities of modern computers, artificial neural networks have become widespread and are under dynamic development in many fields, among which the most famous are LLMs (large language models). Because of this, there are many resources and numerous tools; however, the topic is too broad to be delved into in this manuscript, especially as much information may easily be found on the Internet. As a very good source of knowledge, the book "Deep Learning with Python"27 by the creator of the Keras open-source library may be recommended.

3. PCA—Principal component analysis

Principal component analysis (PCA) is a popular method of dimensionality reduction commonly used not only in spectroscopic data processing but also in machine learning in general. It reduces the amount of data passed to subsequent algorithms and models while preserving the significant patterns and trends present in the original dataset. Admittedly, the algorithm belongs to the class of lossy conversions, which means that a certain amount of information is lost; however, this loss can be controlled through the number of principal components. Most often, this control is exercised by specifying the required amount of cumulative explained variance and increasing the number of components until this level is reached. Cumulative explained variance is a measure of how much of the total variance in the original dataset is explained by the contribution of the retained principal components.

Principal component analysis (PCA) involves the construction of new latent variables, termed principal components (PCs), as linear combinations of the original data. These combinations are designed to be uncorrelated and capture the maximum variance in the data with a decreasing amount of information retained in subsequent components. In essence, PCA transforms a set of p-dimensional data points into a new coordinate system defined by p PCs, where the first PC accounts for the largest proportion of the total variance, followed by the second PC, and so on.

In PCA, after initial preprocessing by mean-centering the input vectors (if the data from a single measurement form a column vector, then the averaging is performed over the rows of the matrix composed of them) and normalization/standardization, the input data are used to prepare the covariance matrix, an n × n symmetric matrix (where n is the number of variables) whose entries are the covariances associated with all possible pairs of the initial variables. Principal components (PCs) are identified through the eigenvectors and eigenvalues of the covariance matrix. Eigenvectors represent the directions of maximum variance in the data, capturing the most informative features. In essence, they define the axes of a new coordinate system within which the data reside. Eigenvalues, associated with each eigenvector, quantify the amount of variance explained by the corresponding PC. Ranking the eigenvalues in descending order allows the most significant PCs, which capture the largest proportion of the total data variance, to be identified.
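The procedure above (centering, covariance matrix, eigendecomposition, ranking by explained variance) can be sketched directly in NumPy; this is a minimal illustration, not the Scikit-Learn implementation used in the study:

```python
import numpy as np

def pca_components(X, variance=0.999):
    """PCA via eigendecomposition of the covariance matrix.

    Returns the projected scores and the number of PCs needed to reach
    the requested cumulative explained variance.
    """
    Xc = X - X.mean(axis=0)                 # center each variable
    cov = np.cov(Xc, rowvar=False)          # n x n symmetric covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)    # eigh: suited to symmetric matrices
    order = np.argsort(eigval)[::-1]        # rank by explained variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    cumulative = np.cumsum(eigval) / eigval.sum()
    k = int(np.searchsorted(cumulative, variance) + 1)
    return Xc @ eigvec[:, :k], k            # scores in the new coordinate system
```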

1. R. Aymar, P. Barabaschi, and Y. Shimomura, "The ITER design," Plasma Phys. Controlled Fusion 44(5), 519–565 (2002).
2. R. Noll, Laser-Induced Breakdown Spectroscopy (Springer, Berlin, 2012), p. 505.
3. S. Brezinsek, J. W. Coenen, T. Schwarz-Selinger, K. Schmid, A. Kirschner, A. Hakola, F. L. Tabares, H. J. Van Der Meiden, M.-L. Mayoral, M. Reinhart, E. Tsitrone et al., "Plasma-wall interaction studies within the EUROfusion consortium: Progress on plasma-facing components development and qualification," Nucl. Fusion 57(11), 116041 (2017).
4.
J.
Thomas
and
H. C.
Joshi
, “
Review on laser-induced breakdown spectroscopy: Methodology and technical developments
,”
Appl. Spectrosc. Rev.
59
(
1
),
124
155
(
2023
).
5.
Z.
Wang
,
M. S.
Afgan
,
W.
Gu
,
Y.
Song
,
Y.
Wang
,
Z.
Hou
,
W.
Song
, and
Z.
Li
, “
Recent advances in laser-induced breakdown spectroscopy quantification: From fundamental understanding to data processing
,”
TrAC, Trends Anal. Chem.
143
,
116385
(
2021
).
6.
S. J.
Rehse
,
H.
Salimnia
, and
A. W.
Miziolek
, “
Laser-induced breakdown spectroscopy (LIBS): An overview of recent progress and future potential for biomedical applications
,”
J. Med. Eng. Technol.
36
(
2
),
77
89
(
2012
).
7.
J. D.
Pedarnig
,
S.
Trautner
,
S.
Grünberger
,
N.
Giannakaris
,
S.
Eschlböck-Fuchs
, and
J.
Hofstadler
, “
Review of element analysis of industrial materials by in-line laser—induced breakdown spectroscopy (LIBS)
,”
Appl. Sci.
11
(
19
),
9274
(
2021
).
8.
S.
Maurice
,
R. C.
Wiens
,
M.
Saccoccio
,
B.
Barraclough
,
O.
Gasnault
,
O.
Forni
,
N.
Mangold
,
D.
Baratoux
,
S.
Bender
,
G.
Berger
,
J.
Bernardin
,
M.
Berthé
,
N.
Bridges
,
D.
Blaney
,
M.
Bouyé
,
P.
Caïs
,
B.
Clark
,
S.
Clegg
,
A.
Cousin
,
D.
Cremers
,
A.
Cros
,
L.
Deflores
,
C.
Derycke
,
B.
Dingler
,
G.
Dromart
,
B.
Dubois
,
M.
Dupieux
,
E.
Durand
,
L.
D'Uston
,
C.
Fabre
,
B.
Faure
,
A.
Gaboriaud
,
T.
Gharsa
,
K.
Herkenhoff
,
E.
Kan
,
L.
Kirkland
,
D.
Kouach
,
J. L.
Lacour
,
Y.
Langevin
,
J.
Lasue
,
S. L.
Mouélic
,
M.
Lescure
,
E.
Lewin
,
D.
Limonadi
,
G.
Manhés
,
P.
Mauchien
,
C.
McKay
,
P. Y.
Meslin
,
Y.
Michel
,
E.
Miller
,
H. E.
Newsom
,
G.
Orttner
,
A.
Paillet
,
L.
Parés
,
Y.
Parot
,
R.
Pérez
,
P.
Pinet
,
F.
Poitrasson
,
B.
Quertier
,
B.
Sallé
,
C.
Sotin
,
V.
Sautter
,
H.
Séran
,
J. J.
Simmonds
,
J. B.
Sirven
,
R.
Stiglich
,
N.
Striebig
,
J. J.
Thocaven
,
M. J.
Toplis
, and
D.
Vaniman
, “
The ChemCam instrument suite on the Mars Science Laboratory (MSL) rover: Science objectives and mast unit description
,”
Space Sci. Rev
170
(
1–4
),
95
166
(
2012
).
9.
G. S.
Maurya
,
A.
Marín-Roldán
,
P.
Veis
,
A. K.
Pathak
, and
P.
Sen
, “
A review of the LIBS analysis for the plasma-facing components diagnostics
,”
J. Nucl. Mater.
541
,
152417
(
2020
).
10.
H. J.
van der Meiden
,
S.
Almaviva
,
J.
Butikova
,
V.
Dwivedi
,
P.
Gasior
,
W.
Gromelski
,
A.
Hakola
,
X.
Jiang
,
I.
Jõgi
,
J.
Karhunen
,
M.
Kubkowska
,
M.
Laan
,
G.
Maddaluno
,
A.
Marín-Roldán
,
P.
Paris
,
K.
Piip
,
M.
Pisarčík
,
G.
Sergienko
,
M.
Veis
,
P.
Veis
,
S.
Brezinsek
, and
the EUROfusion WP PFC Team
, “
Monitoring of tritium and impurities in the first wall of fusion devices using a LIBS based diagnostic
,”
Nucl. Fusion
61
(
12
),
125001
(
2021
).
11.
P.
Gąsior
, “
Laser-induced breakdown spectroscopy as diagnostics for plasma-wall interactions monitoring in tokamaks
,”
Acta Phys. Pol., A
138
(
4
),
601
(
2020
).
12.
S.
Almaviva
,
L.
Caneve
,
F.
Colao
,
G.
Maddaluno
,
N.
Krawczyk
,
A.
Czarnecka
,
P.
Gasior
,
M.
Kubkowska
, and
M.
Lepek
, “
Measurements of deuterium retention and surface elemental composition with double pulse laser induced breakdown spectroscopy
,”
Phys. Scr.
T167
(
T167
),
014043
(
2016
).
13.
V.
Philipps
,
A.
Malaquias
,
A.
Hakola
,
J.
Karhunen
,
G.
Maddaluno
,
S.
Almaviva
,
L.
Caneve
,
F.
Colao
,
E.
Fortuna
,
P.
Gasior
,
M.
Kubkowska
,
A.
Czarnecka
,
M.
Laan
,
A.
Lissovski
,
P.
Paris
,
H. J.
van der Meiden
,
P.
Petersson
,
M.
Rubel
,
A.
Huber
,
M.
Zlobinski
,
B.
Schweer
,
N.
Gierse
,
Q.
Xiao
, and
G.
Sergienko
, “
Development of laser-based techniques for in situ characterization of the first wall in ITER and future fusion devices
,”
Nucl. Fusion
53
(
9
),
093002
(
2013
).
14.
V.
Dwivedi
,
A.
Marín-Roldán
,
J.
Karhunen
,
P.
Paris
,
I.
Jõgi
,
C.
Porosnicu
,
C. P.
Lungu
,
H.
van der Meiden
,
A.
Hakola
, and
P.
Veis
, “
CF-LIBS quantification and depth profile analysis of Be coating mixed layers
,”
Nucl. Mater. Energy
27
,
100990
(
2021
).
15.
F.
Poggialini
,
B.
Campanella
,
B.
Cocciaro
,
G.
Lorenzetti
,
V.
Palleschi
, and
S.
Legnaioli
, “
Catching up on calibration-free LIBS
,”
J. Anal. At. Spectrom.
38
(
9
),
1751
1771
(
2023
).
16.
F.
Colao
,
S.
Almaviva
,
L.
Caneve
,
G.
Maddaluno
,
T.
Fornal
,
P.
Gasior
,
M.
Kubkowska
, and
M.
Rosinski
, “
LIBS experiments for quantitative detection of retained fuel
,”
Nucl. Mater. Energy
12
,
133
138
(
2017
).
17.
T.
Zhang
,
H.
Tang
, and
H.
Li
, “
Chemometrics in laser-induced breakdown spectroscopy
,”
J. Chemom.
32
(
11
),
1
18
(
2018
).
18.
Y.
Huang
,
S. S.
Harilal
,
A.
Bais
, and
A. E.
Hussein
, “
Progress toward machine learning methodologies for laser-induced breakdown spectroscopy with an emphasis on soil analysis
,”
IEEE Trans. Plasma Sci.
51
(
7
),
1729
1749
(
2023
).
19.
C.
Sun
,
W.
Xu
,
Y.
Tan
,
Y.
Zhang
,
Z.
Yue
,
S.
Shabbir
,
M.
Wu
,
L.
Zou
,
F.
Chen
, and
J.
Yu
, “
From machine learning to transfer learning in laser-induced breakdown spectroscopy: The case of rock analysis for mars exploration
,”
Sci. Rep.
11
,
21379
(
2021
).
20.
L. N.
Li
,
X. F.
Liu
,
F.
Yang
,
W. M.
Xu
,
J. Y.
Wang
, and
R.
Shu
, “
A review of artificial neural network based chemometrics applied in laser-induced breakdown spectroscopy analysis
,”
Spectrochim. Acta, Part B
180
,
106183
(
2021
).
21.
Y.
Zhao
,
M.
Lamine Guindo
,
X.
Xu
,
M.
Sun
,
J.
Peng
,
F.
Liu
, and
Y.
He
, “
Deep learning associated with laser-induced breakdown spectroscopy (LIBS) for the prediction of lead in soil
,”
Appl. Spectrosc.
73
(
5
),
565
573
(
2019
).
22.
C.
Sun
,
Y.
Tian
,
L.
Gao
,
Y.
Niu
,
T.
Zhang
,
H.
Li
,
Y.
Zhang
,
Z.
Yue
,
N.
Delepine-Gilon
, and
J.
Yu
, “
Machine learning allows calibration models to predict trace element concentration in soils with generalized LIBS spectra
,”
Sci. Rep.
9
(
1
),
11363
(
2019
).
23.
P.
Gąsior
,
W.
Gromelski
,
M.
Kastek
, and
A.
Kwaśnik
, “
Analysis of hydrogen isotopes retention in thermonuclear reactors with LIBS supported by machine learning
,”
Spectrochim. Acta, Part B
199
,
106576
(
2023
).
24.
D. P.
Kingma
and
J. L.
Ba
, “
Adam: A method for stochastic optimization
,” in
3rd International Conference on Learning Representations, ICLR 2015—Conference Track Proceedings
,
2014
.
25.
W.
Gromelski
and
P.
Gąsior
, “
Absolute calibration of LIBS data
,” in
Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments
, edited by
R. S.
Romaniuk
and
M.
Linczuk
(
SPIE
,
2018
), p.
83
.
26.
M.
Kastek
(
2022
), “SimulatedLIBS,”
Zenodo
. http://dx.doi.org/10.5281/zenodo.7369805
27.
F.
Chollet
,
Deep Learning with Python
(
Manning Publications
,
New York
,
2017
).