The high power laser system at Extreme Light Infrastructure—Nuclear Physics has demonstrated 10 PW power shot capability. It can also deliver beams with powers of 1 PW and 100 TW to several experimental areas dedicated to different sets of experiments. An array of diagnostics is deployed to characterize the laser beam spatial profiles and to monitor their evolution through the amplification stages. Some of the essential near-field and far-field profiles acquired with CCD cameras are monitored constantly on a large-screen television for visual observation and for decision making concerning the control and tuning of the laser beams. Here, we present results on beam profile classification obtained from datasets with over 14 600 near-field and far-field images acquired during two days of laser operation at 1 PW and 100 TW. We utilize supervised and unsupervised machine learning models based on trained neural networks and an autoencoder. These results constitute an early demonstration of machine learning being used as a tool for data classification in the laser system.

High-power pulsed lasers allow us to push forward the frontiers of science by enabling experiments that improve our understanding of physics at high energy densities and by opening new techniques for particle acceleration or radiation generation.1,2 Some examples of experiments with PW-class lasers include the efficient acceleration of electrons and ions to energies comparable to those obtained in conventional accelerators, the generation of intense fluxes of x rays and gamma rays, and the creation of hot transient plasmas, which can simulate the processes observed in astrophysical objects.3–5

Typically, a high-power laser incorporates a large number of optical components, such as mirrors, gratings, crystals, and filters, in order to shape the laser pulse to the desired parameters. The alignment, calibration, and tuning of increasingly complex laser systems are time-intensive and require human resources and effort, which can become a substantial burden on rapid experimental progress. For the Extreme Light Infrastructure—Nuclear Physics (ELI-NP) PW-class lasers, the energy per pulse is of order 25–250 J, while the pulse duration is 25 fs.6,7 In typical experiments, the laser beam is focused down to a spot of a few to tens of micrometers, resulting in intensities at the level of 10¹⁸–10²³ W cm⁻². The interaction of the laser pulse with solid or gaseous targets results in the formation of a plasma with a lifetime of the order of tens to hundreds of picoseconds, which emits high fluxes of photons over a wide range of energies, from visible and UV to x rays. Any change in the optics configuration, or even in the position of a single component, can have a dramatic influence on the beam properties and the output of the experiment. Moreover, day-to-day fluctuations of the temperature and humidity in the experimental area or in the laser bay can yield beams with slightly different parameters.

The application of machine learning (ML) tools to such highly complex laser systems can assist researchers in expediting and improving experimental setups and demonstrate the usefulness of machine-informed solutions. ML is a branch of artificial intelligence (AI) that aims to process data using algorithms that mimic human learning or the collective functioning of biological neurons. In physics research, the utilization of ML is progressing rapidly, encompassing topics from lasers to plasmas, fusion devices, and particle accelerators. Deep learning, a branch of ML, focuses on artificial neural networks (ANNs) with multiple hidden layers.8 Convolutional neural networks (CNNs), a class of ANNs, excel at image recognition and feature extraction.9 ML can be classified into supervised learning (SL) and unsupervised learning (UL).10 In SL, a training set with labeled data is used to teach the machine the characteristics of the data and generate machine-informed outputs. The model learns over time, and the algorithm minimizes a loss function to optimize predictions. In UL, the input data are unlabeled, and the model analyzes and clusters the datasets by identifying similarities, characteristics, or hidden patterns.

ANNs have a distinctive ability to classify and predict features with high accuracy when provided with large datasets. This condition is fulfilled in our present study as we are dealing with a large number of images provided by imaging diagnostics such as CCD cameras. This powerful ability allows us to take a first step toward turning an established AI technique into future scientific discovery.

ML has demonstrated a unique capability to provide new insights and solutions to a diversity of problems across areas of physics.11 Only recently have ML tools been employed to classify laser beam profiles according to their characteristics (e.g., multiple spots and beams featuring vorticity).12,13 Spatiotemporal laser beam profiles can also be processed using deep convolutional networks.14–20 Furthermore, ML shows high potential in processing image datasets obtained from laser–plasma accelerators21–25 or from classical accelerators.26 Plasma imaging is a field strongly interconnected with laser–plasma accelerators and is one of the crucial diagnostics that allows for direct observation of the interaction between the high power laser and the ionized target. Deep learning can be used to classify images of laser-produced plasmas,27 plasma jets,28 or particle trajectories in plasmas.29

In this paper, we present a classification of over fourteen thousand images of laser beam profiles by employing an ANN and CNNs. This work is carried out on graphics processing units (GPUs) using ML algorithms in combination with computer vision, implemented in the Python programming language and in the MATLAB environment. We demonstrate immediate near-field (NF) and far-field (FF) image classification, with accuracy and processing time that depend on the ML tools utilized. Our work proposes a different approach to handling the imaging data produced by the diagnostics of high-power lasers. This can potentially be extended in the future to include automation of tedious yet critical tasks that are currently only possible with human expertise, such as optics-informed alignment of the laser beams. The NF and FF images are acquired by CCD cameras for the purpose of assessing the operating states of the high-power laser system (HPLS). Typically, tens of thousands of images are collected and stored daily during experiments with the HPLS. In this work, we apply the tools of AI to a subset of this database with the goal of reducing laboratory workload and minimizing human intervention.

The HPLS architecture is based on the chirped pulse amplification (CPA) scheme using a Ti:sapphire gain medium for the front-end and high energy amplifiers, ultimately yielding ultrashort pulses with durations in the 20–30 fs range.30 A sketch of the system is shown in Fig. 1, while a detailed description of the HPLS can be found in Lureau et al.31 The HPLS has two identical arms, each producing pulses with powers up to 10 PW. Here, we present data linked to the operation of only one arm.

FIG. 1.

Schematic configuration of the HPLS, with the CCD cameras’ locations and their names (e.g., FE2_S1, B11, and ATW1) and the type of images they record (e.g., NF and FF). Multi-spot images are recorded by FE2_CP and B11.

The front end (FE) generates pulses that are passed through a beam splitter, concurrently seeding the two amplification arms. The pulses produced by the FE have an energy of 10 mJ and are split equally into 5 mJ pulses along the two amplification arms. The pulse duration at this stage is about 15 ps. The pulse is then stretched in time (600–900 ps), spectrally filtered, and amplified. At the end of the chain, the pulse is passed through a large grating compressor to achieve the ultra-short pulse duration.

There are three amplification stages (Amp1, Amp2, and Amp3), each delivering a laser beam that is extracted and then compressed to the final 25 fs pulse duration. While the pulse duration is similar for the three outputs, the energy per pulse before compression and the repetition rate vary: 3.5 J at 10 Hz, 35 J at 1 Hz, and 327 J at 0.016 Hz. The energy efficiency of the compressors is about 75%. The three laser beams have powers of 100 TW, 1 PW, and 10 PW, respectively, each catering to a dedicated experimental area. The output laser beams have super-Gaussian profiles, with diameters in the experimental areas ranging from 70 mm at 100 TW to 200 mm at 1 PW and, finally, 550 mm at 10 PW. The 100 TW and 1 PW laser-driven experimental areas have been commissioned, while the full commissioning of the 10 PW experimental area is underway.

Each compressor has a diagnostic bench for monitoring the laser pulse energy and spectrum, its NF and FF, as well as its temporal contrast and duration. Dedicated CCDs continuously record the NF and FF images of all sequences of fired output laser pulses. The recorded images are stored in an HDF5 library. Some of these beam profiles are broadcast onto a large screen monitor in the laser control room in order to monitor in real time the laser beam quality and alignment. While the NF indicates the relative position of the full beam, the FF is essential for controlling the shot-to-shot beam pointing stability, which is a critical issue for laser-driven experiments at ELI-NP.

We use as input for our ML algorithms the images of the laser beam profiles recorded by the 100 TW and 1 PW output diagnostics, i.e., FE2_S1/CP, B11, B12, A12, ATW1, BTW1, ATW2, and BTW2.32 The images were recorded during two days of operation covering the initial tuning of the laser beams, alignment in the laser bay and in the experimental areas, and fired shots on targets. The images were extracted from the original HDF5 file, which contained most of the beam parameters. All images have a resolution of 128 × 102 pixels in grayscale with 6-bit color depth, except those acquired by the CCD camera FE2_S1, which have 124 × 99 pixels. A sample of images is shown in Fig. 2. The full set is composed of 7985 images with NF profiles, 2902 images with FF profiles, 3536 images with multi-spot profiles, and 183 noise images. A total of 14 606 images are employed in our classification algorithms.
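For illustration, the extraction step can be sketched with h5py as below; the file name and internal dataset path are hypothetical placeholders, since the actual schema of the HPLS data store is not described here.

```python
import h5py
import numpy as np

# Hypothetical sketch of extracting beam-profile frames from the HDF5 store.
# "hpls_diagnostics.h5" and the dataset path are placeholders, not the
# facility's actual schema.
with h5py.File("hpls_diagnostics.h5", "r") as f:
    frames = np.asarray(f["ATW1/NF/images"])  # assumed layout: (N, 102, 128)

print(frames.shape, frames.dtype)  # e.g., (7985, 102, 128) uint8
```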

FIG. 2.

Sample of the input images: (a) row with NF profiles, (b) row with FF profiles, (c) row with multi-spot profiles, and (d) row with noise images.

One can distinguish between the NF and FF profiles, images with multiple spots, which result from using a system of wedges (each wedge having a reflection factor of less than 1%) for lowering the beam energy, and blank (or noise) images, which do not contain any of the previous beam information. Due to internal reflections on the wedge faces, the NF profile can appear as a double or even triple spot, depending on the angle of the wedges, although their alignment is optimized to suppress this effect as much as possible. In this way, when using one or several wedges, the energy of the laser beam is reduced by 3–7 orders of magnitude from the nominal maximum 250 J level. In the range of a few mJ to tens of mJ, the optics and the beam diagnostics along the propagation chain to the experimental area can be aligned and calibrated. Finally, the tens of μJ range is useful for aligning the focal spot of the laser beam onto the target by imaging it directly with a dedicated CCD camera equipped with a microscope objective.

The challenge in the accurate detection and classification of the laser beam profiles resides in the distribution of their intensity and in their scale size. A typical NF profile covers a surface of 5–6 × 10³ pixels, with a total summed pixel intensity of about 5–7 × 10⁵. A well-focused FF profile is much smaller in size, having an area of only 15–30 pixels with a total intensity of 2–2.2 × 10³. These profiles thus vary by at least two orders of magnitude in size and intensity. Concerning edge detection of the profiles, in an FF image the peak intensity gradient can be as high as 80 pixel⁻¹, whereas for the NF it is about 25 pixel⁻¹. Meanwhile, a noise image can have a larger total pixel intensity (about 1.1 × 10⁶) but a smooth gradient (about 20 pixel⁻¹). Multi-spot images feature characteristics that are rather closer to the NF profiles.
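As a toy illustration, the two discriminating quantities discussed above can be computed for a single grayscale frame as follows (the random image is a stand-in for real data):

```python
import numpy as np

# Compute the two discriminating features discussed above for one frame:
# the total summed intensity and the peak intensity gradient.
img = np.random.default_rng(0).integers(0, 64, size=(102, 128)).astype(float)

total_intensity = img.sum()             # NF ~ 5-7e5, FF ~ 2-2.2e3, noise ~ 1.1e6
gy, gx = np.gradient(img)               # per-pixel intensity gradients
peak_gradient = np.hypot(gx, gy).max()  # FF up to ~80/pixel, NF ~ 25/pixel
```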

Neural networks (NNs) are defined by layers of nodes arranged similarly to neurons in animal brains. ML with NNs is fundamentally a process of utilizing sets of these nodes, which can learn and improve over time from input data. These NNs abide by pre-set mathematical functions to build an internal representation of their inputs and then rapidly perform complex operations applicable in the real world. A NN has three distinct types of layers: input, hidden, and output. A Convolutional Neural Network (CNN) adds convolutions, sophisticated mathematical filters, to ANNs; these are used to decipher more complex images, e.g., those containing different categories of objects or (colored) images with multiple channels instead of the single channel of grayscale images. A Deep Neural Network (DNN) is essentially any form of the previously mentioned NNs that has at least three hidden layers.

In the following, we use three types of ML tools: a supervised binary classifier based on a simple ANN, an unsupervised autoencoder to sort the images into classes, and two pre-trained CNNs on which we implement transfer learning.

1. Binary classification using a basic ANN

In our binary image classifier, the ANN has three layers. In the first (input) layer, all images are resized to 200 × 200 pixels and flattened. The first dense layer has 128 fully connected neurons and uses the ReLU activation function (f(x) = max(0, x)), which sets all negative values to 0. Subsequently, the second dense layer (third of the network) uses the sigmoid activation function to output probability predictions between 0 and 1; values below or above a pre-set threshold of 0.5 yield an NF or an FF classification, respectively. The sigmoid output unit is defined by y = σ(w × h + b), where σ(x) = 1/(1 + e⁻ˣ), w is the weight of the output layer, b is the bias, h is the last hidden layer, and x is the variable taken from the input of previous neurons. During training, 5.12 million parameters are identified and tuned, including the values of the weights and biases.
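A minimal Keras sketch consistent with this architecture is shown below; the optimizer and loss are assumptions on our part, since only the layers and activations are specified above.

```python
import tensorflow as tf

# Minimal sketch of the binary NF/FF classifier described in the text:
# 200 x 200 grayscale input, flattened; one dense ReLU layer of 128 neurons;
# sigmoid output (< 0.5 -> NF, >= 0.5 -> FF). Optimizer and loss are assumed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(200, 200, 1)),
    tf.keras.layers.Flatten(),                      # 40 000 input values
    tf.keras.layers.Dense(128, activation="relu"),  # f(x) = max(0, x)
    tf.keras.layers.Dense(1, activation="sigmoid"), # y = sigma(w*h + b)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()  # ~5.12 x 10^6 trainable parameters, as quoted in the text
```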

The ANN runs with TensorFlow on an NVIDIA RTX 3070 GPU in the Python 3.10 environment. First, splitting the dataset into separate training and test sets is required in order to effectively utilize the trained machine learning model. In this case, 60% of the entire dataset of over 14 000 images is used for training the ANN, 30% is used as a validation set, and 10% is ultimately used to test the outcome of the network. The training and validation images were organized in two labeled classes, NF and FF, which included the noise and multi-spot images as well.
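This split can be sketched as follows, assuming the images and labels are available as arrays (the file names are placeholders):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 60% train / 30% validation / 10% test, mirroring the proportions above.
# The .npy file names are placeholders for the preprocessed image stack.
images = np.load("beam_images.npy")
labels = np.load("beam_labels.npy")  # 0 = NF, 1 = FF

x_train, x_rest, y_train, y_rest = train_test_split(
    images, labels, train_size=0.60, stratify=labels, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(
    x_rest, y_rest, train_size=0.75, stratify=y_rest, random_state=0)
# 0.75 of the remaining 40% -> 30% validation and 10% test overall
```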

2. Classification using deep NNs

Unsupervised Learning Using an Autoencoder: Here, we first employ unsupervised learning implemented on a simple DNN to predict one of three possible classes: NF, FF, and multi-spot images. We neglect the “noise” images as their number is small. This choice is motivated by the search for a rapid classifier of the majority of data samples. The input dataset is label-free, and the assignment to the different classes is left to be inferred by the network. The DNN architecture shown in Fig. 3 is composed of two encoder layers and two decoder layers using the linear activation function (f(x) = x) and three output layers with activation functions in the following order: “Linear,” “ReLU,” and “Softmax.” The stochastic gradient descent (SGD) method was chosen together with a learning rate set to 8 × 10⁻⁵ in order to smooth out and accelerate the learning process.
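A sketch of this architecture, with plausible but assumed layer widths (the text fixes only the activation sequence, the optimizer, and the learning rate), could read:

```python
import tensorflow as tf

# Sketch of the autoencoder-based classifier of Fig. 3. The layer widths
# below are assumptions; the activation order (linear encoder/decoder,
# then Linear/ReLU/Softmax outputs), SGD optimizer, 8e-5 learning rate,
# and loss/metric choices follow the text.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(250 * 250,)),               # flattened 250 x 250 input
    tf.keras.layers.Dense(512, activation="linear"),  # encoder layer 1
    tf.keras.layers.Dense(128, activation="linear"),  # encoder layer 2
    tf.keras.layers.Dense(128, activation="linear"),  # decoder layer 1
    tf.keras.layers.Dense(512, activation="linear"),  # decoder layer 2
    tf.keras.layers.Dense(64, activation="linear"),   # output stack: Linear
    tf.keras.layers.Dense(32, activation="relu"),     # output stack: ReLU
    tf.keras.layers.Dense(3, activation="softmax"),   # NF / FF / wedge
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=8e-5),
    loss="sparse_categorical_crossentropy",
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
```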

FIG. 3.

Convolutional autoencoder architecture.

The “Image Data Generator” function from the TensorFlow Keras library was utilized for preprocessing the images. The pixel values were all re-scaled to the range [0, 255] while keeping the zoom unitary, and divided by the standard deviation of the full dataset. Normalizing features during training can help gradient descent converge more quickly toward minima. Training was performed with the “Sparse Categorical Cross-entropy” loss function and metrics given by the “Sparse Categorical Accuracy” function.
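A preprocessing sketch along these lines, with placeholder arrays and our reading of the generator arguments, is:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Divide each image by the standard deviation of the full dataset, as
# described above; the argument choice and array names are assumptions.
datagen = ImageDataGenerator(featurewise_std_normalization=True)

train_images = np.load("train_images.npy")  # placeholder, shape (N, 250, 250, 1)
train_labels = np.load("train_labels.npy")  # placeholder class indices

datagen.fit(train_images)  # computes the full-dataset statistics
train_flow = datagen.flow(train_images, train_labels,
                          batch_size=350)  # training batch size from the text
```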

The Softmax function S(yᵢ) = exp(yᵢ)/∑ⱼ exp(yⱼ), with the sum running over the three classes, returns an unambiguous assignment in the output layer, as shown in Fig. 3. This layer gives the probability of an image belonging to each of the three classes. To convert the output into an applicable format, we utilize a machine learning convention known as “One-Hot Encoding,” in which the true label is set to 1 and all remaining labels to 0.
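A toy numerical example of the Softmax output and the one-hot convention:

```python
import numpy as np

# Softmax over raw scores for the three classes (NF, FF, wedge), followed
# by one-hot encoding of the winning class.
logits = np.array([2.0, 0.5, -1.0])              # example raw scores
softmax = np.exp(logits) / np.exp(logits).sum()  # ~[0.79, 0.18, 0.04]
one_hot = np.eye(3)[softmax.argmax()]            # -> [1., 0., 0.]
```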

Supervised Learning Using CNNs: In the second part, we employ two well-known deep CNNs, ResNet-18 and GoogLeNet. ResNet-18 is a deep CNN with 18 layers, while the GoogLeNet architecture is based on 22 layers. The choice of these CNNs is motivated by their relatively low number of deep layers, resulting in higher computational efficiency. In addition, both CNNs have been used successfully for image classification with very high accuracy rates.33,34 The two CNNs were pretrained on the ImageNet dataset, which has 1000 classes.35 For our tasks, we adapted them by changing the number of output classes in the last fully connected layer to 4 (NF, FF, multi-spot or wedge, and noise) while keeping their initial weights and biases, and performed transfer learning by tuning the learning rate and batch size.
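For ResNet-18, this adaptation amounts to replacing the final fully connected layer, as in the PyTorch sketch below (the GoogLeNet case is handled analogously in MATLAB):

```python
import torch
import torchvision

# Load ResNet-18 with its ImageNet weights and swap the final fully
# connected layer for a 4-class head (NF, FF, multi-spot/wedge, noise).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 4)
```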

Our ANN for supervised binary classification performed well, separating the NF from the FF images. Training was fast, taking only a few seconds. The input data consisted of two labeled sets, FF and NF, which included 7236 and 5896 images, respectively, randomly interspersed with noise and multiple-spot beam profiles. A set of 1469 images was tested following training. The confusion matrix in Fig. 4 gives the number of false predictions; these arise mainly from the noise and multiple-spot images. An example of the classification output is shown in Fig. 5.

FIG. 4.

Confusion matrix for binary classification into FF and NF.

FIG. 5.

Binary classification on an unseen dataset. The images are labeled by the ANN output. The multi-spot image (top left) is classified as NF.

1. DNN based on a custom autoencoder

The DNN was designed with an autoencoder optimized for our specific task. We required unsupervised learning to predict three classes: “near-field,” “far-field,” and “wedge.” The best accuracy was 84.9%, tested with 1459 images after training the network for about a minute. In this case, all images were resized to 250 × 250 pixels to maintain consistency across the training, testing, and validation phases. The batch size was set to 350 for training and 175 for testing. The confusion matrix in Fig. 6 shows that most false predictions are linked to the multi-spot images, which are predicted as NF or FF.

FIG. 6.

Confusion matrix of the autoencoder.

The dependence of the accuracy on the input image size is shown in Table I. The lowest score is obtained when the input images are strongly downsized, while for roughly their original size the value reaches 81.2%. Downsizing the resolution of the input images further affects the detection of the FF, which becomes too small in pixel size to be perceived as a classifiable object.

TABLE I.

Accuracy of the autoencoder for different image pre-processing, consisting of resizing the input resolution.

Type          Method           Resized input  Accuracy (%)
Unsupervised  Autoencoder DNN  250 × 250      84.9
Unsupervised  Autoencoder DNN  100 × 100      81.2
Unsupervised  Autoencoder DNN  75 × 75        78.0
Unsupervised  Autoencoder DNN  50 × 50        65.0

In Fig. 7, we show the prediction scores and classes labeled by the autoencoder. Values below 80% are considered “low confidence” and indicate a null result, i.e., no correlation with the model. For the correctly assigned beam profiles, the score is over 90%.

FIG. 7.

ML-generated labels for three classes: NF, FF, and multi-spot (or “wedge” in terms of classification). The model prediction score and the predicted class are shown on each image.

2. ResNet-18 and GoogLeNet classification

The input images for the two pre-trained CNNs are required to have at least 224 × 224 pixels and three channels; therefore, all our datasets had to be resized, since they contain single-channel images at lower resolution. We proceeded in two ways: in the first case, the original data were retained and replicated identically across all three channels. In the second case, one channel kept its original data values, while the other two channels were set to arrays of zeros (i.e., two blank images). For ResNet-18, the code was implemented with the PyTorch package. The images were normalized with per-channel means of 0.485, 0.456, and 0.406 and per-channel standard deviations of 0.229, 0.224, and 0.225. These values were used in pre-training the network and were kept the same in order to preserve the performance of the CNN. Other values can work well provided that the new means are large enough to avoid vanishing gradients during backpropagation between layers. For training, we utilized 8766 images, while for testing, we utilized 4383 images. The transfer learning used an SGD model optimizer with a learning rate of 10⁻⁵ for a total cycle of 30 epochs and a batch size of 16. Learning was relatively fast, as the accuracy jumped to over 90% after only a few epochs for both types of modified datasets with three-channel images. Eventually, the accuracy topped out at 99.9% after 27 min of processing time, at the end of the computing session. A graph of the learning process is shown in Fig. 8. All images were accurately classified into the four classes, with no accuracy differences between the two types of datasets.
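The input pipeline for the first case (identical data across the three channels) can be sketched as follows, with a synthetic dataset standing in for the real beam images:

```python
import torch
import torchvision
from torchvision import transforms

# Resize to 224 x 224, replicate the single channel three times, and
# normalize with the ImageNet per-channel means/standard deviations,
# as described in the text. FakeData stands in for the real frames.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # copy the channel 3x
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = torchvision.datasets.FakeData(size=64, image_size=(1, 102, 128),
                                        num_classes=4, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
# Training settings from the text: SGD with a 1e-5 learning rate, 30 epochs.
```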

FIG. 8.

ResNet-18 training and testing accuracy for the case of datasets incorporating identical data images across all three channels.

The GoogLeNet CNN was implemented using the Deep Learning Toolbox of MATLAB.36 The dataset was composed of 10 223 images for training and 4382 images for validation. We tested batch sizes of 30, 50, and 100 and learning rates of 10⁻³, 10⁻⁴, and 10⁻⁵. We used the weights and biases of the network already pretrained on the ImageNet dataset.35 The most efficient classification reaches an accuracy of 99.8% in the fastest run, which lasts about 6 min 41 s and is obtained for a learning rate of 10⁻⁴. The same high accuracy was obtained in the two cases of modified datasets with three-channel images.

Supervised classification using the two pre-trained CNNs generally gives the most accurate and reliable results; however, other tasks may call for other ML training techniques. It is also important to factor in the pre-processing requirements of any dataset for optimum model performance, as most pre-trained CNNs have specific format requirements for the input data. Training time varies with the NN and the hardware available at a workstation. The training time needed by the binary classifier or the autoencoder is evidently shorter than that required by the two CNNs in our present study, i.e., less than a minute compared to several tens of minutes, but this is highly dependent on the available computing power. At ELI-NP, a high-performance workstation equipped with H100 GPUs is becoming operational, which will lead to reduced computational time and a significant increase in processing power. Moreover, once training is achieved, new data can be classified through inference. When the configuration of the laser system changes, e.g., new optics or detectors are inserted or removed, or additional channels are monitored, retraining of the NN is advisable to include all classification instances.

We intend to use these models to aid the laser engineers during facility operation by providing early warning of errors or misalignment of optical subsystems or systems. Beam profile classification will help filter the information presented to laser operators and prevent overload with non-critical data, ensuring awareness of the laser system status.

Further development of the presented techniques will include more sophisticated operations, such as correlating the beam profiles with other beam parameters, e.g., the position of the beams against fixed references, the pulse duration, and the focal spot size. In this scenario, for an efficient use of the diagnostic images provided by the CCDs, it is desirable to employ ML algorithms that can provide solutions in a timely manner, faster at least than the reaction time of the beam operators. A compromise between speed and accuracy seems the most likely course of action.

In this work, we have implemented different neural network architectures on a common GPU for the classification of NF and FF images of laser beam profiles recorded by multiple diagnostic CCD cameras. Also included were inherent “bad images” with noise or no beam profile, as well as images with multiple beam spots produced during the preparation phase of the laser system, which consisted of the alignment of optics. The images were recorded during two days of operation of the high power laser at ELI-NP while delivering beams with powers of 1 PW and 100 TW. The full input dataset consisted of slightly over 14 600 images.

The results show high classification accuracy when a supervised learning binary ANN is used in conjunction with datasets labeled in two classes by the operator. An unsupervised learning DNN with three output neurons classifies all images in the dataset (NF, FF, and multi-spot) with 84.9% accuracy. When using the two well-known pretrained CNNs, ResNet-18 and GoogLeNet, with four labeled classes, the classification accuracy is over 99.8%, correctly identifying essentially all images. The processing time required by the two CNNs varied between 6 and 27 min, depending on the learning rate and batch size. Meanwhile, training lasted a few seconds for the binary classifier and less than a minute for the autoencoder.

The results are intended as a first demonstration of the use of machine learning algorithms in the operation of the high power laser system. Future implementation of a system based on artificial intelligence can, in principle, shorten the preparation and tuning time of the system and likely begin a cycle of exponential improvement in results obtained in experiments.

This work was supported by the PN 23 21 01 05 contract funded by the Romanian Ministry of Research, Innovation and Digitalization and the IOSIN 2023 funds for research infrastructures of national interest.

The authors have no conflicts to disclose.

Study conception and design: V.G., I.D., C.M.T.; data collection: I.D.; analysis and interpretation of results: V.G., I.D., B.D., D.G.G., E.S., C.M.T.; draft manuscript preparation: V.G., I.D., C.M.T. All authors reviewed the results and approved the final version of the manuscript.

V. Gaciu: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Project administration (equal); Software (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). I. Dăncuş: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). B. Diaconescu: Conceptualization (equal); Formal analysis (equal); Methodology (equal); Validation (equal); Visualization (equal). D.G. Ghiţă: Conceptualization (equal); Formal analysis (equal); Methodology (equal); Validation (equal); Visualization (equal). E. Sluşanschi: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Supervision (equal); Validation (equal); Visualization (equal). C.M. Ticoş: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Software (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal).

The data that support the findings of this study are openly available at https://github.com/VladGaciu/beamProfilePublic, Ref. 32.

1. S. A. Gales, K. A. Tanaka, D. L. Balabanski, F. Negoita, D. Stutman, O. Tesileanu, C. A. Ur, D. Ursescu, I. Andrei, S. Ataman et al., "The extreme light infrastructure—nuclear physics (ELI-NP) facility: New horizons in physics with 10 PW ultra-intense lasers and 20 MeV brilliant gamma beams," Rep. Prog. Phys. 81, 094301 (2018).
2. K. A. Tanaka, K. M. Spohr, D. L. Balabanski, S. Balascuta, L. Capponi, M. O. Cernaianu, M. Cuciuc, A. Cucoanes, I. Dancus et al., "Current status and highlights of the ELI-NP research program," Matter Radiat. Extremes 5, 024402 (2020).
3. D. Doria, M. Cernaianu, P. Ghenuche, D. Stutman, K. Tanaka, C. Ticos, and C. Ur, "Overview of ELI-NP status and laser commissioning experiments with 1 PW and 10 PW class-lasers," J. Instrum. 15, C09053 (2020).
4. A. Chien, L. Gao, S. Zhang, H. Ji, E. G. Blackman, W. Daughton, A. Stanier, A. Le, F. Guo, R. Follett et al., "Non-thermal electron acceleration from magnetically driven reconnection in a laboratory plasma," Nat. Phys. 19, 254–262 (2023).
5. C. K. Li, P. Tzeferacos, D. Lamb, G. Gregori, P. A. Norreys, M. J. Rosenberg, R. K. Follett, D. H. Froula et al., "Scaled laboratory experiments explain the kink behaviour of the Crab Nebula jet," Nat. Commun. 7, 13081 (2016).
6. C. Radier, O. Chalus, M. Charbonneau, S. Thambirajah, G. Deschamps, S. David, J. Barbe et al., "10 PW peak power femtosecond laser pulses at ELI-NP," High Power Laser Sci. Eng. 10, e21 (2022).
7. I. Dancus, G. V. Cojocaru, R. Schmelz, D. Matei, L. Vasescu, D. Nistor, and A. Talposi, "10 PW peak power laser at the Extreme Light Infrastructure-Nuclear Physics – status updates," in Topical Meeting (TOM) 13 - Advances and Applications of Optics and Photonics, edited by M. F. Costa, M. Flores-Arias, G. Pauliat, and P. Segonds (EDP Sciences - Web of Conferences, Porto, Portugal, 2022), Vol. 266, p. 13008.
8. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature 521, 436 (2015).
9. K. O'Shea and R. Nash, "An introduction to convolutional neural networks," arXiv:1511.08458 (2015).
10. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016), https://www.deeplearningbook.org.
11. R. Anirudh, R. Archibald, M. S. Asif, M. M. Becker, S. Benkadda, P.-T. Bremer, R. H. S. Bude, C. S. Chang, L. Chen, R. M. Churchill et al., "2022 review of data-driven plasma science," IEEE Trans. Plasma Sci. 51, 1750 (2023).
12. H. Lv, Y. Guo, Z.-X. Yang, C. Ding, W.-H. Cai, C. You, and R.-B. Jin, "Identification of diffracted vortex beams at different propagation distances using deep learning," Front. Phys. 10, 843932 (2022).
13. L. R. Hofer, M. Krstajic, and R. P. Smith, "Measuring laser beams with a neural network," Appl. Opt. 61, 1924 (2022).
14. H. Wang, C. Liu, X. He, X. Pan, S. Zhou, R. Wu, and J. Zhu, "Wavefront measurement techniques used in high power lasers," High Power Laser Sci. Eng. 2, e25 (2014).
15. M. Z. Alom, A. Awwal, R. Lowe-Webb, and T. M. Taha, "Optical beam classification using deep learning: A comparison with rule- and feature-based classification," Proc. SPIE 10395, 103950O (2017).
16. Z. Li, X. Ouyang, P. Zhu, L. Pan, H. Yang, D. Liu, J. Zhu, and J. Zhu, "Prediction of deep learning for spectrum-pulse width on petawatt laser," Proc. SPIE 11434, 1143406 (2019).
17. S. I. Herriot, T. C. Galvin, B. M. Ng, E. F. Sistrunk, S. Betts, C. W. Siders, T. M. Spinka, D. Smith, S. S. Talathi, W. H. Williams, and C. L. Haefner, "Deep learning for real-time modeling of high repetition rate, short pulse CPA laser amplifier," in 2019 Conference on Lasers and Electro-Optics (CLEO) (IEEE, San Jose, CA, USA, 2019), paper SM4E.6.
18. M. Stanfield, J. Ott, C. Gardner et al., "Real-time reconstruction of high energy, ultrafast laser pulses using deep learning," Sci. Rep. 12, 5299 (2022).
19. V. Gaciu, "Application of artificial intelligence in tuning femtosecond laser systems," Bull. Am. Phys. Soc. 64, 21 (2019), https://meetings.aps.org/Meeting/NEF19/Session/D01.5.
20. V. Gaciu, I. Dǎncuş, B. Diaconescu, D. G. Ghiţǎ, D. Doria, E. Slusanşchi, and C. M. Ticoş, "Machine learning for beam profile classification in the operation of the ELI-NP high power laser," in Laser and Plasma Accelerators Workshop LPAW 2023, 2023.
21. A. Döpp, C. Eberle, S. Howard, F. Irshad, J. Lin, and M. Streeter, "Data-driven science and machine learning methods in laser-plasma physics," High Power Laser Sci. Eng. 11, e55 (2023).
22. H. Ye, Y. Gu, X. Zhang, S. Wang, F. Tan, J. Zhang, Y. Yang, Y. Yan, Y. Wu, W. Huang, and W. Zhou, "Fast optimization for betatron radiation from laser wakefield acceleration based on Bayesian optimization," Results Phys. 43, 106116 (2022).
23. R. J. Shalloo, S. J. D. Dann, J.-N. Gruse, C. I. D. Underwood, A. F. Antoine, C. Arran, M. Backhouse, C. D. Baird, M. D. Balcazar, N. Bourgeois, J. A. Cardarelli, P. Hatfield, J. Kang, K. Krushelnick, S. P. D. Mangles, C. D. Murphy, N. Lu, J. Osterhoff, K. Põder, P. P. Rajeev, C. P. Ridgers, S. Rozario, M. P. Selwood, A. J. Shahani, D. R. Symes, A. G. R. Thomas, C. Thornton, Z. Najmudin, and M. J. V. Streeter, "Automation and control of laser wakefield accelerators using Bayesian optimization," Nat. Commun. 11, 6355 (2020).
24. J. Lin, Q. Qian, J. Murphy, A. Hsu, A. Hero, Y. Ma, A. G. R. Thomas, and K. Krushelnick, "Beyond optimization—supervised learning applications in relativistic laser-plasma experiments," Phys. Plasmas 28, 083102 (2021).
25. Y. Sun and S. Brockhauser, "Machine learning applied for spectra classification in X-ray free electron laser sciences," Data Sci. J. 21, 18 (2022).
26. E. Fol, R. Tomás, J. Coello de Portugal, and G. Franchetti, "Detection of faulty beam position monitors using unsupervised learning," Phys. Rev. Accel. Beams 23, 102805 (2020).
27. J. Lin, F. Haberstroh, S. Karsch, and A. Döpp, "Applications of object detection networks in high-power laser systems and experiments," High Power Laser Sci. Eng. 11, e7 (2023).
28. M. J. Falato, B. T. Wolfe, T. M. Natan, X. Zhang, R. S. Marshall, Y. Zhou, P. M. Bellan, and Z. Wang, "Plasma image classification using cosine similarity constrained convolutional neural network," J. Plasma Phys. 88, 895880603 (2022).
29. Z. Wang, J. Xu, Y. E. Kovach, B. T. Wolfe, E. Thomas, H. Guo, J. E. Foster, and H.-W. Shen, "Microparticle cloud imaging and tracking for data-driven plasma science," Phys. Plasmas 27, 033703 (2020).
30. D. Strickland and G. Mourou, "Compression of amplified chirped optical pulses," Opt. Commun. 56, 219 (1985).
31. F. Lureau, G. Matras, O. Chalus, C. Derycke, T. Morbieu, C. Radier, O. Casagrande, S. Laux, S. Ricaud, G. Rey, A. Pellegrina, C. Richard, L. Boudjemaa, C. Simon-Boisson, A. Baleanu, R. Banici, A. Gradinariu, C. Caldararu, B. D. Boisdeffre, P. Ghenuche, A. Naziru, G. Kolliopoulos, L. Neagu, R. Dabu, I. Dancus, and D. Ursescu, "High-energy hybrid femtosecond laser system demonstrating 2 × 10 PW capability," High Power Laser Sci. Eng. 8, e43 (2020).
32. V. Gaciu, "beamprofilepublic," GitHub (2023), https://github.com/VladGaciu/beamProfilePublic.
33. See https://image-net.org/challenges/LSVRC/2015/ for ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).
34. See https://image-net.org/challenges/LSVRC/2014/ for ImageNet Large Scale Visual Recognition Challenge 2014 (ILSVRC2014).
35. ImageNet, https://image-net.org/, 2014.
36. Deep Learning Toolbox, MATLAB, https://www.mathworks.com/products/deep-learning.html, 2023.