Radiochromic film (RCF) stacks are the most commonly used diagnostic of laser accelerated ion beams at Gesellschaft für Schwerionenforschung, Darmstadt (GSI) and at other laboratories. So far, the evaluation of the stacks is performed using manual input for the deposited energy determination. This is usually a tedious task and introduces uncertainty in the resulting ion energy spectrum and also in the corresponding angular distribution. An automated procedure is especially important if larger data sets, containing multiple laser shots, are investigated. Here, we describe an automated procedure for the evaluation of digitized RCF stacks. RCF stacks obtained at GSI’s PHELIX laser system are evaluated as a test case. A validation of parts of the procedure is performed on generated input data.

## I. INTRODUCTION

Laser accelerated ions bear great potential for many applications. Examples are compact medical accelerators,^{1–4} neutron sources,^{1,5–7} or injectors for conventional accelerators.^{8} All applications have in common that they usually require a reference ion beam energy with a maximum tolerable energy width and spatial beam size. Characterizing the laser source and optimizing it toward improved performance for specific applications therefore demands high resolution measurements of the three-dimensional distribution of laser accelerated ion beams. Radiochromic films (RCFs)^{9} are the most important tools for this purpose. They have been in use for over two decades^{10} and are therefore well investigated. RCFs are, and will likely remain, the most important tool for this purpose for the following reasons:

The first reason is their inherent safety from electromagnetic pulses, accompanying the laser plasma interaction during the ion acceleration process. This is due to a lack of electronic components.

The second reason is the possibility of absolute calibration of the RCFs, which allows us to determine the absolute deposited dose. This enables a wide range of cross calibration applications for other diagnostics.

The third reason is the capability to resolve the spatial and spectral distributions of the ion bunches. Up until now, no other diagnostic is known that can deliver the same amount of information about an experiment in a single shot. A detailed discussion of different ion detection methods and their advantages can be found in the referenced thesis.^{11}

Most applications require reproducible beams at high repetition rates. RCF based detectors are mainly assembled by hand and can be used for single shots only; for this reason, they are not practical as the main diagnostic for high repetition rate experiments. However, many of the new detectors under development for high repetition rate laser ion acceleration experiments cannot be absolutely calibrated in advance. Therefore, they need to be cross calibrated using, e.g., RCFs.^{12–17}

In this contribution, a procedure for the automated evaluation and reconstruction of digitized RCFs is introduced. This procedure is implemented in Python and provided.^{18} It comprises an algorithm for the evaluation of RCFs, allowing the reproducible evaluation of data and its application to large data sets.

The present contribution focuses on beams with a quasi-exponential energy distribution, generated by the Target Normal Sheath Acceleration (TNSA) mechanism.^{19,20} TNSA has been under scientific investigation for over two decades^{10,21,22} and is, therefore, well characterized; it bears great potential for applications. This mechanism is still prominently investigated at most high power laser labs, including the PHELIX laser facility^{23,24} at GSI Helmholtzzentrum, Darmstadt.

The procedure presented here includes routines for the deconvolution of the beam energy distribution, accounting also for non-exponential distributions expected from other acceleration mechanisms. We apply this procedure to data obtained at the PHELIX laser system and simulated RCF stacks. The obtained energy dependent envelope divergence angle is fitted to the model by Lecz *et al.*,^{25} which allows us to estimate further experimental plasma parameters (confer to Sec. V).

## II. INTERACTION OF LASER ACCELERATED PARTICLES WITH RCF STACKS

When a RCF is irradiated, it undergoes a chemical process called polymerization, which leads to a change in the optical density (OD) of the RCF.^{26,27} The change in the OD correlates directly with the energy that the traversing particles deposit in this specific RCF. The deposited energy in relation to the irradiated mass is known as the dose (*D*). Using the stopping power relations allows us to determine the initial particle energies and get an estimate for the corresponding number of particles. In our study, two types of RCFs are used, namely, GafChromic HD-v2 and GafChromic EBT3 from Ashland.^{28}
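The dose definition *D* = *E*_{dep}/*m* can be made concrete with a short sketch. This is a hypothetical helper for illustration only, not part of the provided tool; the voxel geometry and density inputs are assumptions:

```python
MEV_TO_J = 1.602176634e-13  # J per MeV, exact by SI definition of the electronvolt

def dose_gray(e_dep_mev, voxel_volume_cm3, density_g_cm3):
    """Dose D = E_dep / m in gray (J/kg) for the energy deposited in one
    voxel of the active layer (hypothetical geometry for illustration)."""
    mass_kg = voxel_volume_cm3 * density_g_cm3 * 1e-3  # g -> kg
    return e_dep_mev * MEV_TO_J / mass_kg
```

For example, 1 MeV deposited in a 10^{−6} cm³ voxel of unit density corresponds to roughly 1.6 × 10^{−4} Gy.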

The energy spectrum of laser plasma accelerated ions is, in general, not monoenergetic. This profile can be measured if several individual RCFs are combined into one RCF stack, as indicated in Fig. 3(a). This configuration allows us to measure the dose in dependence of the particles' penetration depth and to calculate the corresponding number of particles. When measuring TNSA ion beams, it is important to take into account the dose contribution on the RCF from all impinging radiation. TNSA beams consist of several ion species and electrons, while additional electrons and gamma radiation are created during the laser–plasma interaction. Different types of radiation cause different changes in the OD of the individual RCF layer; the total OD can be expressed as a superposition of these contributions,

OD_{total} = OD_{film} + OD_{ion} + OD_{γ} + OD_{electron}, (1)

where OD_{film} is the original OD before irradiation, OD_{ion} is the change caused by ion irradiation, OD_{γ} is the change caused by gamma radiation, and OD_{electron} is the change induced by electrons. A consistent evaluation is based on finding the area of the RCF where the ions deposit energy. Ions deposit more energy inside a RCF than electrons or gamma radiation; therefore, the area with the highest level of OD is of relevance.^{21,29} Electrons and gamma radiation induce a parasitic signal, which needs to be filtered out.

The electrons are separated into two parts: The first part has a high energy and consists of the electrons leaving the target before the charge separation is built up. They are capable of penetrating the full stack, and their signal can be seen on EBT3 layers, which have a higher sensitivity to any kind of ionizing radiation than HD-v2 layers. The second part consists of electrons co-moving with the accelerated ions. Their velocities match those of the ion distribution, and they are stopped in the first passive layer, therefore not contributing to the measured signal. The gamma radiation is assumed to produce a homogeneously distributed signal on the specific layer. The angular distribution of the photons^{30–34} has two characteristic intensity peaks at relatively large angles. In between these angles, where the RCFs are located, the intensity of gamma rays is relatively low and comparably constant when projected onto the RCFs.

Summarizing the shapes of the OD components: the gamma component comprises a constant offset, the electron component has a low intensity with a finite spatial distribution, and the ion component has a high intensity with a finite spatial distribution.

A further remark on the differences visible in Fig. 3 and, for example, in Fig. 4(b) of the work of Wagner *et al.*:^{35} due to the decrease in energy deposition from ions and the lower number of particles, the ion contribution becomes less dominant the further the particles penetrate the stack. The contribution of the previously mentioned high energy electrons stays comparatively similar in size and intensity, becoming measurable and more dominant on the later high sensitivity films.

One essential step of the evaluation is to separate the ion signal from the parasitic contributions. The current approach for filtering the measurement data relies on an educated guess rather than a systematic method: based on the investigated radiation type, scientists select the irradiated area on the RCFs by eye. This manual selection can lead to incorrectly selected doses and non-reproducible, biased results. Furthermore, it increases the evaluation time significantly. If the aforementioned discrimination is not performed properly, even fundamental properties of the spectrum, e.g., the cut-off energy, might change significantly due to a mismatch in the stack's stopping power relations.

The user’s choice introduces a measurable bias in the evaluation results. For this study, the manually selected ion areas, defined by several scientists, were evaluated and the deposited energy in MeV for each pixel was calculated. A more detailed explanation of this study is given in Appendix D. Examples for this are the histogram combinations displayed as violin plots in Fig. 1.

Two distributions are identical only if the irradiation areas chosen by the users are the same. Four different scientists were given the exact same data set and selected the area of the ion beam according to their individual best practices. The displayed distributions vary since the selected areas are not the same, resulting in different dose values and opening angles.

The dose from the ions is the core quantity for the calculation of the energy spectrum. A spectrum is verified by matching a calculated dose, created by the spectrum, with the measured dose. If there are deviations in the calibration or in the response function, e.g., introduced by manually selected irradiation areas, the dose varies. Hence, the resulting fit parameters for the spectrum will vary as well. Therefore, a consistent and reproducible determination of the area irradiated by the ion beam is of utmost importance. The solution to this issue is a new approach based on computer vision methods and the inherent properties of the RCFs.

## III. DISCRIMINATION OF THE RADIATION TYPES VIA AUTOMATIC SEGMENTATION

We use digital image processing techniques to identify the 2D areas (called segments^{36}) on the RCFs that belong to a specific type of radiation. These methods are implemented in the provided tool.^{18} In order to ensure reproducibility, our newly developed algorithm uses fixed and quantifiable conditions to filter the measurement data. The algorithm for the determination of the segmentation boundary is displayed in Fig. 2. Two types of automatic segmentation are implemented: one based on the Hue-Saturation-Value (HSV) representation of the color space and one based on the converted deposited energy (MeV map) of the corresponding RCF.

While the definition of colors using the Red-Green-Blue (RGB) triple is well known, the HSV representation of color is less intuitive but useful. RGB is the Cartesian representation of the color space, while HSV represents the color space in a cylindrical coordinate system spanned by H: hue, S: saturation, and V: value.^{36} The HSV branch of the algorithm converts the scanned image from the regular RGB color space into HSV space and works on the resulting hue component. RCFs undergo a rapid change in OD when irradiated; in the HSV parametrization, the hue is the relevant component, as visualized on the left side of Fig. 2. The irradiated area has a light shade in this representation, while the non irradiated area is dark. A histogram of the hue allows us to separate the black background easily from the rest of the RCF and to create a binary mask separating background from signal. The method using HSV, specifically the hue value, is more efficient than relying on RGB coded images since only one component has to be taken into account: if different amounts of radiation are measured, different color value changes in RGB space would have to be compared, whereas the projection to HSV reduces this to the single hue component displayed in Fig. 2. Segmenting this hue representation is then easily possible with the mentioned histogram method. This was also investigated in the study from Appendix D.
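The hue based masking can be sketched in a few lines. This is a simplified illustration of the idea, not the tool's actual implementation; the fixed threshold passed to `hue_mask` stands in for the histogram based threshold selection described above:

```python
import numpy as np

def rgb_to_hue(img):
    """Hue channel of the HSV cylinder for an RGB image with values in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    delta = mx - mn
    hue = np.zeros_like(mx)
    nz = delta > 0                       # achromatic pixels keep hue 0
    rm = nz & (mx == r)                  # red is the maximum channel
    gm = nz & (mx == g) & ~rm            # green is the maximum channel
    bm = nz & ~rm & ~gm                  # blue is the maximum channel
    hue[rm] = ((g - b)[rm] / delta[rm]) % 6.0
    hue[gm] = (b - r)[gm] / delta[gm] + 2.0
    hue[bm] = (r - g)[bm] / delta[bm] + 4.0
    return hue / 6.0                     # normalize to [0, 1]

def hue_mask(img, threshold):
    """Binary mask separating signal (hue above threshold) from background."""
    return rgb_to_hue(img) > threshold
```

Working on the single hue channel is exactly the reduction mentioned above: one scalar per pixel instead of three coupled RGB channels.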

The MeV segmentation branch, on the other hand, converts the image directly to the corresponding deposited energies and then works on the resulting MeV map, which is generated by applying the calibration curves (Appendix C) to the image.

The map denotes the deposited energy for each pixel, in this case given in MeV; the MeV scale itself is an arbitrary choice. The segmentation method is, again, the threshold segmentation method used in the HSV segmentation.

Both segmentation variants work rather similarly and use the inherent variation of the RCFs OD change after irradiation. Part of the algorithm is the condition for finding the corresponding thresholds. Finding absolute conditions for the determination of the proper segmentation threshold in computer vision is tedious, complex, and not always possible. The default values we found are presented in Appendix A; for HD layers, where a beam shape is still visible, the first peak can be cut; for EBT layers, everything except the last peak can be cut. These conditions can be set and varied for the specific evaluation such that dedicated tests of the data can be performed and flexibility is ensured. These fixed global parameters, on the one hand, ensure reproducibility across users and experiments and, on the other hand, allow for a high degree of automatization for data driven modeling and subsequent applications.
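A histogram based threshold condition of this kind can be sketched as follows. This is a simplified stand-in (walking from the dominant histogram peak to the next valley), not the exact default conditions from Appendix A, and it assumes the background dominates the histogram:

```python
import numpy as np

def first_valley_threshold(values, bins=64):
    """Threshold at the first valley after the dominant (background) peak
    of the value histogram. Assumes the background is the largest peak."""
    hist, edges = np.histogram(values, bins=bins)
    peak = int(np.argmax(hist))                  # dominant background peak
    valley = peak + int(np.argmin(hist[peak:]))  # first deepest bin after it
    return 0.5 * (edges[valley] + edges[valley + 1])
```

Applied to the hue channel or the MeV map, pixels above the returned threshold form the candidate ion segment.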

HSV segmentation and MeV segmentation produce fairly similar results. The MeV segmentation sometimes overestimates the threshold, especially for HD-v2 layers irradiated with rather low intensities; examples of this type of layer can be seen in Fig. 3 for the 14.3 and 15.6 MeV layers. It is therefore advisable to stick to the HSV segmentation method for all layers and to apply MeV segmentation as a cross-check for EBT3 layers if needed.

Evaluating experimental data results in the step by step outputs in Fig. 3. After the segmentation, the evaluation can be continued with the convolutional method introduced by Nürnberg *et al.*^{37} or a new method without an *a priori* assumed spectrum proposed in this publication. Both methods are discussed in Sec. IV B.

The results of the segmentation are displayed in Figs. 2 and 3. Combining Figs. 3(b) and 3(d) allows a comparison between the actual image, its calculated binary mask, and the deposited energy map. The colors indicate different energy levels normalized to the maximum dose value stored in each individual layer.

The correctness of the segmentation is demonstrated by evaluating a simulated RCF stack, created by assuming the spectral parameters given in Tables I and II. These parameters were used to create ideal TNSA bunches, which were then passed to SRIM in order to calculate the deposited energy.^{38} Conversion to pictures is then performed with the existing calibration functions from Appendix C.

| Parameter | Defined | Reconstructed | Unit |
|---|---|---|---|
| a | 28.19 | 28.83 ± 0.50 | deg |
| b | −0.23 | −0.29 ± 0.09 | deg/MeV |
| c | −0.020 | −0.016 ± 0.003 | deg/MeV^{2} |


**Exponential spectrum**

| Parameter | Defined | Reconstructed (14 layers) | Reconstructed (7 layers) | Unit |
|---|---|---|---|---|
| N_{0} | 1.36 | 1.16 ± 0.04 | 1.18 ± 0.33 | 10^{12} |
| k_{B}T | 6.37 | 7.11 ± 0.20 | 8.86 ± 2.17 | MeV |


**Step spectrum**

| Parameter | Defined | Reconstructed (14 layers) | Reconstructed (7 layers) | Unit |
|---|---|---|---|---|
| N_{0} | 1.36 | 2.5 | 2 | 10^{12} |
| E_{0} | 20.00 | 18.25 | 19.5 | MeV |
| ΔE | 6.00 | 6.55 | 9.5 | MeV |


## IV. AUTOMATIZED ION BEAM EVALUATION

### A. Determination of the ion beam spatial distribution

The spatial distribution of the particle beam is one of the two intermediate results needed to reproduce the full beam. RCFs allow us to measure the size of the beam at the position of the individual layer by taking the relevant OD change into account. Since the OD change in the layer can only properly indicate the outer radius of the beam, additional models have to be assumed for the behavior inside the beam:

We assume a laminar beam, which is drifting without internal or external forces. This assumption is justified at sufficiently large distances from the target.^{25,37,39,40} For a laminar beam, the opening angle provides information about the transverse and longitudinal momenta,

tan θ(E) = p_{trans}/p_{long}, (2)

where *p* indicates the particle's transverse or longitudinal momentum, respectively.

The laminar flow and the varying beam size on the RCFs imply the shape of an opening cone for each energy value. The origin of this opening cone constitutes a virtual source, which the reconstruction places in front of the laser irradiated side of the target.^{37} The opening angle of the beam can be evaluated experimentally by fitting a circle or an ellipse to the previously found contour of the ion signal on the individual RCF layer and applying the relevant trigonometric functions.

If a circle is fitted, as performed in the course of this article, its radius can be given as the mean radius with the corresponding standard deviation. If an ellipse is fitted, this method can result in two different opening angles for the two transverse components. According to theory,^{19} the TNSA beam cross section is an ideal circle, which might deviate due to experimental uncertainty, misalignment, laser aberrations, or the specific target structure.

The trigonometric relationships allow us to calculate a linear opening angle by taking into account the distance from the target to the individual RCF layer and the radii of the beams on these layers. These angle values can then be fitted with several methods for a proper interpolation of the full spatial spectrum.
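The trigonometric step can be sketched directly; this is a minimal hypothetical helper, with the fitted beam radius and the target-to-layer distance as inputs:

```python
import math

def half_opening_angle_deg(radius, distance):
    """Half opening angle (deg) of a laminar beam from the radius of the
    ion signal on a layer and the target-to-layer distance (same units)."""
    return math.degrees(math.atan2(radius, distance))
```

For example, a 10 mm beam radius on a layer 40 mm behind the target corresponds to a half opening angle of about 14 deg.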

As mentioned above, an ideally circular distribution on the RCFs allows us to determine an effective radius of the distribution. This radius corresponds to the half opening angle of the accelerated particles; its error is determined as the standard deviation of the contour from the mean radius. The experimental data for the half opening angle from the source shot in Fig. 3 are displayed in Fig. 8 together with the fits described in Appendix B 2.

Figure 4 shows simulated RCFs for an ideal conic signal, and Fig. 5 visualizes the corresponding angle fit with data extracted from the simulated stack. The fit results are compared to the simulation in Table I. The data show that the reproduction is close to the given simulation, which validates the automatic segmentation approach for the divergence calculation.
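Assuming a quadratic envelope model θ(E) = a + bE + cE² behind the parameters a, b, c of Table I, the angle fit can be sketched with a plain least squares polynomial fit. The sampling energies below are synthetic, noise-free values for illustration:

```python
import numpy as np

# divergence model parameters as defined in Table I (synthetic sampling)
a, b, c = 28.19, -0.23, -0.020
E = np.linspace(2.0, 25.0, 12)      # assumed layer Bragg peak energies in MeV
theta = a + b * E + c * E**2        # half opening angle in deg

# least squares fit of the quadratic envelope model;
# np.polyfit returns coefficients from highest to lowest degree
c_fit, b_fit, a_fit = np.polyfit(E, theta, deg=2)
# on noise-free data, the defined parameters are recovered
# up to numerical precision
```

With experimental data, the scatter of the measured angles propagates into the covariance of the fit parameters, as reflected by the uncertainties in Table I.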

The importance of the laminar flow model is shown when going from the determined envelope angle toward the reconstruction of the beam’s particle distribution function in the transversal components. This reconstruction assumes that the particles are homogeneously distributed on the cutting plane with the RCF layer. Our laminar beam model corresponds to the equations given in Appendix B 2 and implies that for each energy a fixed cone exists. The half envelope angle describes the boundary for this cone.

### B. Determination of the ion beam energy spectrum

The previously calculated energy dependent divergence angle has to be complemented with the energy dependent particle counts. Both are needed to reconstruct the 3D particle distribution. In order to calculate the spectrum, the deposited energy in the individual RCF layer has to be matched to the dose. The ideal case would be a direct deconvolution inverting

E_{dep} = ∫ E_{loss}(E) (dN/dE) dE, (3)

where *E*_{dep} is the deposited energy, *E*_{loss}(*E*) is the energy loss characteristic of the individual RCF, and d*N*/d*E* is the energy distribution of the ion beam.

However, the resulting equation system (3) is underdetermined due to the limited number of active layers in a stack. These active layers are sandwiched between several passive layers, as depicted, for example, in Fig. 3(a). Charged particles continuously deposit energy while passing through matter. Lower energy particles have a shorter penetration depth than higher energy particles. The higher energy particles deposit energy in all layers before they reach their corresponding Bragg peak, resulting in an ambiguity in earlier layers. Only the active layers provide sampling points, and information is missing for the passive layers. Increasing the number of active layers increases the preparation time, the accumulating costs of the diagnostic, and the time needed for scanning the resulting layers. A proper inversion, for example, with the GRAVEL method,^{41} is therefore not possible due to the underdetermination caused by this ambiguity.

This sparse data issue can be overcome by applying model assumptions of the dose in the passive layers. The first approach displayed here works with the assumption of a strong model, which forces the resulting spectrum into a given distribution. This is called the convolutional approach (Sec. IV B 1^{37}). The second approach is basically model free and only assumes a linear interpolation between the data points. This is called the linear interpolation approach (Sec. IV B 2).

Further methods are, for example, the Graphical Subtraction Method (GSM) mentioned in Ref. 37 and the Image Merging Method (IMM) mentioned in Ref. 40. These methods are not taken into consideration since they either lack numerical stability while assuming a strong model (GSM) or require a large amount of user input, which is not suitable for automatic application (IMM).

#### 1. Convolutional approach

The main approach chosen to deal with the low number of sampling points, which results in the ambiguities mentioned above, is a convolution approach as described in more detail in the corresponding publication.^{37} For this approach to work, a model for the particle spectra has to be assumed; for conventional thin film TNSA targets, an exponential function is in good agreement, e.g.,

dN/dE = (N_{0}/(k_{B}T)) exp(−E/(k_{B}T)). (4)

This exponential function is convoluted with the RCF response function given by an *a priori* SRIM^{38} or Monte Carlo calculation. The function’s parameters are then varied in a least squares sense, until they reproduce the RCF deposited energy values.
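The convolution and parameter variation can be sketched with a brute-force least squares search over the two spectrum parameters. The response matrix here is synthetic and random; a real evaluation would use SRIM/Monte Carlo response functions and a proper optimizer (e.g., a Levenberg–Marquardt least squares routine):

```python
import numpy as np

def exp_spectrum(E, N0, kT):
    # assumed exponential spectrum, normalized to N0 particles in total
    return N0 / kT * np.exp(-E / kT)

def deposited_energies(params, E, response):
    # response[l, k]: energy one proton of energy E[k] deposits in layer l
    dE = np.gradient(E)
    return response @ (exp_spectrum(E, *params) * dE)

def fit_spectrum(E, response, measured, N0_grid, kT_grid):
    # brute-force least squares over a small parameter grid (sketch only)
    best, best_err = None, np.inf
    for N0 in N0_grid:
        for kT in kT_grid:
            err = np.sum((deposited_energies((N0, kT), E, response) - measured) ** 2)
            if err < best_err:
                best, best_err = (N0, kT), err
    return best
```

The grid search makes the least squares idea explicit: the parameters minimizing the mismatch between calculated and measured deposited energies are accepted.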

Due to the nature of the approach, several different models and arbitrary functions could be used, which allows for easy testing of possible new theories as soon as they emerge. It has also been shown by Wagner *et al.*^{42} that this works even if different models are superposed by adding up different functions,

dN/dE = Σ_{i} (dN/dE)_{i}. (5)

While this convolutional approach is numerically stable, it can only reproduce the given model function. The lower the uncertainty of the result, the higher the probability that the assumed spectrum matches the observation. On the other hand, the approach does not provide information about what a deviating spectrum would look like. A result of this convolutional approach is given in Fig. 6.

#### 2. Linear interpolation approach

In this section, we report an additional deconvolution method that can be used to obtain the energy spectrum from the RCF data without making any assumptions about the distribution of the spectrum. This approach follows the same idea as the method by Schollmeier *et al.* in Sec. 3.A. Since the implementation is not given in detail, it is not clear to what extent the two algorithms are identical and whether the approach of Schollmeier *et al.* can also be used for unconventional ion spectra. The main difference is therefore the open source implementation of this algorithm and the dedicated goal to investigate unconventional spectra. The algorithm presented here already takes the discretization of the data into account; the equations therefore differ from the ones given by Schollmeier *et al.*

It is important to note that the spectrum is assumed to be continuous, which is achieved by a linear interpolation between the sampling points. This assumption does not, however, limit the shape of the resulting spectrum in any way. Each irradiated active layer has a specific Bragg peak energy, which means that the particles with this kinetic energy are stopped and deposit a significant part *E*_{Bragg} of their initial energy in this layer. The idea is therefore to take this Bragg peak energy as the sampling position in the energy spectrum, ranging from the lowest resolved Bragg peak energy *E*_{min} to the highest resolved Bragg peak energy *E*_{max}. The sampling points *E*_{i} are part of the corresponding closed interval. One also has to take into account that higher energy particles are not stopped in this layer but still deposit energy.

For the analysis of RCF stacks, only the layers that actually got irradiated are taken into account. The last irradiated layer then defines the maximum energy value for the interval, and it is reasonable to assume that no higher energy component was measured. One can therefore expect that the particle count (*N*) on the last layer is given by the ratio of the measured deposited energy in the last layer to the particle Bragg peak energy in the last layer only,

N(E_{max}) = E_{dep}^{measured}(E_{max})/E_{part}^{Bragg}(E_{max}). (6)

After completion of this first step, the stack is evaluated with the same method from the back to the front. The active layer under discussion is denoted with an integer *i*. The steps of this algorithm are as follows:

- The first step is always to determine an initial particle number for each layer. This is performed by dividing the full deposited energy by the Bragg peak energy,

  N_{max}(E_{i}) = E_{dep}^{measured}(E_{i})/E_{part}^{Bragg}(E_{i}). (7)

  At this point, the particle count at *E*_{i} is calculated with only the Bragg peak energy at *E*_{i} taken into account. No contributions from particles with an energy higher than *E*_{i} are included yet. If the energies *E* > *E*_{i} were included, their contributions would add up and the calculated deposited energy would exceed the measured energy, effectively overestimating the particle count at *E*_{i}. The particle count is interpolated linearly between two adjacent data points at *E*_{i} and *E*_{i+1}, allowing for subsequent corrections as discussed in the next step.

- To deal with the overestimation of particles at *E*_{i}, the particle count *N*(*E*_{i}) has to be varied until the different deposited energies match, starting from the last irradiated layer and going to the front: Calculate the contribution of the layers with *E*_{j} > *E*_{i} to the layer with *E*_{i}, and subtract it from the deposited energy,

  E_{dep}^{calc}(E_{i}) = E_{dep}^{measured}(E_{i}) − ∫_{E_{i+1}}^{E_{max}} E_{part}(E) dN(E). (8)

  The new calculated deposited energy is then used to calculate a more precise particle number *N*(*E*_{i}). Equation (8) subtracts every higher energy component except the part from *E*_{i} to *E*_{i+1}; the particle count is therefore still overestimated. The number of particles *N*(*E*_{i}) is reduced until the calculated deposited energy, including the contribution from *E*_{i} to *E*_{i+1}, matches the measured one. The previously not included area is linearly interpolated between the two energies.

- If the energies still do not match when *N*(*E*_{i}) = 0 (this can occur if there are too many particles in the triangle formed by the linear interpolation between the sampling points *N*(*E*_{i}) and *N*(*E*_{i+1})), an additional sampling point *E*_{v} with *N*(*E*_{v}) = 0 is inserted between *E*_{i} and *E*_{i+1}. This sampling point is shifted in the energy domain until the measured and the calculated deposited energies match. With this mechanism, it is also possible to increase the spectrum's cut-off energy by setting a virtual sampling point at a higher energy.
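The back-to-front subtraction can be illustrated with a simplified, discretized sketch. It treats each sampling point as a delta-like bin and ignores the triangle interpolation and virtual sampling points of the full algorithm:

```python
import numpy as np

def back_to_front(E_dep_measured, deposit):
    """deposit[i, j]: energy one particle of the j-th sampling energy
    deposits in active layer i (nonzero only for j >= i; the diagonal
    holds the Bragg peak energies). Returns the particle count N per
    sampling energy."""
    n = len(E_dep_measured)
    N = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract punch-through contributions of the higher energy
        # sampling points already determined (discrete analog of Eq. (8))
        residual = E_dep_measured[i] - deposit[i, i + 1:] @ N[i + 1:]
        N[i] = max(residual, 0.0) / deposit[i, i]
    return N
```

Because the deposit matrix is upper triangular, starting at the last layer makes each residual depend only on counts that are already known, which is exactly why the evaluation runs from the back to the front.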

The final spectrum reproduces the measured deposited energy in the RCF stack precisely to an allowed uncertainty, which can be fixed *a priori*. An ambiguity still exists since the determined spectrum is only one representative of all possible solutions reproducing the deposited energy. Keeping the input parameters, cut-off energy and maximum dose deviation, constant ensures a reproducible evaluation of the given data. Furthermore, additional diagnostics might be used to verify the shape of the particle spectrum and to reduce the number of ambiguities. The ambiguity in the spectrum can also be reduced by applying theoretical models, which then defaults to the convolutional approach explained in Sec. IV B 1. In other words, this method uses the measured values directly without considering measurement errors; the spectrum obtained in this way is not a regression through the measurement points.

The energy distribution from the linear method is more susceptible to measurement errors than the convolutional approach, as can be seen in Fig. 6. This is caused by the nature of the two approaches. The linear approach works directly on the measured deposited energy and is therefore susceptible to the fluctuations of each data point, which convert directly into the displayed fluctuations. The convolutional approach, on the other hand, implicitly performs a regression on the data and is therefore smooth. Less variation in the data results in less variation in the resulting spectra, as can be seen in Fig. 7(a).

The spectra also have dedicated features at the lower and higher energy ends: The linear approach cannot extrapolate beyond the lower end, where no signal could be measured. The spectrum behavior before the first active layer cannot be determined; the algorithm assigns a constant value there, but it has no deeper physical meaning. On the higher energy end, a more consistent statement for the cut-off energy can be found, and a small extrapolation is possible. The convolutional approach, on the other hand, allows extrapolation following the given model function but has difficulty finding a proper cut-off energy, so additional information has to be put into the full representation, e.g., from the geometry of the used stack.

### C. Energy spectrum verification with simulated RCF stacks

The second part of the algorithm, namely, the deconvolution of the spectrum, is also validated. This is achieved by comparing the algorithm's output to well-defined input spectra. The input spectra are defined by two distribution functions as follows: The exponential function is given as

dN/dE = (N_{0}/(k_{B}T)) exp(−E/(k_{B}T)), (9)

while the stepwise function is displayed in Eq. (10) as

dN/dE = N_{0}/ΔE for E_{0} − ΔE ≤ E ≤ E_{0}, and 0 otherwise. (10)

The chosen parameters and the evaluation results are given in Table II. The results show that both implemented algorithms are capable of reproducing the defined spectra, as presented in Fig. 7(a). The uncertainties are inherent to the dosimetry approach and can be counteracted in the experiment by using more active layers and adjusting passive layers to prevent saturated signals. The deconvolution presented in Fig. 7(b) shows a step function as the proton distribution. Due to its shape, the conventional convolutional method does not reach a meaningful conclusion unless this functional dependence is inserted. The linear deconvolution method reaches the displayed solution. One can see that the plateau varies but is still of the same order of magnitude and that the two step edges drift apart. This drift is caused by the lack of measurement precision, since only the active layers in the stack provide sampling points. The method is therefore not precise enough to fix the step's sharp boundaries.

One important restriction has to be addressed, though: All evaluation methods are based on matching a deposited energy calculated from the fitted spectrum to the measured deposited energy. This means that a spectrum is a valid solution if it reproduces the measured dose distribution; thus, every particle distribution reproducing the dose distribution is a valid result, and the results can still vary within the given model/method uncertainties. The final spectrum resulting from the evaluation methods (cf. Sec. IV B) is therefore always one representative of all possible solutions. Using the same evaluation conditions and the same algorithm ensures that comparable representatives are returned. The conclusion is that the algorithmic approach ensures reproducibility of the evaluation within the methodological constraints.

As already mentioned in Sec. II, the different radiation components influence the results of the evaluation, and an exact differentiation of the radiation types is only possible to a certain extent.

## V. COMPARISON WITH THE ANALYTICAL MODEL BY LECZ *et al.*

The previous sections have shown that our segmentation scheme can isolate the ion signal on the RCFs and thereby reconstruct the beam energy dependent divergence angle. In addition, we can reconstruct the beam energy distribution.

As an application, we discuss the comparison to the analytical model by Lecz *et al.*^{25} for the transverse divergence of TNSA beams from solid targets. This model is a 2D extension of Mora’s plasma expansion^{45} and takes several experimental parameters and assumptions into account. Unknown parameters of the model can then be fitted to the reconstructed divergence angle data via a least squares approach and result in physical predictions of the cut-off energy, the hot electron spread angle inside the target, and the hot electron density. Results are displayed in Table III.
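The least squares fit of the model to the reconstructed divergence data can be sketched as follows. The model function here is only a schematic stand-in (the actual model by Lecz *et al.* is more involved), and the coarse grid-search minimizer stands in for a proper least squares routine such as `scipy.optimize.least_squares`; all parameter ranges are illustrative.

```python
import math

def envelope_model(E, E_cut, theta0):
    """Schematic stand-in for a divergence model: half-envelope angle (deg)
    that falls to zero at the cut-off energy E_cut, starting from theta0."""
    if E >= E_cut:
        return 0.0
    return theta0 * math.sqrt(1.0 - E / E_cut)

def fit_envelope(energies, angles):
    """Least-squares fit of (E_cut, theta0) by brute-force grid search."""
    best, best_sse = None, float("inf")
    for E_cut in [0.1 * k for k in range(200, 400)]:      # 20.0 ... 39.9 MeV
        for theta0 in [0.5 * k for k in range(20, 120)]:  # 10.0 ... 59.5 deg
            sse = sum((envelope_model(E, E_cut, theta0) - a) ** 2
                      for E, a in zip(energies, angles))
            if sse < best_sse:
                best, best_sse = (E_cut, theta0), sse
    return best

# Synthetic half-envelope data generated from the model itself:
Es = [4.0, 8.0, 12.0, 16.0, 20.0, 24.0]
thetas = [envelope_model(E, 26.4, 46.0) for E in Es]
E_cut_fit, theta0_fit = fit_envelope(Es, thetas)
```

Because the synthetic data lie exactly on the model curve, the grid search recovers the generating parameters; with real divergence data, the residual quantifies the model agreement discussed below.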

| Physical quantity | Source shot |
|---|---|
| Pulse duration (fs) | 650 |
| Normalized vector potential a_{0} | 11.46 |
| Irradiation size FWHM (μm) | 3.5 |
| Target thickness (μm) | 10 |
| Hot electron temperature (MeV) | 1.9 |
| *Fit results:* | |
| Electron spread angle (°) | 46 |
| Hot electron density (m^{−3}) | 0.79 × 10^{26} |
| *Additional result:* | |
| Cut-off energy (MeV) | 26.44 |


The empirical models from Sec. IV A lack a supporting theoretical model but reproduce the measured results over a larger energy range (see Fig. 8). The model by Lecz *et al.* shows good agreement in the high energy part (here for *E* > 18 MeV) and is also capable of fitting the behavior of the envelope at the top part of the spectrum. The deviation in the low energy part (here for *E* < 18 MeV) is caused by its theoretical assumptions, as mentioned in the corresponding publication. The threshold of 18 MeV is determined for this data set by comparing the differences between the fitted curve and the experimental data and varies with the specific experimental parameters.

It is important to note that the model by Lecz *et al.* is capable of predicting the ion cut-off energy of the spectrum. Since this 2D model takes the ion opening angle into account, it can model the opening angle beyond the highest measured energy until the curve reaches 0. The model can thus be used to extrapolate the fit until it intersects the abscissa; the energy value at this intersection is the cut-off energy. As can be seen in Fig. 8, the model yields a smaller cut-off energy than the partial function. The discrepancy might result from some of the conditions assumed by Lecz *et al.* and can therefore vary.
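Reading off the cut-off energy as the zero crossing of the extrapolated envelope fit can be sketched with a simple bisection. The envelope function below is purely hypothetical (its zero is placed at the tabulated 26.44 MeV for illustration); any monotonically decreasing fitted curve works the same way.

```python
def fitted_envelope(E):
    """Hypothetical fitted half-envelope angle (deg); the zero crossing
    marks the cut-off energy."""
    return 46.0 * (1.0 - (E / 26.44) ** 2)

def cutoff_energy(f, lo, hi, tol=1e-6):
    """Bisection on [lo, hi] for the energy where the envelope reaches 0;
    requires f(lo) > 0 > f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E_cut = cutoff_energy(fitted_envelope, 10.0, 40.0)   # close to 26.44 MeV
```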

## VI. CONCLUSION

We present a fully automatic evaluation procedure for RCF stacks, which provides reproducible and user independent results for the ion beam distribution. Our segmentation approach does not require manual input, enabling its systematic application to collections of laser shots.

With the presented scheme, arbitrary energy distributions can be reconstructed with an uncertainty related to the uncertainty of the registered dose. We also verified that the linear interpolation approach reproduces predefined input distributions.

Results created by the evaluation can be passed to further analysis and beam transport codes via the models used and their corresponding fit parameters. The data types and file structures used are presented in the documentation of the corresponding repository.^{18}

## ACKNOWLEDGMENTS

This work was funded by HMWK through the LOEWE center “Nuclear Photonics.”

The results presented here are based on the experiment P218, which was performed at the PHELIX facility at the GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany) in the frame of FAIR Phase-0.

A special thanks goes to Barbara Endl for her help and work on the Python implementation of the pyRES code. The authors would also like to thank Christian Brabetz, Yun Ouedraogo, and Paul Neumayer for fruitful discussions.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts of interest to disclose.

### Author Contributions

**Benedikt Schmitz**: Conceptualization (lead); Formal analysis (equal); Validation (equal); Visualization (equal); Writing – original draft (lead); Writing – review & editing (lead). **Martin Metternich**: Data curation (equal); Methodology (equal); Software (supporting); Validation (equal); Writing – review & editing (equal). **Oliver Boine-Frankenheim**: Funding acquisition (lead); Project administration (lead); Supervision (lead).

## DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding author upon reasonable request.

### APPENDIX A: ALGORITHM DETAILS

This section illustrates both branches of the segmentation algorithm presented in Fig. 2. The first part deals with the HSV segmentation and displays the details; the second part then illustrates the difference for the MeV segmentation.

#### 1. HSV segmentation

HSV segmentation is based on the conversion from RGB to the HSV color space representation.^{36} The conversion between both systems is a straightforward change of representation, here performed by the OpenCV function cvtColor.^{46} Displaying the hue results in the image representation in Fig. 9(a). The histogram of the hue coordinate yields the example plots [Figs. 9(b)–9(e)], which are good representatives for the larger number of stacks. HD-v2 layers, due to their lower sensitivity, usually show a large background peak in the hue and can be segmented once this peak is identified. The EBT3 layers, on the other hand, have a higher sensitivity and can therefore separate the background from smaller, less prominent electron peaks in the histogram; these are normally cut as well due to peak prominence conditions on the stack itself. It has been found that this works well if for HD layers the first peak is cut and for EBT3 layers only the last major peak is kept.
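A minimal sketch of the hue-histogram segmentation is given below. It uses the standard library's `colorsys` instead of OpenCV, and `cut_after_background_peak` is a simplified stand-in for the peak prominence logic; the pixel values and bin counts are illustrative only.

```python
import colorsys

def hue_histogram(pixels, bins=64):
    """Histogram of the hue channel for (r, g, b) pixels with values 0..255.
    colorsys returns hue in [0, 1); OpenCV's cvtColor would map it to 0..179."""
    counts = [0] * bins
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        counts[min(int(h * bins), bins - 1)] += 1
    return counts

def cut_after_background_peak(counts):
    """Toy version of the peak-based cut: locate the dominant (background)
    peak and place the threshold at the first near-empty bin after it."""
    peak = max(range(len(counts)), key=lambda i: counts[i])
    for i in range(peak + 1, len(counts)):
        if counts[i] <= counts[peak] // 20:
            return i
    return len(counts) - 1

# A tiny synthetic "scan": mostly greenish background plus a bluish signal spot.
background = [(40, 200, 60)] * 900    # hue around 0.35
signal = [(30, 30, 220)] * 100        # hue around 0.67
hist = hue_histogram(background + signal, bins=64)
cut = cut_after_background_peak(hist)  # bins above cut hold the signal
```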

This can be problematic if the layer itself is fully irradiated and burned/blackened. On the other hand, if a layer is purely black/saturated, then this layer might not be suitable for the evaluation anyway.

#### 2. MeV segmentation

The MeV segmentation converts the image array first into the corresponding energy values and follows the same approach as the HSV segmentation. The conversion from RGB to MeV follows the calibration given in Appendix C. The values are displayed in a histogram, and the peak prominences are used to find segmentation thresholds. As mentioned in Sec. IV, this might not work for HD-v2 layers with a low irradiation signal. This is due to the merger of peaks. If the irradiation is strong enough to allow a separation between background and other radiation (e.g., electrons, ions, gamma), several peaks can be seen inside the histogram. Otherwise, if the sensitivity is not high enough for a significant OD change, the threshold gets overestimated.

Applying it to EBT3 layers allows us to separate three peaks and therefore to obtain estimates for the ion signal and the complementary electron and gamma signals.

### APPENDIX B: EMPIRICAL MODELS FOR THE ANGULAR DISTRIBUTION

The full reproduction of the ion beam makes it necessary to interpolate the extracted half envelope angles. This section displays and discusses the mentioned empirical models. The procedure for actual resampling of the spectrum is shown. The behavior of the different interpolation functions is displayed in Fig. 8. The conic beam resulting from the laminar flow is sketched in Fig. 10 to indicate the used angles and quantities.

#### 1. Empirical function for the angle dependency

The empirical functions are described here. Divergence data from the literature generally behave concavely with energy. A quadratic function is a good approximation for this as long as it is only used for interpolation and not for extrapolation.

##### a. Quadratic function model

The simplest function to interpolate the experimental angle values is a quadratic polynomial,

The limits are important for the models: in the lower limit, the function approaches a plateau and stays there,

The upper limit is unbounded and does not necessarily converge. Furthermore, an extrapolation to infinity could be performed at both limits; since the spectra are limited in energy, such an extrapolation is nonphysical and therefore not useful.
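Fitting the quadratic interpolation model to measured half-envelope angles reduces to an ordinary least squares problem. The sketch below solves the 3 × 3 normal equations directly on synthetic, illustrative data; in practice `numpy.polyfit(energies, angles, 2)` does the same in one call.

```python
def fit_quadratic(energies, angles):
    """Least-squares fit of theta(E) = a*E**2 + b*E + c via the normal
    equations, solved with Gauss-Jordan elimination; returns (a, b, c)."""
    n = len(energies)
    S = [sum(E ** k for E in energies) for k in range(5)]          # power sums
    T = [sum((E ** k) * t for E, t in zip(energies, angles)) for k in range(3)]
    A = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], n]]
    y = [T[2], T[1], T[0]]
    for i in range(3):                       # Gauss-Jordan on the 3x3 system
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        y[i] /= piv
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                y[j] -= f * y[i]
    return y

# Synthetic concave divergence data: theta = -0.05*E^2 + 0.5*E + 20 (illustrative)
Es = [2.0, 6.0, 10.0, 14.0, 18.0]
thetas = [-0.05 * E ** 2 + 0.5 * E + 20.0 for E in Es]
a, b, c = fit_quadratic(Es, thetas)
```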

##### b. Partial function model

An improvement over the quadratic function is possible if the boundaries are taken into account. Laser accelerated ion beams from the TNSA mechanism have a specific cut-off energy, and RCFs can only measure particles with an energy equal to or higher than the Bragg peak energy of the first active layer. The definition interval is therefore limited, and it makes sense to force the function values to zero at the upper and lower interval limits. An Ansatz for the angle dependency is

The parameter *b* is the maximal energy of the ion beam, while *d* is the lower energy cut-off; the function is thus scaled from *d* to the cut-off energy. The parameters *a* and *c* scale the curvature accordingly. For low energies, the distribution goes toward 0, indicating a finite beam size; for high energies, it goes to zero as well, limiting the beam's maximum energy.

The model is valid on the interval *E* ∈ [0, *b*], where *b* corresponds to the maximal ion energy in the beam.

#### 2. Particle distribution function in a laminar flow

Assuming a laminar distribution following the conic assumptions in Sec. IV A, the angular dependent resampling of the beam proceeds as follows:

Step 1: Start from the virtual source with size 0.

Step 2: Sample *φ* from 0° to 360°.

Step 3: Sample *r* from 0 to 1.

Step 4: Rescale *r* to √*r* to keep the radius scaling homogeneous.

Step 5: Project velocities using trigonometric relations.

The assumptions are that the particles are ballistic, not subject to further dynamics, and start from a virtual source of size 0. The trajectories then do not intersect and resemble a laminar flow.
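The five resampling steps can be sketched as follows. The half opening angle and sample count are illustrative, and the √r rescale of step 4 is what produces a homogeneous fill of the cone's cross section.

```python
import math
import random

def sample_conic_beam(n, theta_half_deg, seed=0):
    """Sample n ballistic trajectories of a laminar conic beam emitted from
    a point-like virtual source (step 1). Returns unit direction vectors
    (vx, vy, vz) with z along the beam axis."""
    rng = random.Random(seed)
    theta_half = math.radians(theta_half_deg)
    dirs = []
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * math.pi)   # step 2: azimuth
        r = rng.uniform(0.0, 1.0)               # step 3: radial sample
        r = math.sqrt(r)                        # step 4: homogeneous disk fill
        theta = r * theta_half                  # map radius to polar angle
        dirs.append((math.sin(theta) * math.cos(phi),   # step 5: projection
                     math.sin(theta) * math.sin(phi),
                     math.cos(theta)))
    return dirs

dirs = sample_conic_beam(1000, theta_half_deg=20.0)
```

Without the √r rescale, samples would cluster toward the beam axis; with it, the cone cross section is filled uniformly, matching the laminar flow assumption.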

### APPENDIX C: RCF CALIBRATION

This section provides the data for the conversion of the experimental data evaluated. The setup is based on the irradiation of a stack using protons from a conventional cyclotron. A short description of the measurements for the calibration data can be found in the GSI report.^{47} A more detailed discussion is in preparation for publication.

The calibration function is given by

where Ξ indicates the color value, *D* is the registered dose, and *a*, *b*, and *c* are the fit parameters, which are displayed in Table IV. *b* is chosen to be 0.

| | Color channel | a | c |
|---|---|---|---|
| HD-v2 | R | 0.544 ± 0.004 | 17.9 ± 0.8 |
| | G | 0.535 ± 0.005 | 72 ± 4 |
| | B | 0.411 ± 0.006 | 160 ± 10 |
| EBT3 | R | 0.512 ± 0.012 | 3.6 ± 0.3 |
| | G | 0.60 ± 0.03 | 11.9 ± 1.2 |
| | B | 0.38 ± 0.06 | 31 ± 8 |


The color values are determined with a calibrated scanner. Using the calibration allows different scanners to be used while ensuring comparability between the devices.

The result of the fit and the underlying data are also presented in Fig. 11. Using the parameters of the RCFs used,^{28} corresponding scaling factors between dose and MeV/px are calculated. These factors are part of the implemented tool.^{18} It is also important to note that all RGB components are taken into account to maximize the dynamic range of the RCFs; details about this are to be published.
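The scaling between dose and deposited energy per pixel follows from the definition of the gray (1 Gy = 1 J/kg) and the mass of the active volume behind one pixel. The layer thickness, density, and scan resolution below are illustrative placeholders, not the calibrated values shipped with the tool.

```python
J_TO_MEV = 1.0 / 1.602176634e-13   # 1 MeV = 1.602...e-13 J

def dose_to_mev_per_px(dose_gy, layer_um=28.0, density_g_cm3=1.2, dpi=300):
    """Energy (MeV) deposited in the active volume behind one scanner pixel.
    layer_um, density_g_cm3, and dpi are placeholder values for a film's
    active layer thickness, its density, and the scan resolution."""
    px_m = 0.0254 / dpi                          # pixel edge length in m
    volume_m3 = px_m * px_m * layer_um * 1e-6    # active volume behind pixel
    mass_kg = density_g_cm3 * 1000.0 * volume_m3 # density converted to kg/m^3
    return dose_gy * mass_kg * J_TO_MEV          # 1 Gy = 1 J/kg

e_px = dose_to_mev_per_px(10.0)   # deposited energy for 10 Gy on one pixel
```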

### APPENDIX D: USER BIAS FOR RCF EVALUATION

The user bias mentioned as motivation in the main part of the article was investigated as part of a bachelor’s thesis written at TEMF, TU Darmstadt in 2020 and 2021.^{48} The goal of this section is to give more information on the user bias study performed in the scope of the mentioned thesis.

#### 1. Setup and context

The study on the user bias was needed to evaluate the deviations in the signal segmentation performed by humans and to estimate the algorithm's quality.

Four different experts (persons A–D) on RCFs participated in this study. Each expert received two different data sets (one shot each) to evaluate. Results of the experts' segmentations are presented in Fig. 12. The differences between the segmentations decrease with the penetration depth; the user bias therefore diminishes for later layers.
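The deviation between expert segmentations can be quantified with a standard overlap measure. The Dice coefficient below is one common choice (not necessarily the metric used in the cited thesis), shown on two hypothetical one-row masks.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap of two binary masks (equal-length sequences of 0/1):
    1.0 for identical segmentations, 0.0 for disjoint ones."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Two hypothetical expert segmentations of the same 12-pixel row,
# shifted by one pixel against each other:
expert_A = [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
expert_B = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0]
d = dice_coefficient(expert_A, expert_B)   # 0.8
```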

#### 2. Results and comparison

The thesis tested different methods for segmentation of the beam. The threshold segmentation was one of these.

Some example results of the thesis are displayed in Fig. 13. Two full RCF stacks from an experiment at the PHELIX laser at GSI, Darmstadt were evaluated, a total of 15 films. Only two representatives of the data sets are displayed: Fig. 13(a) shows the original scan of image No. 12, a later HD layer in one of the data sets used for this study. Some of the experts did not evaluate this layer because they did not identify a signal on it. If the data are taken into account, as done by users A and C and by the algorithm, enhancing the scaling, converting to hue, and applying the filter methods made it possible to identify an existing signal. These results are displayed in Figs. 13(b)–13(d).

For a layer with more visible contrast, as presented in Fig. 13(e), the results do not deviate as much. It is visible that the segmentations by the algorithm and by the users differ; these differences are indicated by the blue and red color coding throughout the figure.

One can therefore say that the algorithm and the different segmentation methods tested have comparable quality, and that the results are similar to the experts' results. An algorithmic approach then allows us to reduce the fluctuations caused by individual evaluators and therefore increases the comparability of the data.
