Penumbral imaging is a technique used in plasma diagnostics in which a radiation source shines through one or more large apertures onto a detector. To interpret a penumbral image, one must reconstruct it to recover the original source. The inferred source always has some error due to noise in the image and uncertainty in the instrument geometry. Interpreting the inferred source thus requires quantification of that inference’s uncertainty. Markov chain Monte Carlo algorithms have been used to quantify uncertainty for similar problems but have never been used for the inference of the shape of an image. Because of this, there are no commonly accepted ways of visualizing uncertainty in two-dimensional data. This paper demonstrates the application of the Hamiltonian Monte Carlo algorithm to the reconstruction of penumbral images of fusion implosions and presents ways to visualize the uncertainty in the reconstructed source. This methodology enables more rigorous analysis of penumbral images than has been done in the past.

## I. INTRODUCTION

Penumbral imaging is a form of coded-aperture imaging that is commonly used in plasma diagnostics. A radiation source with an unknown size and shape, such as an inertial confinement fusion (ICF) implosion, projects radiation through one or more circular apertures onto a detector, such as an image plate, which records an image of the source. In the field of ICF, penumbral imaging is used to image emission of protons,^{1} deuterons,^{2,3} neutrons,^{4} x rays,^{5,6} and gamma-rays.^{7}

Penumbral imaging is distinguished from pinhole imaging in that the apertures are larger than the source. This increases the quantity of radiation measured and thus improves the quality of the image but also makes the image more difficult to interpret. Unlike a pinhole image, it is not a simple picture of the source. Instead, it is a *penumbral image*—a convolution of the source with a point-spread function—that must be reconstructed. An example of a penumbral image is shown in Fig. 1. It looks like an image of the aperture, and information about the source is encoded in the circular edge. There are multiple ways to reconstruct a penumbral image, including Fourier transform based methods, such as Wiener deconvolution,^{5} computed tomography methods, such as filtered backprojection,^{1,6} and iterative maximum-likelihood methods, such as the Richardson–Lucy algorithm.^{8,9}

The recorded penumbral image will always have errors due to random noise in the image and uncertainty in the point-spread function, both of which propagate into the inferred source. There are ways to reduce the random error, but they all require tuning to discriminate noise from real features in the data. For example, Richardson–Lucy can be initialized with a uniform source and terminated after a finite number of iterations, but the user must decide at which iteration to terminate.^{3} With too much noise reduction, some features of the true source may be lost in the inferred source; with too little noise reduction, some features may appear in the inferred source that were not present in the true source. This is illustrated in Fig. 2.

Monte Carlo methods are often used to quantify the uncertainty in these reconstructions. Two such methods are bootstrapping^{10} and adding noise to the image.^{11} In both, a set of penumbral images is generated, all based on the data but with some variation, and each is reconstructed individually. Variations in the reconstructed sources quantify the error in the inference due to noise in the image and due to random variations in the aperture geometries. However, neither method accounts for systematic errors due to the reconstruction algorithm.

An alternative method is the application of Bayesian statistics, in which the true value of the parameter being inferred is treated as a random variable with a probability distribution. In the context of penumbral imaging, this means that rather than varying the image and seeing how it affects the inferred source, the image is kept constant and the source is varied in such a way that it remains consistent with the image. This framework allows random and systematic uncertainties to be quantified and assigned probability distributions.

In problems with many degrees of freedom such as image reconstruction, Bayesian statistics is usually implemented through Markov chain Monte Carlo methods, which allow a many-dimensional probability distribution to be sampled. Such methods have been used with “rolled edge” penumbral imaging to quantify the size uncertainty of an x-ray backlighter.^{12} However, they are rarely used for imaging problems in general and have never been used to quantify uncertainty in the *shape* of a reconstructed source. One question that arises is how to visualize the resulting distribution of sources, as traditional error bars are not possible in two-dimensional data.

This paper demonstrates the use of Markov chain Monte Carlo methods to quantify uncertainty in the reconstruction of penumbral images of ICF implosions. Various ways of visualizing the uncertainty in the source are presented.

## II. UNCERTAINTY QUANTIFICATION

The goal of the analysis is to infer the set of source pixel values {*s*_{ij}} = **s** given a set of observed image pixel values {*d*_{kl}} = **d**. By Bayes' theorem, the posterior probability distribution of the source is

$$p(\mathbf{s}|\mathbf{d}) = C_P\,p(\mathbf{s})\,p(\mathbf{d}|\mathbf{s}).$$

Here, *C*_{P} is a normalization constant set such that the probability distribution integrates to 1. For most analyses, it need not be calculated. All that is needed are methods to calculate the prior probability *p*(**s**) and the likelihood *p*(**d**|**s**) and to sample the resulting distribution *p*(**s**|**d**).

### A. Likelihood

To calculate the likelihood *p*(**d**|**s**), the relationship between the image and source is expressed linearly as

$$\mathbf{d} = \mathbf{A}\mathbf{s} + d_B + \boldsymbol{\varepsilon},$$

where **A** represents convolution with the point-spread function, *d*_{B} is the image background, and **ɛ** is the pixel noise. This does not account for the spatial resolution of the image plate.

The background arises primarily from neutrons and gamma-rays from the implosion that interact with the detector. Since, unlike the x rays being imaged, they do not interact with the aperture array, their fluence on the detector is uniform, and their effect on the image can be represented by a single scalar.

The elements of the pixel noise *ɛ*_{kl} are dominated by photon counting statistics, so a Poisson likelihood function would be appropriate. In practice, it is difficult to calculate the absolute number of photons detected in each pixel, so this is challenging to implement. Instead, the Poisson distribution is approximated by a Gaussian distribution whose standard deviation *σ*_{kl} scales with the square root of the pixel value. The magnitude of the noise is estimated from the standard deviation of pixel values in the flat center of the image, which is expressed mathematically as

$$\sigma_{kl} = \sigma_c\sqrt{d_{kl}/\langle d\rangle_c},$$

where *σ*_{c} and ⟨*d*⟩_{c} are the standard deviation and mean of the pixel values in the flat center of the image.
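As an illustration of this noise model, the following numpy sketch (not the authors' code; the flat-center mask and the normalization convention are assumptions) calibrates the noise scale from the flat center of the image and evaluates the resulting Gaussian log-likelihood:

```python
import numpy as np

def gaussian_log_likelihood(d, model, flat_mask):
    """Approximate the Poisson likelihood with a Gaussian whose standard
    deviation scales with the square root of the modeled pixel value.

    d         : observed image (2D array)
    model     : predicted image, i.e. A@s + d_B (2D array, positive)
    flat_mask : boolean mask selecting the flat center of the image,
                used to calibrate the absolute noise magnitude
    """
    # Calibrate the noise from the flat center: sigma there is the
    # empirical standard deviation, and sigma elsewhere scales as sqrt(d).
    sigma_c = np.std(d[flat_mask])
    mean_c = np.mean(d[flat_mask])
    sigma = sigma_c * np.sqrt(np.clip(model, 1e-12, None) / mean_c)
    # Gaussian log-density summed over pixels; the log(sigma) term is kept
    # so that models with different noise fields compare fairly
    return np.sum(-0.5 * ((d - model) / sigma) ** 2 - np.log(sigma))
```

A model that matches the data should score higher than one that does not, which makes this term directly usable inside the posterior evaluation.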

### B. Prior

For the background *d*_{B}, an uninformative gamma distribution is chosen. This distribution forbids negative values and has a long tail that permits very large values. For the source **s**, the prior penalizes large gradients between adjacent pixels, with an overall scale set by a hyperparameter *α*, as proposed by Fowler *et al.*^{12}

All source pixel values are constrained to be non-negative, to ensure that the results are physically possible. All pixels along the edge of the source domain are also fixed to zero. This ensures that the source prior’s covariance matrix is nonsingular. These constraints can be imposed mathematically by setting the prior to zero anywhere they are violated, but, computationally, it is easier to impose them through the parameterization of the source (for example, by sampling the logarithms of the pixel values rather than the pixel values themselves).
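The log-parameterization mentioned above can be sketched as follows (a minimal numpy example; the array layout and helper name are hypothetical, not the paper's implementation):

```python
import numpy as np

def build_source(log_s_free, n):
    """Map unconstrained sampler parameters to a valid source image.

    log_s_free : flat array of n*n unconstrained values (the logarithms
                 of the free pixel values)
    n          : number of free pixels per side

    Returns an (n+2) x (n+2) source whose interior is strictly positive
    and whose border pixels are fixed to zero.
    """
    s = np.zeros((n + 2, n + 2))
    # Exponentiating guarantees positivity no matter where the sampler moves,
    # so the non-negativity constraint never has to be enforced explicitly.
    s[1:-1, 1:-1] = np.exp(log_s_free).reshape(n, n)
    return s
```

Because the sampler only ever sees the unconstrained logarithms, no posterior probability mass is wasted on unphysical negative sources.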

The hyperparameter *α* could be set manually, but this would result in the same unquantified systematic uncertainty that is present in existing reconstruction algorithms. Instead, *α* is treated as an unknown variable with its own prior probability distribution. A ballpark estimate is obtained by estimating the mean value of **s** from the intensity in the flat center of the image and noting that, by dimensional analysis, *α* should scale inversely with it. A wide log-Gaussian distribution centered on that estimate is then chosen to permit a wide range of *α*s over multiple orders of magnitude; in what follows, *n*^{2} denotes the number of free pixels in the source.

In this scheme, the values of *α* that are too small are penalized because the solution space is so big and only a small part of it fits the data. The values of *α* that are too big are penalized because it cannot fit the data without incurring large gradient penalties. Thus, the posterior distribution will naturally cluster around the value of *α* that reduces the solution space as much as possible without excluding the solutions that fit the data.

Note that the constant *C*_{s} cannot be ignored because it varies with *α*, so its dependence on *α* must be quantified. Since *p*(**s**) is an *n*^{2}-dimensional Gaussian distribution where every element of the covariance matrix is inversely proportional to *α*, it can be shown that $C_s \propto \alpha^{n^2/2}$. This factor is used in the calculation of *p*(**s**|*α*) in place of *C*_{s}.
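The *α*-dependence of this normalization follows from the standard multivariate Gaussian normalization formula. Writing the prior's precision (inverse-covariance) matrix as *α*Λ for some fixed *n*^{2} × *n*^{2} matrix Λ:

```latex
p(\mathbf{s}|\alpha)
  = C_s \exp\!\left(-\tfrac{1}{2}\,\mathbf{s}^{\mathsf T}(\alpha\Lambda)\,\mathbf{s}\right),
\qquad
C_s = \frac{\det(\alpha\Lambda)^{1/2}}{(2\pi)^{n^2/2}}
    = \frac{\alpha^{n^2/2}\,\det(\Lambda)^{1/2}}{(2\pi)^{n^2/2}}
\;\propto\; \alpha^{n^2/2},
```

since $\det(\alpha\Lambda) = \alpha^{n^2}\det(\Lambda)$ for an *n*^{2} × *n*^{2} matrix.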

In total, the unknown variables are the source **s**, the background *d*_{B}, and the hyperparameter *α*. The full joint posterior probability distribution is calculated as follows:

$$p(\mathbf{s}, d_B, \alpha\,|\,\mathbf{d}) = C_P\,p(\alpha)\,p(d_B)\,p(\mathbf{s}|\alpha)\,p(\mathbf{d}|\mathbf{s}, d_B).$$
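To make the structure of this joint posterior concrete, here is a numpy sketch of the unnormalized log-posterior (the hyperprior densities here are illustrative stand-ins and the helper names are hypothetical, not the paper's implementation):

```python
import numpy as np

def log_posterior(s, d_B, alpha, d, A, sigma, grad_penalty):
    """Unnormalized joint log-posterior for the source, background, and
    gradient-penalty hyperparameter (illustrative sketch only).

    s            : source pixel values, flattened (non-negative)
    d_B          : scalar image background
    alpha        : gradient-penalty hyperparameter
    d, sigma     : observed image and per-pixel noise level, flattened
    A            : point-spread matrix mapping source to image
    grad_penalty : function returning the sum of squared gradients of s
    """
    n2 = s.size
    model = A @ s + d_B
    # Gaussian likelihood p(d|s, d_B)
    log_like = -0.5 * np.sum(((d - model) / sigma) ** 2)
    # Gradient-penalty source prior p(s|alpha), including the alpha^(n^2/2)
    # normalization factor so that alpha itself can be sampled consistently
    log_prior_s = 0.5 * n2 * np.log(alpha) - 0.5 * alpha * grad_penalty(s)
    # Stand-in hyperpriors: log-Gaussian for alpha, gamma-like for d_B
    log_prior_alpha = -0.5 * np.log(alpha) ** 2
    log_prior_dB = -d_B + np.log(max(d_B, 1e-300))
    return log_like + log_prior_s + log_prior_alpha + log_prior_dB
```

A gradient-based sampler needs only this unnormalized log-density (and its derivatives, which PyMC obtains automatically).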

The point-spread function **A** can be parameterized in terms of the aperture radius and shape and thus also be treated as a random variable and assigned a prior. This would enable the quantification of systematic error due to the uncertainty in the point-spread function and is the subject of future work.

### C. Sampling

For the starting point of the Markov chain, the parameters *α* and *d*_{B} are set to the medians of their respective prior distributions. The free pixels of the source are initialized by drawing from a log-uniform distribution between $\langle d\rangle n^{-2}e^{-1}$ and $\langle d\rangle n^{-2}e$. This ensures that the sum of the source pixels is approximately correct while providing no shape information.
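A sketch of this initialization (numpy; the exact bounds are an assumption about the scheme described above, not the paper's code):

```python
import numpy as np

def initialize_source(d_mean, n, rng):
    """Draw initial free source pixels from a log-uniform distribution
    centered (in log space) on d_mean / n**2, spanning a factor of e in
    either direction, so the total intensity starts near the estimate
    while carrying no shape information.
    """
    center = d_mean / n**2
    # Uniform in log space <=> log-uniform in value
    log_s = rng.uniform(np.log(center) - 1, np.log(center) + 1, size=n * n)
    return np.exp(log_s)
```

Starting every chain from a shapeless but correctly scaled source avoids biasing the sampler toward any particular morphology.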

The Markov chain is executed using PyMC,^{13} an open-source Bayesian statistics library for Python. PyMC implements the no-U-turn sampler,^{14} a self-tuning implementation of the Hamiltonian Monte Carlo algorithm.^{15} It automatically calculates and differentiates the unnormalized posterior probability density and samples from the posterior distribution.

Eight Markov chains are run in parallel for 3000 iterations each, consuming a total of 5.5 CPU-h. The first 2000 iterations are used to tune the step size and are discarded after the sampling is complete. That leaves a total of 1000 posterior samples in each Markov chain.

## III. UNCERTAINTY VISUALIZATION

Here, an example is shown of this methodology applied to hard x-ray data collected from the Omega Laser Facility.^{16} The source was a laser direct-drive ICF implosion. The target was a DT gas-filled spherical shell of deuterated plastic with a diameter of 870 *μ*m. It was offset 100 *μ*m from the target chamber center, roughly in the direction of the imager, to induce an asymmetry. The detector was an image plate filtered with 10 *μ*m of tantalum and 50 *μ*m of aluminum, which captured seven nominally identical projected images from apertures 300 *μ*m in diameter. These were aligned by fitting their centroids to the known shape of the aperture array, resampled to a common pixel grid, and summed. The result was analyzed as a single 259 × 259-pixel image. The resulting source has 97 × 97 free pixels, each 2 × 2 *μ*m^{2} in size.

The simplest way to interpret the results is to examine the individual sources in the Markov chains. Figure 3 shows nine such randomly chosen sources. This provides an intuitive sense of the significant features and plausible variations in the source. For instance, it can be noted that all nine sources exhibit a strong skewness, with the intensity peak up and to the right of the centroid of the 17% contour.

For most practical interpretations, it is preferable to examine a single representative source and the magnitude of its plausible error. For zero- or one-dimensional data, this can be done using error bars or credible bands. This becomes more difficult for two-dimensional images.

This can be addressed by using one-dimensional representations of the image, such as lineouts. Figure 4 shows a lineout through the normalized source along the line *x* = 0. Specifically, a pointwise equal-tailed 90% credible band is shown, which characterizes the range of plausible locations of the lineout. At each *x*-value, the distribution of possible *y*-values is calculated, and shading is applied between that distribution’s fifth percentile and 95th percentile. In other words, shading is applied to every point on the plot that is neither 95% likely to be below the true lineout, nor 95% likely to be above the true lineout. A lineout of a single representative sample from the Markov chains is also shown.
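The pointwise band can be computed directly from the stack of posterior lineouts with a percentile over the sample axis (a minimal numpy sketch):

```python
import numpy as np

def credible_band(lineouts, level=0.90):
    """Pointwise equal-tailed credible band for a stack of lineouts.

    lineouts : (n_samples, n_points) array, one posterior lineout per row
    level    : total credible mass of the band (0.90 -> 5th-95th percentile)

    Returns (lower, upper) arrays of length n_points.
    """
    tail = 100 * (1 - level) / 2
    # Percentiles are taken independently at each point along the lineout,
    # which is what makes the band "pointwise" rather than simultaneous.
    lower = np.percentile(lineouts, tail, axis=0)
    upper = np.percentile(lineouts, 100 - tail, axis=0)
    return lower, upper
```

Note that a pointwise band does not bound entire lineouts: any individual posterior sample may wander outside it at some points.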

Another popular one-dimensional representation is a contour. In particular, the contour at 17% of the peak intensity is a standard way to characterize images of ICF implosions. A pointwise equal-tailed 90% credible band can be constructed for a contour in a manner analogous to the lineout. Shading is applied to every point that is neither 95% likely to be within the true contour nor 95% likely to be outside the true contour. The resulting shaded region characterizes the range of plausible locations of the contour. Figure 5 shows credible bands for the 17%-, 50%-, and 83%-contours of the normalized source.
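The same pointwise logic applies to contour membership: for each sample, test which pixels lie inside that sample's 17%-of-peak contour, then shade the pixels whose membership probability falls between the two tails (numpy sketch):

```python
import numpy as np

def contour_band(sources, level=0.17, credibility=0.90):
    """Shade pixels whose membership in the `level`-contour is uncertain.

    sources : (n_samples, ny, nx) stack of posterior source images

    Returns a boolean mask: True where the pixel is inside the contour in
    more than 5% but fewer than 95% of the samples (for credibility=0.90).
    """
    # For each sample, which pixels lie inside its own level-of-peak contour
    peaks = sources.max(axis=(1, 2), keepdims=True)
    inside = sources >= level * peaks
    frac = inside.mean(axis=0)  # per-pixel probability of being inside
    tail = (1 - credibility) / 2
    return (frac > tail) & (frac < 1 - tail)
```

Pixels that are almost always inside or almost always outside the contour are left unshaded; the shaded annulus is the credible band for the contour's location.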

A more direct way to visualize the posterior distribution is to extract important scalars and present their possible values as a histogram. Figure 6 shows the result of this applied to the source's 17% contour's mean radius and its relative mode-2 asymmetry, both of which are commonly used to quantitatively describe images of ICF implosions.
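One plausible way to extract such scalars is sketched below in numpy. Defining the relative mode-2 asymmetry as the amplitude of the cos 2*θ* Fourier component of the contour radius divided by the mean radius is an assumption here, not necessarily the paper's exact definition, and the ray-sampling scheme is illustrative:

```python
import numpy as np

def contour_radius(source, level=0.17, n_theta=360, n_r=400):
    """Radius of the level-of-peak contour along rays from the image
    center, using nearest-neighbor sampling of the radial profile."""
    ny, nx = source.shape
    cy, cx = (ny - 1) / 2, (nx - 1) / 2
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(0, min(cy, cx), n_r)
    threshold = level * source.max()
    radii = np.empty(n_theta)
    for i, th in enumerate(thetas):
        # Sample the intensity along the ray and find where it first
        # drops below the contour threshold
        ys = np.clip(np.round(cy + rs * np.sin(th)).astype(int), 0, ny - 1)
        xs = np.clip(np.round(cx + rs * np.cos(th)).astype(int), 0, nx - 1)
        below = np.nonzero(source[ys, xs] < threshold)[0]
        radii[i] = rs[below[0]] if below.size else rs[-1]
    return thetas, radii

def mean_radius_and_mode2(source, level=0.17):
    """Mean contour radius P0 and relative mode-2 asymmetry P2/P0."""
    thetas, radii = contour_radius(source, level)
    p0 = radii.mean()
    # Amplitude of the cos/sin 2-theta Fourier component of r(theta)
    p2 = 2 * np.abs(np.mean(radii * np.exp(-2j * thetas)))
    return p0, p2 / p0
```

Applying these functions to every posterior sample yields the distributions of the two scalars, which can then be histogrammed as in Fig. 6.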

## IV. CONCLUSIONS

Penumbral imaging is a technique often used in plasma diagnostics in which a radiation source projects radiation through one or more apertures onto a detector, where the apertures are larger than the source.^{5} To interpret a penumbral image, one must first use a reconstruction algorithm to recover the original source. The inferred source always has some error due to noise in the image and uncertainty in the point-spread function. There are methods to reduce this random error, but they are challenging to tune and they introduce their own systematic errors. Interpreting the inferred source thus requires quantification of that inference’s uncertainty. However, existing uncertainty quantification methods do not account for systematic error from the algorithm.^{10,11}

Bayesian Markov chain Monte Carlo algorithms such as Hamiltonian Monte Carlo^{15} have been used to quantify the uncertainty in reconstructed penumbral images of x-ray backlighter sources,^{12} but have never been used for penumbral images of ICF implosions. In particular, previous applications of Markov chain Monte Carlo have only quantified uncertainty in source sizes, not in source shapes. As a result, there are no common means of visualizing uncertainty in image data.

This paper demonstrates the application of Hamiltonian Monte Carlo to the reconstruction of penumbral images of ICF implosions. Multiple methods to visualize the uncertainty in the reconstructed source are presented, including credible bands of lineouts and contours and histograms of important scalars. This methodology allows for more rigorous quantitative interpretation of all coded-aperture imaging.

## ACKNOWLEDGMENTS

This work was supported, in part, by the Laboratory for Laser Energetics under Contract No. 417532G/UR FAO GR510907, the U.S. Department of Energy NNSA MIT Center-of-Excellence under Contract No. DE-NA0003868, and the NNSA Laboratory Residency Graduate Fellowship under Contract No. DE-NA0003960.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**J. H. Kunimune**: Conceptualization (equal); Investigation (lead). **P. V. Heuer**: Investigation (supporting). **B. L. Reichelt**: Conceptualization (equal). **T. M. Johnson**: Resources (lead). **J. A. Frenje**: Supervision (lead).

## DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding author upon reasonable request.