Many charged particle imaging measurements rely on the inverse Abel transform (or related methods) to reconstruct three-dimensional (3D) photoproduct distributions from a single two-dimensional (2D) projection image. This technique allows for both energy- and angle-resolved information to be recorded in a relatively inexpensive experimental setup, and its use is now widespread within the field of photochemical dynamics. There are restrictions, however, as cylindrical symmetry constraints on the overall form of the distribution mean that it can only be used with a limited range of laser polarization geometries. The more general problem of reconstructing arbitrary 3D distributions from a single 2D projection remains open. Here, we demonstrate how artificial neural networks can be used as a replacement for the inverse Abel transform and—more importantly—how they can be used to directly “reinflate” 2D projections into their original 3D distributions, even in cases where no cylindrical symmetry is present. This is subject to the simulation of appropriate training data based on known analytical expressions describing the general functional form of the overall anisotropy. Using both simulated and real experimental data, we show how our arbitrary image reinflation (AIR) neural network can be utilized for a range of different examples, potentially offering a simple and flexible alternative to more expensive and complicated 3D imaging techniques.

Over the past three decades, charged particle imaging has become a widely used experimental approach within the field of gas phase chemical dynamics.1 Recording photoions/electrons on a two-dimensional (2D) position sensitive detector following a photoionization, photodissociation, or molecular scattering process has the multiplexed advantage of obtaining a complete description of both the relevant velocity and angular distributions simultaneously. The early pioneering techniques of Chandler and Houston2 were quickly adopted and refined, most notably with the introduction of velocity-map imaging (VMI) by Eppink and Parker.3 Further advances, such as ion counting/centroiding methods,4,5 slice imaging strategies,6–9 tomographic reconstructions using multiple image projections,10–13 and the use of increasingly sophisticated detectors,14–18 have subsequently led to greatly improved insights in many areas of physical chemistry. Although some of these imaging variants allow the full three-dimensional (3D) particle distribution I(x, y, z)—or a sub-section of it—to be recorded directly, most measurements instead rely on imaging a single 2D geometric projection of the overall distribution, P(y, z),

P(y,z) = \int_{-\infty}^{\infty} I(x,y,z)\,dx.
(1)

A schematic of this projection is illustrated in Fig. 1, along with a definition of the relevant coordinate system. While this approach is much simpler and quicker than many alternatives and requires only a relatively inexpensive detector, strict symmetry conditions impose limitations on its range of applications: Specifically, only distributions with an axis of cylindrical symmetry lying parallel to the detection/imaging plane can be fully recovered analytically from a single projection, i.e., when the 3D distribution has no explicit dependence on the angle \phi = \tan^{-1}(x/z).

FIG. 1.

Schematic illustrating the projection of various 3D distributions onto a 2D plane and the physical origin of the Abel transform. (Upper panel) The center image shows half of an example 3D photofragment distribution I(x, y, z). In typical imaging experiments, the 2D projection of the full 3D distribution along the x axis P(y, z) is recorded (right image). For distributions which have cylindrical symmetry about the y axis [i.e., I(x, y, z) has no explicit ϕ-dependence], the central slice of the full distribution I(y, z) (left image) can be recovered from the projection via the inverse Abel transform. See the main text for additional details. (Lower panel) If I(x, y, z) has a clear ϕ-dependence (center image), then P(y, z) and I(y, z) (right and left images, respectively) are no longer related by the Abel integral. Therefore, neither I(y, z) nor I(x, y, z) can be determined analytically or numerically from P(y, z) alone using conventional image processing techniques.


In situations where this symmetry constraint is satisfied, however, I(y, z)—the slice of I(x, y, z) in the yz-plane—and P(y, z) are mathematically related by the Abel integral,

P(y,z) = 2\int_{y}^{\infty} \frac{I(y,z)\,r}{\sqrt{r^{2}-y^{2}}}\,dr,
(2)

where r = \sqrt{y^{2}+z^{2}}. Rearranging for I(y, z), the inverse Abel transform can be derived, yielding the central slice of the 3D distribution,

I(y,z) = -\frac{1}{\pi}\int_{r}^{\infty} \frac{\partial P(y,z)}{\partial y}\,\frac{dy}{\sqrt{y^{2}-r^{2}}}.
(3)

Due to the cylindrical symmetry of the problem about the y axis, this central slice contains a complete description of the original 3D distribution. Attempts to directly invert real (i.e., non-ideal) experimental images may, however, break down as the integral in Eq. (3) is ill-conditioned; the derivative in the expression leads to low-quality inversions for noisy data, and the singularity at y = r directs any statistical imperfections toward the y axis. This then manifests as pronounced noise running along the center line of the inverted image. Alternatively, the reconstruction problem can be solved by recording multiple different 2D projections of the 3D distribution (e.g., by systematically varying the polarization of the ionizing/dissociating laser with respect to the imaging plane of the detector). This allows the entire 3D distribution to be recovered using common tomographic reconstruction techniques10,11 or by using other methods such as Fourier moment analysis.19 Although these strategies will work for any arbitrary distribution and have no symmetry requirements for I(x, y, z), multiple different high-quality projection images are required to retrieve an acceptable reconstruction, increasing the time and complexity of data acquisition. Other experimental variants aim to resolve narrow transverse sections of the full 3D distribution. This can be done by stretching the incoming photoproduct distribution along the time-of-flight axis (the x axis in Fig. 1) and quickly gating the detector to image only the central portion of the full distribution. These “slice imaging” techniques6–9 are popular in the ion imaging community. The low mass of hydrogen ions and, in particular, photoelectrons makes data acquisition with these species much more challenging, however.20,21 Using an alternative strategy, advanced time-resolved detectors14–18 are capable of recording both the (y, z) pixel coordinate and arrival time t of incoming photoproducts. 
This time information can easily be related to the x spatial coordinate, enabling direct recording of the full 3D distribution. Such complex detector technology, however, is often expensive and so a reliable and stable inverse Abel transform for 2D projected data remains essential for simple image analysis in many experiments.
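The forward relation in Eq. (2) is straightforward to evaluate numerically: substituting r = \sqrt{y^{2}+x^{2}} removes the square-root singularity and reduces the projection to a simple integral along the line of sight. The short sketch below illustrates this for a cylindrically symmetric Gaussian ring; the test profile, grid sizes, and step sizes are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

def f(r):
    # Cylindrically symmetric test profile: a thin Gaussian ring
    return np.exp(-((r - 30.0) / 5.0) ** 2)

y = np.arange(0.0, 60.0)                      # transverse pixel coordinate

# Eq. (2) after substituting r = sqrt(y^2 + x^2), which removes the
# 1/sqrt(r^2 - y^2) singularity: P(y) = 2 * int_0^inf f(sqrt(y^2 + x^2)) dx
dx = 0.025
x = np.arange(0.0, 100.0, dx) + dx / 2        # midpoint quadrature grid
P_quad = 2.0 * f(np.sqrt(y[:, None] ** 2 + x[None, :] ** 2)).sum(axis=1) * dx

# The same projection obtained by summing a pixellated 2D slice along the
# line of sight (pixel width 1), as effectively happens on the detector
xs = np.arange(-100.0, 100.0) + 0.5           # pixel centres
P_sum = f(np.hypot(xs[None, :], y[:, None])).sum(axis=1)
```

The two projections agree closely, confirming that the singular kernel in Eq. (2) is simply the Jacobian of the line-of-sight sum.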

Early Abel inversion methods relied on finding algorithms that avoid the derivative in Eq. (3). One popular choice here is the Fourier–Hankel technique.22 As a direct inversion method, this approach is fast and produces acceptable results for good quality input data. The technique can, however, magnify noise in experimental images. It also relies on two separate transform steps, which, depending on the particular discrete Fourier/Hankel transform used, can produce artifacts when handling images with sharp features. More recently, Livingstone et al. have presented a fast and direct matrix Abel transform.23 Due to its speed, this strategy is ideal for efficiently processing large volume datasets containing many images (as obtained in time-resolved measurements, for example). This method is still, however, susceptible to center line noise.

Moving away from direct inversion, much work has been done to reconstruct the central slice of the 3D distribution using recursive reconstruction techniques.24–28 A notable early algorithm proposed by Vrakking25 relies on supposing some initial ansatz solution for the central slice of the distribution I0(y, z) and the corresponding projection P0(y, z). Iterative adjustment of the trial solution then finds the distribution that, when projected, best matches the experimentally recorded image. Retaining the same iterative approach, the family of algorithms proposed by Dick in recent years26,27 uses a maximum entropy method to invert image data. This allows specifics about the nature of the statistical noise in the image (i.e., Poissonian statistics for poorly sampled data) to be handled directly within the inversion procedure. These iterative reconstruction methods can produce excellent results, even when the data are sparse.

Another common iterative inversion approach is known as “onion-peeling.”29–32 Starting from the outer edge of the image, the projected contribution of the outermost pixels to the remainder of the image can be calculated and removed, leaving only the yz-plane contribution. By repeating this procedure at decrementing radii, the projected distribution is “unpeeled,” yielding the central slice of the original 3D distribution. This can produce superior results to the Fourier–Hankel method but can be computationally expensive due to the need to simulate and project a 3D distribution at each image radius.
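The onion-peeling idea can be illustrated in one dimension. If the discrete projection is written as an upper-triangular matrix product, then peeling from the outermost radius inwards is exactly back substitution. The chord-length weights below are a standard discretization assumed for illustration, not the exact scheme of any particular published implementation.

```python
import numpy as np

# A known radial profile and its discrete projection P = 2 A f
n = 100
r = np.arange(n)
f_true = np.exp(-((r - 40.0) / 6.0) ** 2)

# A[i, j]: chord length through the annulus [j, j+1) along the line of
# sight at transverse pixel i (upper triangular, so outer shells never
# depend on inner ones)
A = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        A[i, j] = np.sqrt((j + 1) ** 2 - i ** 2) - np.sqrt(max(j * j - i * i, 0))
P = 2.0 * A @ f_true

# Onion peeling: start at the outermost radius, remove that shell's
# projected contribution from the remaining pixels, and step inwards
f = np.zeros(n)
for i in range(n - 1, -1, -1):
    f[i] = (P[i] / 2.0 - A[i, i + 1:] @ f[i + 1:]) / A[i, i]
```

For noise-free data this recovers the original profile exactly (to floating-point precision); with real data, errors accumulate toward small radii, which is the 1D analog of the center noise discussed above.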

Due to its speed and simplicity, the BASEX33 (BAsis Set EXpansion) method proposed by Dribinski et al. remains among the most popular methods used today. BASEX considers the projected distribution as a sum of basis functions, which have an analytically known Abel inverse. Calculating the coefficients for this expansion is simply a linear algebra problem. Once solved, these coefficients can be used to reconstruct the original distribution from the Abel inverted basis functions. Since this method only relies on matrix multiplication, BASEX is a very fast inversion technique. It does require that a basis set be computed in advance, but this only needs to be performed once, after which this information can be stored for use at any future point. BASEX still directs any noise in the image toward the center line along the symmetry axis, however, which can be detrimental when trying to observe smooth angular distributions.
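The essence of the basis-expansion strategy can be sketched as follows. Here, Gaussian radial basis functions are projected numerically rather than from BASEX's analytic forms (a simplification relative to BASEX proper), and the expansion coefficients are recovered by linear least squares; all parameter values are illustrative assumptions.

```python
import numpy as np

# Forward Abel projection on a grid: P(y) = 2 * int f(sqrt(y^2 + x^2)) dx
dx = 0.05
x = np.arange(0.0, 150.0, dx) + dx / 2
y = np.arange(100.0)

def project(f):
    return 2.0 * f(np.sqrt(y[:, None] ** 2 + x[None, :] ** 2)).sum(axis=1) * dx

# A "measured" projection of a profile the fit does not know about
true_profile = lambda r: np.exp(-((r - 45.0) / 5.0) ** 2)
P_meas = project(true_profile)

# Gaussian basis functions g_k(r) and their projected counterparts p_k(y)
centres = np.arange(0.0, 100.0, 2.0)
width = 2.0
basis = [lambda r, c=c: np.exp(-((r - c) / width) ** 2) for c in centres]
G = np.stack([project(g) for g in basis], axis=1)   # columns: p_k(y)

# Expansion coefficients from linear least squares, then reconstruction
coeffs, *_ = np.linalg.lstsq(G, P_meas, rcond=None)
r = np.arange(100.0)
recon = sum(c * g(r) for c, g in zip(coeffs, basis))
```

The coefficient solve is a one-off linear algebra step, which is why basis-set methods are so fast once the (pre-computed) basis is available.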

Many of the methods for image inversion considered so far typically operate in Cartesian coordinates. Although this directly reflects the readout of data from a CCD array, a more natural choice of reference frame is polar coordinates since VMI images usually consist of concentric annular structures. Furthermore, the transform to polar coordinates is necessary anyway to extract radial and angular distributions. A notable recent example of an onion-peeling algorithm using polar coordinates is the finite slice analysis (FinA) approach reported by Thompson et al.,34 which has the additional advantage of requiring only local cylindrical symmetry within individual quadrants of the image distribution. Developing ideas set down by the BASEX method, the pBASEX35 (polar BAsis Set EXpansion) approach reported by Garcia et al. uses 2D basis functions formulated in polar coordinates to reconstruct the center slice of the original 3D distribution from its projection. The use of a polar basis set has similarly been combined with the aforementioned onion-peeling method in the work of Roberts et al. to create their Polar Onion-Peeling (POP) algorithm.36 This approach maintains the same iterative subtraction strategy as the original variant but moves most of the computational expense to the basis writing step. The swap to a 2D basis with these approaches results in perfectly smooth angular distributions, with any noise in the raw data now being concentrated at a single spot in the image center. More recently, the DAVIS (Direct Algorithm for Velocity-map Imaging System) approach developed by Harrison et al. offers similar performance to pBASEX but without the use (or the associated computation time) of a large basis set.37 This brief overview is by no means exhaustive, but it serves to highlight that the problem of the Abel transform remains an active area of research within the field of chemical dynamics.38 

Artificial neural networks (ANNs) are an excellent alternative approach for tackling mathematical problems that lack a well-conditioned and numerically stable solution (such as Abel inversion). In fact, ANNs have already been successfully utilized for calculating the Abel inverse of one-dimensional plasma density distributions.39 In this report, we start by demonstrating how ANNs can also be used to determine the inverse Abel transform for typical VMI experimental images. Building on our recent work in this area exploring the use of ANNs for noise removal and enhancement of data-sparse charged particle images,40 we train a modified autoencoder network by presenting it with many pairs of simulated data, one image being the Abel transform of the other. The training process iterates through these images, adjusting and optimizing the tunable parameters of the ANN to find the values that produce output data in best agreement with the expected “truth” output. With training complete, the ANN can be presented with a new unseen VMI image and create a prediction of its Abel inverse. We show how such a trained ANN can accurately invert both simulated and real experimental VMI data, producing proof-of-concept results in excellent agreement with the BASEX and pBASEX methods in benchmark tests. Moving beyond this, however, it is anticipated that ANNs will also be very well-suited for the more general direct reconstruction of 3D distributions where cylindrical symmetry is absent. Such scenarios are commonplace in the broad field of photochemical dynamics with notable examples being photofragment vector correlation measurements,41 covariance-map imaging analysis,42 and in experiments using shaped ionizing laser fields.43 For these cases, there is not necessarily a simple mathematical transform relating a single 2D projection to its original 3D distribution. Using an ANN may therefore be the only option available.

To this end, we have developed an approach that we refer to as Arbitrary Image Reinflation (AIR). This uses an ANN designed to predict the full 3D photoproduct distribution using just a single 2D projection image as an input—effectively “reinflating” the projection image to its original 3D state. Analogous to our initial Abel inversion demonstration, we train a modified autoencoder network with pairs of simulated training distributions; one being the full 3D distribution and the other being the 2D projection along one axis. Taking this projection image as an input, the goal of training is to minimize the difference between the “true” 3D distribution and the predicted output of the AIR network. Using simulated examples, we show how AIR can be used as a replacement for the inverse Abel transform under cylindrically symmetric conditions, and more importantly, how AIR can also reliably reconstruct 3D distributions that have an explicit ϕ-dependence. To the authors’ knowledge, AIR is currently unique in this regard. Machine learning has cemented itself as an invaluable part of the chemical physicists’ toolbox,40,44–47 and we anticipate that AIR will be an important addition, enhancing and simplifying data analysis for a wide range of charged particle imaging measurements.

We make use of an image-to-image ANN based on the U-Net autoencoder architecture.48 Originally, this was designed for image segmentation applications in microscopy but has since become more widely exploited due to its high performance with comparatively small training sets. In essence, it is a modified autoencoder that has multiple skip connections to pass data between the encoder and decoder paths at different levels within the network structure. A short description is included here, and more expanded details may be found elsewhere.40,48 The network is composed of a contraction path (the encoder, which takes a simulated projection image as its input) and an expansion path (the decoder, which takes the output of the encoder and reconstructs it into the Abel inverse of the input). The contraction path consists of layers of convolutional operations with a given number of filters, each followed by a rectified linear (ReLU) activation and a max-pooling operation. Similarly, the expansion path consists of transposed convolutions followed by a ReLU activation and a concatenation with the output of the corresponding layer in the contraction path.
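A minimal Keras sketch of such a network is given below, assuming a half-image input of 200 × 100 pixels (as used later in the text), two resolution levels, and an illustrative filter count; the depth and hyperparameters of the actual network may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by a ReLU activation
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(shape=(200, 100, 1), base=16, depth=2):
    inp = tf.keras.Input(shape=shape)
    x, skips = inp, []
    for d in range(depth):                        # contraction path
        x = conv_block(x, base * 2 ** d)
        skips.append(x)                           # saved for skip connections
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base * 2 ** depth)          # bottleneck
    for d in reversed(range(depth)):              # expansion path
        x = layers.Conv2DTranspose(base * 2 ** d, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[d]])   # skip connection
        x = conv_block(x, base * 2 ** d)
    out = layers.Conv2D(1, 1, activation="relu")(x)
    return tf.keras.Model(inp, out)

def rmse(y_true, y_pred):
    # Root mean-squared error loss, as used in the text
    return tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))

model = build_unet()
model.compile(optimizer="adam", loss=rmse)
```

The model maps a 200 × 100 projection to a 200 × 100 output of the same shape; training would then proceed on the simulated projection/slice pairs described below.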

Since the range of structural variation in images recorded during photodissociation or photoionization events is relatively small and easily quantified, training data suitable for supervised machine learning can be simulated in a straightforward manner. This makes VMI problems extremely well-suited for processing with machine learning techniques as, more generally, this ability to readily simulate ANN training data is not always possible (e.g., when dealing with photographic images). Training datasets for our initial 2D projection to 2D slice demonstration were constructed by generating ten thousand 200 × 200 pixel images. Ultimately, the maximum possible resolution of these data is dictated by the computer memory available when the ANN is trained. The total size of the training data and the ANN must not exceed the total memory available. For a fixed memory size, it is possible to work with higher resolution images by generating a smaller training set, but this comes at the expense of less overall structural variation. Given limitations imposed by the memory currently available to us, the 200 × 200 pixel size was chosen here as a trade-off between having a sufficiently large and varied training dataset and an output image resolution that is suitable for a majority of common applications. Further discussion of managing this trade-off can be found in Secs. IV and V. Each image is composed of a random number (up to 10) of rings, each with a Gaussian width profile,

I(r,\theta) = \sum_{i=1}^{n} A_{i}\exp\left[-\left(\frac{r-r_{i}}{\sigma_{i}}\right)^{2}\right]\Theta_{i}(\theta).
(4)

The radial distance r is measured from the image center with Ai, ri, σi, and Θi defining the amplitude, central radius, width, and overall angular form of the ith ring, respectively (with values for each parameter randomly sampled from a uniform distribution). Here, Ai scales between 0 and 1, the ri limits span 0–90 pixels, and σi varies between 1 and 100 pixels. If the randomly generated values produce a distribution that spills significantly outside the 200 × 200 image region (using the criterion ri + 2σi > 100), the values are discarded and resampled. The angular anisotropy is dependent on the photon order of the photochemical process from which the image data are derived. For an N-photon unimolecular ionization/photodissociation using linearly, circularly, or unpolarized light, the resulting angular distribution will potentially contain contributions from all Legendre polynomials, Pl(cos θ), up to l = 2N,49

\Theta(\theta) = 1 + \sum_{l=1}^{2N} b_{l}\,P_{l}(\cos\theta).
(5)

Here, bl (often written as βl when solely considering linear polarizations) are coefficients denoting the extent of angular anisotropy due to the lth Legendre polynomial. The range of random values for bl is dictated by the specific photophysics of the ionization/dissociation process. The b2 angular anisotropy parameter has natural limits of −1 and 2, corresponding to a pure sine- or cosine-squared distribution, respectively. For the higher order terms, putting precise analytical values on the limits is often more challenging. Instead, random numbers are sampled uniformly between −1 and 1, and a trial angular distribution is generated using Eq. (5). If this distribution is strictly positive (and, therefore, physical), the angular distribution is retained; otherwise, the values are resampled. The angles θ = 0° and θ = 180° lie along the laser polarization direction for linearly polarized ionizing laser pulses or along the propagation direction for the case of circularly or unpolarized light (the y axis in Fig. 1). Due to the symmetry and periodic nature of the Legendre polynomials in Eq. (5), we need only consider one-half of each training image. This removes redundancy within the network, helping to reduce memory requirements and the total training time. For cases involving solely linear polarizations and/or achiral molecules, these redundancies may be reduced even further by considering only a single quadrant of each image. This is possible as there is no contribution from the odd degree Legendre polynomials in such instances and, hence, fourfold symmetry in the I(y, z) and P(y, z) distributions. To retain the option of fitting a broader range of experimental scenarios, however, we always work with half of our initial image in the training set (i.e., 100 × 200 pixels). We then use a rapid matrix Abel transform to efficiently calculate the projection P(y, z) of our randomly generated distribution.
This method reframes the Abel transform of a discrete image as a simple matrix product,

\mathbf{P} = 2\,\mathbf{A}\,\mathbf{I}.
(6)

Here, A is a square matrix whose elements are the discrete Jacobian of the image coordinate system. Calculation of A is very simple (full numerical details can be found elsewhere23) and needs only to be done once; it can then be stored for use when generating the training set. We then optimize our network structure using these data. 9000 of the generated image pairs are used directly for training, while the remaining 1000 are reserved as validation images so that we can assess the network performance using previously unseen information. The network is constructed using the Keras framework in TensorFlow and trained using the Adam optimizer50 with a root mean-squared error (RMSE) loss. Training was performed over 2000 epochs on an NVIDIA RTX 6000 and is typically completed in around 10 h; the fully trained network can then be saved locally to process images as and when required. Once trained, the ANN can invert a 200 × 200 pixel image in a few hundred milliseconds, making it comparable in speed to the likes of BASEX and pBASEX. For each of the demonstration cases considered in subsequent sections, we retrain a different ANN each time to account for the different photon order, tailoring the problem to a specific functional form of Θ(θ) given by Eq. (5). This is an important point to stress, since any variable that can be reliably fixed/limited within the training set data will enhance overall network efficiency and performance. As the laser polarization geometry and overall photon order are known parameters in most experimental measurements, this is a reasonable level of constraint here.
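The training-set recipe described above can be sketched end-to-end as follows. The parameter ranges and the spill criterion are those quoted in the text; the chord-length discretization of the matrix A is an assumed, standard scheme rather than the exact numerical details of Ref. 23.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
NPIX = 200                                    # full image is NPIX x NPIX

def sample_angular_coeffs(n_photon):
    # Rejection sampling for Eq. (5): draw b_l uniformly on [-1, 1] and
    # keep only angular distributions that are non-negative on a test grid
    cos_th = np.cos(np.linspace(0.0, np.pi, 181))
    while True:
        b = np.concatenate(([1.0], rng.uniform(-1.0, 1.0, 2 * n_photon)))
        if legendre.legval(cos_th, b).min() >= 0.0:
            return b

def make_slice(n_photon=1):
    # Random ring images following Eq. (4), with the spill criterion
    # r_i + 2*sigma_i <= 100 from the text
    yy, zz = np.meshgrid(np.arange(NPIX) - NPIX // 2,
                         np.arange(NPIX) - NPIX // 2, indexing="ij")
    r = np.hypot(yy, zz)
    theta = np.arctan2(np.abs(zz), yy)        # polar angle from the y axis
    img = np.zeros((NPIX, NPIX))
    for _ in range(rng.integers(1, 11)):      # up to 10 rings
        while True:
            amp, r0, sig = rng.uniform(0, 1), rng.uniform(0, 90), rng.uniform(1, 100)
            if r0 + 2.0 * sig <= 100.0:
                break
        b = sample_angular_coeffs(n_photon)
        img += amp * np.exp(-((r - r0) / sig) ** 2) * legendre.legval(np.cos(theta), b)
    return img

def abel_matrix(n):
    # Upper-triangular chord-length weights (assumed discretization)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = np.sqrt((j + 1) ** 2 - i ** 2) - np.sqrt(max(j * j - i * i, 0))
    return A

# Project each half-row (z >= 0) of the slice via Eq. (6), P = 2 A I
slice_img = make_slice(n_photon=1)
half = slice_img[:, NPIX // 2:]
proj_half = 2.0 * half @ abel_matrix(NPIX // 2).T
```

Repeating this ten thousand times yields the (projection, slice) training pairs; the matrix A is computed once and reused for every pair.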

Initial testing of ANN performance on simulated VMI data allows us to retain full control of the noise levels in our examples, providing insight into how image imperfections are handled during the inversion process. Our first simulated test case, shown in Fig. 2(a), is based on the “cFew” image used previously by Whitaker and co-workers in a detailed comparison of several different common inversion methods.22 The image is a simple example of a single photon ionization from a randomly orientated target with linearly polarized light, with a corresponding angular anisotropy of the form

formula
(7)

As outlined in Sec. III A, we train our Abel inversion ANN with images that have an angular anisotropy of this form using Eqs. (4) and (5). From the projected test distribution, 10⁵ particle trajectories were randomly sampled to produce a simulated VMI image, which is shown in Fig. 2(b). After a fourfold symmetrizing operation, our test image contains on average ten particle counts per pixel. We find this to be a useful heuristic definition of “good” data quality, and the noise levels in all simulated examples considered subsequently will be treated in a similar manner. We make use of both BASEX and pBASEX for comparisons with our ANN. These specific methods were chosen since they represent both Cartesian and polar approaches to image inversion and, based on total citations, are the most popular methods currently in use within the photoion/electron imaging community. Throughout this report, the ANN is trained using perfectly smooth, noise-free simulated data. Thus, when an image containing realistic levels of stochastic variation is processed, we expect this to be reflected in the corresponding inversion. For the BASEX and pBASEX methods, this results in a center line or central spot of fluctuating noise in the images, respectively. For the image inverted with the ANN [Fig. 2(c)], however, there is no localized amplification of noise. Instead, the small level of statistical imperfection is essentially evenly distributed throughout the image. In our previous work in this area, we discussed the importance of incorporating realistic experimental noise into the ANN training data.
This was for the specific purpose of denoising extremely data sparse images, where it was found that accurately quantifying noise in the input data was crucial to achieve reliable results.40 In the work presented here, however, we find that for reasonable quality data (10 counts per pixel on average, as defined earlier), this is not necessary, and robust high-quality inversions can be recovered without the need to statistically quantify image noise levels. This significantly reduces the complexity involved in the generation of training sets and extends the range of data to which a specific network can be applied. During testing, we found that reconstructions of acceptable quality (and similar to that obtained with BASEX) can even be retrieved for signal levels as low as 1 count per pixel on average without accounting for any noise directly in the training data. For data with average counts significantly below this limit, however, a more detailed treatment is required (see, for example, the maximum entropy reconstruction methods developed by Dick26,27)—although dealing with image processing in this type of extremely data sparse regime is beyond the scope of this work.
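Shot-noise-limited test images of the kind described above can be generated by treating the noise-free projection as a probability distribution and drawing discrete particle hits from it. A minimal sketch follows; the ring parameters are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# A noise-free projection (here simply a Gaussian ring for illustration)
y, z = np.meshgrid(np.arange(-100, 100), np.arange(-100, 100), indexing="ij")
P = np.exp(-((np.hypot(y, z) - 60.0) / 8.0) ** 2)

# Draw 1e5 discrete particle "hits" from the projection, mimicking the
# Poissonian counting statistics of a finite-count VMI measurement
prob = (P / P.sum()).ravel()
hits = rng.choice(P.size, size=100_000, p=prob)
image = np.bincount(hits, minlength=P.size).reshape(P.shape).astype(float)
```

Scaling the number of sampled hits up or down tunes the average counts per pixel, which is how the "good data" threshold of roughly ten counts per pixel can be explored systematically.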

FIG. 2.

Top row shows (a) a simulated slice of a photoproduct distribution designed to resemble that used in other Abel inversion comparison studies22 and (b) the 2D projection of the corresponding 3D distribution with realistic noise added. Image (b) is used as the input for a trained ANN, BASEX, and pBASEX, in turn. On the bottom row, (c) shows the output from the ANN; the network prediction for the Abel inverse of (b). For comparison, the outputs of BASEX and pBASEX are presented in (d) and (e), respectively. See the main text for additional discussion.


For a thorough quantitative assessment of ANN performance, it is important to compare the angle-integrated radial distributions produced from the various inversion methods against the simulated distribution. These plots are shown in the top panel of Fig. 3. Furthermore, it is essential that the ANN can also correctly recover the angular anisotropy information for each peak in the radial distribution. Correctly extracting these parameters is a far more challenging task for any reconstruction routine and therefore provides a much more robust test of the ANN strategy. For the ANN and BASEX images, this was done by binning each image into polar coordinates and fitting the resulting angular distributions with Legendre polynomials [see Eq. (7)]. The angular anisotropy parameters are calculated and returned automatically when using the pBASEX method. These values are shown in the lower panel of Fig. 3. Starting with the radial distributions, the ANN method is clearly in excellent agreement with both the original simulation and those extracted using BASEX and pBASEX. Similarly, the angular parameters retrieved from all methods match impressively with the original simulation. We also see the same high levels of performance across all our sample validation data. This starting example, therefore, provides a robust initial demonstration of how an ANN can be used for image inversion, establishing confidence in the basic machine learning strategy. In Sec. III C, the ANN will be tested with some real experimental data examples with performance once again benchmarked against both BASEX and pBASEX.
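The Legendre fitting step used for the ANN and BASEX images can be sketched for the angular distribution at a single radial peak; the anisotropy value and noise level below are arbitrary test choices, not values from the figures.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
beta2_true = 1.3                              # assumed test anisotropy

# Angular distribution at one radial peak after polar binning, with a
# little noise standing in for residual statistical imperfection
theta = np.linspace(0.0, np.pi, 181)
x = np.cos(theta)
signal = 1.0 + beta2_true * legendre.legval(x, [0.0, 0.0, 1.0])  # 1 + b2*P2
signal += rng.normal(scale=0.01, size=signal.size)

# Fit a Legendre series in cos(theta); normalising by the isotropic term
# gives the anisotropy parameter of Eq. (7)
c = legendre.legfit(x, signal, deg=2)
beta2_fit = c[2] / c[0]
```

Repeating the fit over a narrow band of radii around each peak and averaging the results gives the points plotted in the lower panel of Fig. 3.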

FIG. 3.

(Top panel) Angle-integrated radial (velocity) distributions obtained from the image data presented in Fig. 2. Results obtained using ANN, BASEX, and pBASEX are all in excellent agreement with the initial simulation. (Bottom panel) Angular anisotropy parameter, β2, extracted using the same three inversion methods. For each of the five peaks in the image data, we average the fitted values over a narrow range of radius values, as indicated by the black solid line, which also denotes the initial simulated anisotropy parameters. The ANN, BASEX, and pBASEX output points are displaced slightly along the x axis for clarity, but all correspond to the same range of pixel averaging.


For a representative demonstration on typical experimental data, we have chosen a photoelectron image recorded during a previous study from our group on the piperidine molecule.51 The photoelectron image [shown in Fig. 4(a)] results from 1 + 1′ ionization using a 200 nm pump and a 267 nm probe. To accurately process these real data, it is important to adjust the bounds and constraints on the randomly generated training images and optimize a new network. First, when compared to the example considered in Sec. III B, the additional photon now involved in the ionization process dictates that another angular anisotropy term needs to be introduced [see Eq. (5)], yielding an overall angular distribution of the following form:

\Theta(\theta) = 1 + \beta_{2}P_{2}(\cos\theta) + \beta_{4}P_{4}(\cos\theta).
(8)

In addition, we must include other features common to real experimental data that are not always well described by the small number of randomly generated Gaussian rings used earlier in our simulated example. Real photoelectron VMI data can, for example, often exhibit a broad underlying background as well as a small spot at the image center that is significantly brighter than its surroundings. Ensuring that these details are adequately captured in the training data is important for effective network performance. Specifically, we now constrain two of the randomly generated rings in Eq. (4) to ensure that a narrow (σ ≤ 4 pixels) Gaussian bright spot is placed in the center of each training image as well as a second, much broader (σ ≥ 30 pixels) Gaussian feature. This highlights the fact that the ANN approach is not fully “black box,” and some degree of photochemical insight is required during the training set design step. The image shown in Fig. 4(b) is from this newly trained ANN with the BASEX and pBASEX reconstructions shown in Figs. 4(c) and 4(d), respectively.
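The two constrained features can be bolted onto the ring generator along the following lines; the width bounds are those quoted above, while the amplitude ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

yy, zz = np.meshgrid(np.arange(-100, 100), np.arange(-100, 100), indexing="ij")
r = np.hypot(yy, zz)

def gaussian_ring(r, r0, sigma):
    return np.exp(-((r - r0) / sigma) ** 2)

# Two constrained features: a narrow (sigma <= 4 px) bright centre spot
# and a broad (sigma >= 30 px) underlying background; the amplitude
# ranges here are assumed for illustration
centre_spot = rng.uniform(0.5, 1.0) * gaussian_ring(r, 0.0, rng.uniform(1.0, 4.0))
background = rng.uniform(0.1, 0.5) * gaussian_ring(r, 0.0, rng.uniform(30.0, 60.0))

# ...the remaining rings are then added with unconstrained random
# parameters, exactly as in the earlier training-set recipe
image = centre_spot + background
```

Because both constrained features are centered at r = 0, every training image now carries the bright central spot and diffuse background seen in real photoelectron data.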

FIG. 4.

Panel (a) shows a raw experimental photoelectron image from piperidine using a (1 + 1′) fs-REMPI scheme with a 200 nm pump and a 267 nm probe.51 Panel (b) presents the corresponding output of an Abel inversion ANN with the output of BASEX and pBASEX also included in panels (c) and (d), respectively.


As with the simulated example presented earlier, more detailed comparisons between different inversion methods can be made by examining the angle-integrated radial distribution and the photoelectron angular distributions. These are detailed in Fig. 5. The β2 and β4 anisotropy parameters [see Eq. (8)] are evaluated for three specific regions of the image, as was done in the original publication reporting these data.51 The ANN, BASEX, and pBASEX methods again produce results that are extremely consistent, with only a small range of uncertainty. This second demonstration of the machine learning approach, using real rather than simulated data, further establishes its reliability on the path to the more general goal of direct 2D-to-3D reconstruction and image “reinflation,” which will be the focus for the remainder of this communication.

FIG. 5.

(Top panel) Angle-integrated radial (velocity) distributions calculated from the ANN-inverted image (blue), the BASEX image (red), and the pBASEX image (green). (Bottom panels) Calculated βl parameters averaged over three different pixel regions of the image. Error bars denote 1σ uncertainties. A more detailed discussion of these data can be found elsewhere.51 Note that in our previous work, a different Abel inversion method was used (rapid matrix inversion23), and again, excellent agreement was obtained. The energy bins used in the original publication of these data map directly to the pixel regions used in this figure.


Building on concepts introduced in Sec. III, the goal of the Arbitrary Image Reinflation (AIR) network is to directly predict the 3D distribution from a single 2D projection, bypassing the need for an inversion step. In practical terms, this means that the input to AIR is a square 2D array (set to 80 × 80 pixels here) and the output is now a 3D cube array (80 × 80 × 80 voxels). With this change in network architecture, the training data used in the network must also be correspondingly adjusted. The training set is built in the positive octant of a 3D Cartesian grid (representing one quadrant of the corresponding projection image) with the spherical coordinate variables defined as follows (see also Fig. 1):

r = √(x² + y² + z²),
(9a)
θ = cos⁻¹(y/r),
(9b)
ϕ = tan⁻¹(x/z).
(9c)

The choice to work in a single octant limits the computational expense involved in training the AIR network. For applications involving more complex polarization geometries, however, additional regions of the distribution may need to be considered. For example, to model forward–backward asymmetry effects due to photoelectron circular dichroism (PECD)52–57 and related phenomena,43,58,59 at least two octants of volume are required to capture the required 0°–180° range of θ. We shall consider such cases in future publications. In the most general form, 3D distributions are simulated using the following:

I(x, y, z) = Σi Ai exp[−((r − ri)/σi)²] × Φi(θ, ϕ), i = 1, …, n,
(10)

with n ≤ 3 Gaussian features included in each training distribution in the first instance. This is simply an extension of Eq. (4) into three dimensions and includes a term to model the θ- and ϕ-dependence of each Gaussian ring in the distribution, Φi(θ, ϕ). The restriction n ≤ 3 (as opposed to n ≤ 10 used in earlier examples) is introduced here as the memory requirements for this new, higher-dimension 3D training dataset are larger than for the 2D examples considered previously. We, therefore, reduce this aspect of the structural complexity in our training data so that we may maintain a more extensive variation in other relevant parameters within the AIR architecture. The projection P(y, z) of this distribution is then simply the sum of I(x, y, z) over the x coordinate. For all situations considered up to this point, we have omitted any explicit ϕ-dependence in the definition of the training set data. Setting Φi(θ, ϕ) = Θ(θ) in Eq. (10) is equivalent to enforcing cylindrical symmetry. This can be done when “reinflating” images acquired with purely linear or circular polarizations, which have a symmetry axis lying in the imaging plane (see Fig. 1), greatly simplifying the generation of training data. It also makes the 2D image projection data Abel invertible, meaning that the full 3D treatment is not strictly necessary in such cases. However, we include a demonstration of AIR reconstructing cylindrically symmetric image data so that we can again benchmark against popular Abel inversion strategies (here, we will consider pBASEX) as well as against a tomographic reconstruction approach. This is an important step as it serves to develop confidence in our general use of the AIR approach.
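A minimal sketch of Eq. (10) and its projection might look as follows. This is hypothetical code: the angular term is truncated to a simple 1 + β2P2(cos θ) form purely for illustration, and all parameter ranges are our own assumptions rather than those used in the actual training sets.

```python
import numpy as np

def simulate_I(size=80, n=3, seed=1):
    """Sketch of Eq. (10): a sum of n Gaussian shells, each multiplied by an
    angular term Phi_i (here a cylindrically symmetric 1 + beta2*P2 form)."""
    rng = np.random.default_rng(seed)
    x, y, z = np.meshgrid(*(np.arange(size),) * 3, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    cos_th = np.divide(y, r, out=np.ones_like(r), where=r > 0)
    I = np.zeros((size, size, size))
    for _ in range(n):
        A, r0, sig = rng.uniform(0.5, 1.0), rng.uniform(20, 70), rng.uniform(3, 8)
        beta2 = rng.uniform(-1, 2)          # keeps 1 + beta2*P2 non-negative
        P2 = 0.5 * (3 * cos_th**2 - 1)
        I += A * np.exp(-((r - r0) / sig) ** 2) * (1 + beta2 * P2)
    return I

def project(I):
    """P(y, z): the sum of I(x, y, z) over the x coordinate, as in the text."""
    return I.sum(axis=0)
```

Summing along x collapses the cube onto the detector plane, which is exactly the projection the network learns to invert.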

For cylindrically symmetric distributions, the angular contribution in 3D is the same as in the 2D case and is given as a sum of Legendre polynomials

Θ(θ) = Σl βlPl(cos θ).
(11)

For a more general case of Φi(θ, ϕ), however, the distribution becomes more complicated. Nevertheless, by remaining within the boundaries of typical VMI experiments, we can make some assumptions to constrain this problem when designing training sets. Symmetry dictates a simple generic expression for an arbitrary photoelectron angular distribution as an expansion in spherical harmonic functions of degree l and order m, Ylm(θ, ϕ),49

Φ(θ, ϕ) = Σl Σm BlmYlm(θ, ϕ),
(12)

where the (real) spherical harmonics are given by

Ylm(θ, ϕ) = Plm(cos θ)cos(mϕ).
(13)

Here, Plm(cos θ) is an associated Legendre polynomial. The bounds on l and m are dictated by the nature of the photochemical process under study, the simplest example being the case of a single photon ionization using linearly polarized light, where lmax = 2 and m = 0, reducing to the familiar form of Eq. (7) (where the Blm coefficients are relabeled as β by convention). Therefore, with some additional photochemical insight, one can refine the limits of l and m and randomize the corresponding Blm coefficients to construct an appropriate training set suitable to “reinflate,” in principle, any arbitrary charged particle image resulting from a photoionization/photodissociation event.
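The expansion in Eqs. (12) and (13) can be sketched as follows. This is hypothetical, dependency-free code: normalization constants are omitted, and the associated Legendre polynomials are hand-coded for low l (scipy.special.lpmv could be substituted for higher orders).

```python
import numpy as np

# Unnormalized associated Legendre polynomials P_l^m for a few low (l, m).
_PLM = {
    (0, 0): lambda x: np.ones_like(x),
    (1, 0): lambda x: x,
    (2, 0): lambda x: 0.5 * (3 * x**2 - 1),
    (2, 2): lambda x: 3 * (1 - x**2),
}

def angular_term(theta, phi, Blm):
    """Eqs. (12)/(13) sketch: real spherical-harmonic expansion with
    user-supplied (or randomized) B_lm coefficients."""
    out = np.zeros_like(theta)
    for (l, m), B in Blm.items():
        out += B * _PLM[(l, m)](np.cos(theta)) * np.cos(m * phi)
    return out
```

Note that at ϕ = 45° every m = 2 term carries a factor cos(90°) = 0, the property exploited later when fitting the m = 0 coefficients on that slice.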

The remainder of the AIR network architecture is not dissimilar to the convolutional network based on U-Net described earlier in Sec. III. The encoder is composed of two layers, each consisting of two convolutional operations followed by a rectified linear unit (ReLU) activation and a max-pooling operation. The number of convolutional filters is 64 and 128 for the first and second encoder layers, respectively. The window size of the max-pooling operation is 2 × 2. The decoder consists of one deconvolutional layer with a stride of size 2 × 2 and two convolutional layers, all followed by a ReLU activation. The number of convolutional filters is 128 and 64 for the first and second decoder layers, respectively. The output of the decoder is then passed through a last convolutional layer with 80 filters, followed by a final ReLU activation. The training was done on 10 000 simulated 3D distributions (9000 for training and 1000 for validation) for 3000 epochs, again using the RMSE loss and the Adam optimizer. This took 15 h on an NVIDIA RTX 6000. After training, the network can process a 160 × 160 pixel input image in around 35 ms.

For the initial demonstrations of AIR presented here, we examine three separate cases of resonant enhanced multiphoton ionization (REMPI) from a randomly orientated sample of molecules. First, we consider a simulated example of Abel invertible (i.e., cylindrically symmetric) data recorded in a pump–probe experiment (i.e., a two-photon REMPI process) and compare the results of our AIR network to those obtained using just the pBASEX method. Second, we demonstrate AIR reconstructing real Abel invertible experimental data, namely, photoelectron images of α-pinene recorded via a three photon REMPI scheme. In addition to benchmarking AIR against pBASEX, we now also use a series of additional images from α-pinene recorded over a range of linear polarization geometries to produce a tomographic reconstruction of the original 3D distribution. Finally, we use AIR to reconstruct a simulated projection image that originates from a distribution with no cylindrical symmetry. For this example, we will also use a tomographic reconstruction to draw comparisons with the AIR method. The use of simulated data for this initial demonstration is advantageous as in experimental examples where cylindrical symmetry is broken, the deviations from perfect cylindrical symmetry can be rather small.60 Therefore, here, we are able to create more complicated distributions where these symmetry breaking contributions are much larger and will provide a greater challenge to the AIR network.

Many experiments within the photochemical dynamics research community rely on using REMPI schemes involving two separate laser pulses. One such common scenario involves an initial laser pulse (the pump), which photoexcites the molecule under study. At some time later, the excited molecule (or associated photofragments) can be interrogated via ionization using a second probe pulse. If the polarization vectors of the pump and probe are parallel and lie in the plane of the imaging detector (the yz-plane in Fig. 1), cylindrical symmetry is retained in the resultant photoproduct distribution, and simple Abel inversion techniques may be used to process image data. For any other angle between the polarization vectors, however, cylindrical symmetry is broken. Such a situation is desirable in, for example, experiments measuring vector correlations in molecular photofragmentation, where the use of orthogonal pump and probe polarizations is often highly instructive.41 

For this first example, due to the two-photon nature of the simulation, the maximum Legendre polynomial degree may be capped at lmax = 4. Therefore, the resulting angular distributions can be described simply using an expansion of Legendre polynomials in cos(θ), as was shown earlier in Eq. (8), without the need to consider any ϕ-dependence. As such, we optimize this iteration of AIR with training data created using Eqs. (10) and (11). The Blm coefficients are again relabeled β to follow standard convention.

To investigate the effects of realistic experimental noise, we consider a simple test image constructed by again sampling 10⁵ particle trajectories from one of the validation examples. This 2D image, along with its original 3D source distribution, is shown in Fig. 6. For reconstructing images of this type, there are many options available due to the inherent cylindrical symmetry. In addition to processing the image with AIR, we also use pBASEX—modified slightly to produce a 3D output, rather than the usual 2D image. This was done by calculating the basis expansion coefficients for the projected image using the conventional pBASEX routine and then using these coefficients to expand a set of 3D basis functions. This procedure does not require any further assumptions beyond cylindrical symmetry. These reconstructions are also shown in Fig. 6.

FIG. 6.

Example of the AIR network processing data. From an initial simulated 3D distribution (a), 10⁵ particle trajectories are sampled and projected onto a detector plane, emulating a typical VMI experiment and producing the 2D projection image (b). Image (b) is then processed using a trained AIR network to give (c), a prediction of the original 3D distribution from the single 2D projection image. The distribution obtained using the pBASEX method is shown in panel (d).


Qualitatively, the network prediction matches extremely well with the original source distribution. We also note that again there is very little reconstruction noise present in the AIR output distribution, especially when compared with the pBASEX case. This is despite AIR only ever being trained with noise-free data, illustrating again that each of our ANN reconstruction methods appears robust when presented with moderate levels of experimental noise. The performance of AIR can be further highlighted by examining the angular anisotropy parameters (β2 and β4 in this specific case). Since the simulated “truth” distribution has full cylindrical symmetry, the prediction of AIR should also be fully cylindrically symmetric. Thus, the values of β2 and β4 should be the same when calculated in any plane containing the y axis, and each should agree with both the original simulation and with the pBASEX reconstruction. Figure 7 shows plots of the angular anisotropy parameters calculated for each of the rings in the example AIR distribution at several different ϕ values. In all cases, the relevant anisotropy parameter values calculated from the AIR prediction are in excellent agreement with those in the original simulation. Any small deviations (i.e., variation of β2 and β4 as a function of ϕ) are a consequence of the realistic experimental noise built into our test image. This is confirmed upon examining completely noise-free validation data. Since the pBASEX method inherently assumes cylindrical symmetry, only one value of β2 and β4 can be evaluated at each radius. These were β2 = −0.50 ± 0.01 and β4 = 0.22 ± 0.02 for the outer ring and β2 = 0.21 ± 0.02 and β4 = −0.01 ± 0.02 for the inner ring.
From the agreement with pBASEX and the invariance of β2 and β4 through each slice of the predicted distribution, we are confident that AIR has accurately made a direct transform from the projection image P(y, z) back to the original 3D distribution I(x, y, z) with the correct cylindrical symmetry.

FIG. 7.

Calculated β-parameters for the simulated and AIR predicted distributions shown in Fig. 6. The top two panels show the β-parameters for the outermost ring, while the bottom two panels show the β-parameters for the inner ring. The calculated angular parameters clearly do not significantly change as we fit to different slices of the distribution. Error bars denote 1σ uncertainties.


In this section, we examine the use of AIR on real experimental data by considering photoelectron images recorded via (2 + 1) REMPI of α-pinene using linearly polarized 400 nm radiation. This forms a part of an on-going study of photoelectron elliptical dichroism (PEELD)58,59 in chiral molecules. A full discussion of these measurements (which include non-Abel invertible photoelectron images recorded with various elliptical polarization geometries for different chiral species) is beyond the scope of this work but will be the subject of a future publication. For the case of three-photon ionization with linear polarization parallel to the yz-imaging plane, Eq. (11) now takes the following explicit form:

Θ(θ) = 1 + β2P2(cos θ) + β4P4(cos θ) + β6P6(cos θ).
(14)

Here, the measured projection image P(y, z) will be Abel invertible, and so again pBASEX may be used to reconstruct the full 3D distribution for comparative purposes. This projection image along with the pBASEX result is shown in Fig. 8. A series of projection images Pω(y, z) where the laser polarization is set at a variable angle ω to the detection plane (i.e., the polarization is rotated in the xy-plane of Fig. 1) were also recorded. A total of 46 different 2D image projections were obtained by stepping ω in 4° increments. This number of projections is representative of previously reported tomographic imaging experiments12 and provides a balance between reconstruction accuracy and data acquisition time. From these projections, a tomographic reconstruction algorithm can be used to calculate the original 3D distribution from which the projections originate. The reconstructed 3D distribution shown in Fig. 8 was generated using a filtered back projection (FBP) algorithm developed in-house, similar to the method detailed by Smeenk et al.11 
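For readers unfamiliar with FBP, a minimal parallel-beam version can be written in a few lines. The sketch below is illustrative only and is not the in-house algorithm used here: it applies an ideal ramp filter in frequency space and then smears each filtered projection back across the reconstruction grid.

```python
import numpy as np

def fbp(sinogram, angles, size):
    """Minimal parallel-beam filtered back projection (illustrative sketch).

    sinogram : (n_angles, n_t) array of 1D projections
    angles   : projection angles in radians
    size     : edge length of the square reconstruction grid
    """
    n_t = sinogram.shape[1]
    ramp = np.abs(np.fft.rfftfreq(n_t))                  # ideal ramp filter
    filtered = np.fft.irfft(np.fft.rfft(sinogram, axis=1) * ramp, n=n_t, axis=1)
    c = (size - 1) / 2.0                                 # grid center
    x, y = np.meshgrid(np.arange(size) - c, np.arange(size) - c, indexing="ij")
    t0 = (n_t - 1) / 2.0                                 # detector center
    recon = np.zeros((size, size))
    for p, w in zip(filtered, angles):
        t = x * np.cos(w) + y * np.sin(w) + t0           # detector coordinate of each pixel
        recon += np.interp(t.ravel(), np.arange(n_t), p,
                           left=0.0, right=0.0).reshape(size, size)
    return recon * np.pi / len(angles)                   # approximate the angular integral
```

For a centered isotropic Gaussian, whose parallel-beam projections are identical analytic Gaussians, this routine recovers a peak at the image center, illustrating the basic reconstruction step behind the tomographic comparisons discussed here.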

FIG. 8.

2D experimental imaging data from the three-photon 400 nm ionization of α-pinene using linear polarization (a), along with 3D reconstructions created using (b) AIR, (c) pBASEX, and (d) FBP tomography.


Alternatively, AIR may be used in place of FBP tomography using just a single projected image. The AIR network is trained using data generated with Eqs. (10) and (14). As with our image-to-image Abel inversion ANN, the parameters of our training sets must be further adjusted to reflect any features present in our real data; namely, each of the photoelectron images contains a focused spot of signal in the center. This is modeled in our training by adding a randomized narrow and intense Gaussian feature in the center of each of our training images, as was done previously in Sec. III C. We again find that we do not need to consider any Poissonian statistical details associated with the particle counting nature of our imaging experiments. The image data here are of sufficiently high quality that this statistical noise is negligible.

With training complete, comparisons may be drawn between the AIR prediction of the original 3D distribution (see Fig. 8) and the pBASEX and FBP reconstructions. Of these, the FBP reconstruction presents with the most noise. This is unsurprising given that both pBASEX and AIR are designed to produce reconstructions with perfectly smooth angular distributions. Similar filtering options may be available for processing tomographic imaging data, but this exceeds the scope of our present work. More detailed comparisons can once again be made by examining the angle-integrated radial distributions and angular parameters extracted from each distribution, as summarized in Fig. 9. Although the radial distributions all seem to be in excellent agreement with each other, the β-parameters are less consistent. This issue is most notable in the FBP reconstruction, where the much higher levels of background noise make extracting anisotropy parameters more challenging, especially for the weaker inner peak image feature. AIR, however, performs extremely well, with an output similar to pBASEX. Despite this success, however, the true potential advantage of AIR lies in the ability to predict more complex distributions that break cylindrical symmetry using just a single 2D projection image.

FIG. 9.

Velocity distributions and β-parameters for the three-photon 400 nm ionization of α-pinene, obtained following application of the different reconstruction methods illustrated in Fig. 8. The β-parameters are averaged over the features centered around 70 pixels (outer peak) and 45 pixels (inner peak), respectively. The FBP tomography method has a large discrepancy when compared with the other reconstruction approaches. See the main text for details.


Finally, we now return to consider a two-photon ionization scenario, only this time involving orthogonal pump–probe polarization geometries with the polarization vectors lying along the x and y axes, respectively (as defined in Fig. 1). The corresponding photofragment angular distribution now takes the full generalized angular form for a two-photon process49 

Φ(θ, ϕ) = Σl=0,2,4 Σm=0,2 (m ≤ l) BlmPlm(cos θ)cos(mϕ).
(15)

The m ≠ 0 terms are now included here due to an absence of cylindrical symmetry and are the reason that experiments measuring such distributions using conventional 2D VMI approaches cannot rely on the Abel transform for data processing. One alternative strategy for analyzing data of this form is to use the Fourier moment analysis technique.19 Although this technique is a powerful reconstruction method for analyzing distributions that do not possess cylindrical symmetry, pump and probe laser polarizations must be controlled independently to provide two different projection images. This, however, is not possible in some experimental measurements (e.g., one-color multiphoton ionization using elliptical polarization).

The ill-posed nature of the problem described by Eq. (15) arises, in part, because different I(x, y, z) can project to give similar P(y, z) images. To initially simplify this issue and make it more easily solvable within an AIR network, we, therefore, begin by considering only one spherical shell in our simulated training data [i.e., in Eq. (10), n = 1]. To illustrate the performance of the trained AIR network, we consider once again a VMI image generated by sampling and projecting 10⁵ trajectories from a simulated distribution. This image was passed to AIR, from which we retrieve a prediction of the original distribution. For comparison with the FBP tomographic reconstruction approach, 36 additional images were also simulated by rotating and projecting the initial 3D distribution in 5° increments and then sampling a further 10⁵ trajectories for each. The simulated 3D distribution, its 2D projection, and the corresponding AIR and tomographic reconstructions are all shown in Fig. 10.
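The trajectory-sampling step used to build these noisy test projections can be emulated by drawing voxel indices from the normalized distribution and histogramming onto the detector plane. This is a hypothetical sketch; the authors' sampling procedure may differ in detail.

```python
import numpy as np

def sample_and_project(I, n_samples=100_000, seed=0):
    """Draw particle 'trajectories' from a voxelized distribution I(x, y, z)
    and histogram them onto the detector (y, z) plane."""
    rng = np.random.default_rng(seed)
    p = (I / I.sum()).ravel()                       # normalized voxel probabilities
    idx = rng.choice(I.size, size=n_samples, p=p)   # sample voxel indices
    _, y, z = np.unravel_index(idx, I.shape)        # dropping x performs the projection
    P = np.zeros(I.shape[1:])
    np.add.at(P, (y, z), 1)                         # accumulate detector counts
    return P
```

Because each trajectory contributes a single count, the resulting P(y, z) naturally carries realistic shot-noise statistics.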

FIG. 10.

Simulated example of the AIR network processing data without any cylindrical symmetry. From an initial 3D distribution (a), 10⁵ particle trajectories are sampled and projected onto a detector plane, emulating a typical VMI experiment and producing the projection image (b). Image (b) is then processed using a trained AIR network to give (c), a prediction of the original 3D distribution from the single 2D projection. A standard filtered back projection (FBP) tomographic reconstruction algorithm was used to generate distribution (d) from a total of 37 projections (where the distribution has been rotated in 5° increments between −90° and 90° before projection), each also consisting of 10⁵ sampled particle trajectories. A ±7.5% signal threshold has been applied to the tomographic reconstruction to filter out some of the background noise introduced by the FBP algorithm.


Once again, the AIR prediction appears to perform extremely well, closely matching both the original simulated distribution and the tomography approach (although with significantly less noise). Although it is still possible to calculate a straightforward integrated radial distribution, our angular analysis requires a different approach to that used in earlier examples. We first evaluate the Bl0 coefficients exclusively through examination of the ϕ = 45° slice of the distribution [where the m ≠ 0 terms are all equal to zero due to the cos(2ϕ) dependence in Eqs. (13) and (15)] and expanding this slice in Legendre polynomials. Once calculated, the contribution of these terms to the full distribution can be subtracted away, leaving behind only the m ≠ 0 terms, which are then extracted in a second fitting step by expanding the ϕ = 0° slice using the appropriate associated Legendre polynomials.
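The two-step fitting procedure just described might be sketched as follows. This is hypothetical code, truncated at l = 2 for brevity where the actual analysis retains terms up to l = 4.

```python
import numpy as np

def P2(x):
    return 0.5 * (3 * x**2 - 1)        # Legendre P_2

def P22(x):
    return 3 * (1 - x**2)              # associated Legendre P_2^2

def two_step_fit(slice_45, slice_0, cos_th):
    """Two-step B_lm extraction sketch from angular slices of a reconstruction.

    slice_45 / slice_0 : angular intensity profiles at phi = 45 deg and phi = 0 deg
    cos_th             : cos(theta) values at which the slices are sampled
    """
    # Step 1: at phi = 45 deg, cos(2*phi) = 0, so only the m = 0 terms survive.
    A = np.column_stack([np.ones_like(cos_th), P2(cos_th)])
    B00, B20 = np.linalg.lstsq(A, slice_45, rcond=None)[0]
    # Step 2: subtract the m = 0 contribution from the phi = 0 slice
    # (where cos(2*phi) = 1) and fit the remainder with P_2^2.
    resid = slice_0 - (B00 + B20 * P2(cos_th))
    B22 = np.linalg.lstsq(P22(cos_th)[:, None], resid, rcond=None)[0][0]
    return B00, B20, B22
```

Fitting the two slices sequentially decouples the m = 0 and m = 2 coefficients, which is precisely why the ϕ = 45° slice is examined first.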

This procedure was performed on both the AIR and tomographic reconstructions, and the retrieved values are shown in Fig. 11. Both methods under consideration return values for the Blm coefficients that are in excellent agreement with the initial simulated values. To achieve a result of comparable quality to AIR, however, a total of 37 (5° step) projections were used in the tomographic reconstruction. Thus, AIR offers a potential order of magnitude improvement (or better) in data acquisition time without any significant compromise on accuracy. Extending the applicability of the AIR method to images with multiple ring features lacking cylindrical symmetry is also clearly desirable. This additional complexity, however, makes the ill-posed nature of this inversion problem even more challenging to overcome. Training a network using a random number of projected Gaussian shell features (like those in Secs. III and IV A) is too general for this problem and leads to poor reconstructions that are not always in sufficiently satisfactory agreement with validation data. By adding a further unknown layer to the reconstruction problem, memory requirements dictate a trade-off between the range and variance of the whole training set and the angular complexity of any individual simulated distribution. Instead, we now assume that the number of ring features is fixed when we generate our training data, giving our AIR network the greatest possible chance of extracting accurate information and reliable reconstructions. We again stress that our machine learning approach is not intended to be “black box.” As the complexity of the inversion problem increases, the number of a priori assumptions that are built directly into the trained network must also, naturally, get larger. Any constraints in the training set design that can be reliably or unambiguously imposed should be actively exploited.
Therefore, as a final demonstration of AIR, we retrain our architecture to recognize and “reinflate” distributions that contain precisely two rings/shells and no cylindrical symmetry, using Eqs. (10) and (15) with n = 2. A simple test distribution is then considered, from which 10⁵ particle trajectories were once again sampled and projected. These distributions, I(x, y, z) and P(y, z), are shown in Fig. 12, along with the AIR prediction from the newly trained network and a tomographic reconstruction (produced using the same number of projections and the same FBP algorithm as in the previous simulated example). We note again that AIR handles the noise in P(y, z) extremely well, in stark contrast to the tomographic case. The Blm coefficients are evaluated for the distribution using the two-step strategy outlined earlier, and numerical values for each of the two ring features obtained via both AIR and tomography are shown in Fig. 13. Excellent agreement with the initial simulated conditions is once again obtained, demonstrating that, if trained correctly, the AIR architecture presented here is capable of “reinflating” 2D projections and accurately predicting their original 3D form even when they are not cylindrically symmetric.

FIG. 11.

Spherical harmonic coefficients Blm calculated from the distributions (a), (c), and (d) in Fig. 10. Values are averaged over the main ring feature in the distribution and normalized by the averaged value of B00 determined for each reconstructed distribution. Both AIR and the tomographic reconstruction produce distributions in excellent agreement with the initial simulation. Error bars denote 1σ uncertainties.

FIG. 12.

Simulated example of the AIR network processing more spectrally complicated data that do not contain any cylindrical symmetry. From an initial simulated 3D distribution (a), 10⁵ particle trajectories are simulated and projected onto a detector plane, emulating a typical VMI experiment and producing the projection image (b). Image (b) is then processed using AIR to give (c), a prediction of the original 3D distribution from the single 2D projection image. The same FBP algorithm and noise filtering as used for the data presented in Fig. 10 were applied to produce the image shown in (d).

FIG. 13. The spherical harmonic coefficients Blm calculated from the distributions (a), (c), and (d) in Fig. 12. Values are an average across the width of each ring feature in the distribution (the outer ring in the top panel and the inner ring in the bottom panel) and normalized by the averaged value of B00 determined for each ring in each reconstructed distribution. Error bars denote 1σ uncertainties.
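The ring-averaged normalization described in the Fig. 13 caption amounts to a simple operation on the radial profiles of the fitted coefficients. A minimal sketch, in which the profile shapes, ring position, and anisotropy values are all illustrative rather than taken from the paper:

```python
import numpy as np

# Hypothetical radial profiles of fitted spherical harmonic coefficients
# B_lm(r), one value per radial bin
r = np.arange(100)
B00 = np.exp(-0.5 * ((r - 60) / 5.0) ** 2)  # ring feature centered at r = 60
B20 = 0.4 * B00                             # anisotropy that tracks the ring
B22 = -0.1 * B00

def ring_average(coeff, b00, ring):
    """Average a coefficient across the width of a ring feature and
    normalize by the averaged B00 value determined for that ring."""
    return coeff[ring].mean() / b00[ring].mean()

ring = slice(50, 71)  # radial bins spanning the width of the ring
print(ring_average(B20, B00, ring))  # ≈ 0.4
print(ring_average(B22, B00, ring))  # ≈ -0.1
```

Normalizing by the ring-averaged B00 makes the anisotropy values comparable across the differently scaled reconstructions.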

Image-to-image ANNs offer an alternative route for calculating the inverse Abel transform of 2D velocity-mapped charged particle distributions from unimolecular photoionization or photodissociation processes. We have demonstrated that this method performs comparably to other popular inversion algorithms in use today, providing general proof-of-concept and benchmark results. The real potential of this method, however, lies in its use as a tool for the direct reconstruction or “reinflation” of individual 2D image projections originating from particle distributions with no cylindrical symmetry. Our successful initial demonstration of this arbitrary image reinflation (AIR) approach provides a novel strategy that can be used in place of more complex time-sliced or tomographic experimental measurement techniques. We anticipate that neural networks and the AIR concept will be an appealing option for many experimentalists using charged particle imaging to study chemical dynamics. We also hope that this work will provide a starting platform for exploring additional machine learning developments within this community. To facilitate this, a copy of the AIR neural network architecture and related training materials is available on GitHub.61
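Networks of this kind are trained on simulated (projection, distribution) pairs. A minimal numpy sketch of how such a pair can be generated for the cylindrically symmetric (Abel-invertible) case — the array size and the Gaussian-shell test distribution are illustrative assumptions, not the paper's actual training pipeline:

```python
import numpy as np

# Build a cylindrically symmetric 3D test distribution on a grid,
# then sum along one axis to obtain the 2D projection a detector
# would record; the central slice is what an inversion should recover.
N = 64
ax = np.arange(N) - N // 2
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
vol = np.exp(-0.5 * ((r - 20) / 3.0) ** 2)  # isotropic Gaussian shell

projection = vol.sum(axis=0)   # simulated detector image, P(y, z)
central_slice = vol[N // 2]    # target x = 0 slice of the 3D distribution
print(projection.shape, central_slice.shape)
```

The (projection, central_slice) pair plays the role of one (input, target) training example; for the non-symmetric AIR case, the target is instead the full 3D distribution.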

This work was supported by the Leverhulme Trust (Research Project Grant No. RPG-2012-735), Carnegie Trust Research Incentive Grant (No. 70264), EPSRC Platform Grant (No. EP/P001459/1), and EPSRC Quantum Imaging Grant (No. EP/T00097X/1). We also acknowledge Heriot-Watt University for providing C.S. and A.R. with Ph.D. funding. The authors also thank Jason Lee (DESY) for helpful discussions.

The authors have no conflicts to disclose.

C.S. and A.R. contributed equally to this work.

The data that support the findings of this study are openly available on GitHub at https://github.com/HWQuantum/AIR. These data are also available via http://doi.org/10.17861/1b0da270-4812-476b-9226-43e6467792c6.

1. D. W. Chandler, P. L. Houston, and D. H. Parker, J. Chem. Phys. 147, 013601 (2017).
2. D. W. Chandler and P. L. Houston, J. Chem. Phys. 87, 1445 (1987).
3. A. T. J. B. Eppink and D. H. Parker, Rev. Sci. Instrum. 68, 3477 (1997).
4. B. Chang, R. C. Hoetzlein, J. A. Mueller, J. D. Geiser, and P. L. Houston, Rev. Sci. Instrum. 69, 1665 (1998).
5. W. Li, S. D. Chambreau, S. A. Lahankar, and A. G. Suits, Rev. Sci. Instrum. 76, 063106 (2005).
6. D. Townsend, M. P. Minitti, and A. G. Suits, Rev. Sci. Instrum. 74, 2530 (2003).
7. J. J. Lin, J. Zhou, W. Shiu, and K. Liu, Rev. Sci. Instrum. 74, 2495 (2003).
8. C. R. Gebhardt, T. P. Rakitzis, P. C. Samartzis, V. Ladopoulos, and T. N. Kitsopoulos, Rev. Sci. Instrum. 72, 3848 (2001).
9. K. Mizuse, R. Fujimoto, and Y. Ohshima, Rev. Sci. Instrum. 90, 103107 (2019).
10. M. Wollenhaupt, C. Lux, M. Krug, and T. Baumert, ChemPhysChem 14, 1341 (2013).
11. C. Smeenk, L. Arissian, A. Staudte, D. M. Villeneuve, and P. B. Corkum, J. Phys. B: At., Mol. Opt. Phys. 42, 185402 (2009).
12. M. Wollenhaupt, M. Krug, J. Köhler, T. Bayer, C. Sarpe-Tudoran, and T. Baumert, Appl. Phys. B 95, 647 (2009).
13. M. Eklund, H. Hultgren, I. Kiyan, H. Helm, and D. Hanstrop, Phys. Rev. A 102, 023114 (2020).
14. A. T. Clark, J. P. Crooks, I. Sedgwick, R. Turchetta, J. W. L. Lee, J. J. John, E. S. Wilman, L. Hill, E. Halford, C. S. Slater, B. Winter, W. H. Yuen, S. H. Gardiner, M. L. Lipciuc, M. Brouard, A. Nomerotski, and C. Vallance, J. Phys. Chem. A 116, 10897 (2012).
15. A. Zhao, M. van Beuzekom, B. Bouwens, D. Byelov, I. Chakaberia, C. Cheng, E. Maddox, A. Nomerotski, P. Svihra, J. Visser, V. Vrba, and T. Weinacht, Rev. Sci. Instrum. 88, 113104 (2017).
16. R. E. Continetti, Annu. Rev. Phys. Chem. 52, 165 (2001).
17. S. K. Lee, F. Cudry, Y. F. Lin, S. Lingenfelter, A. H. Winney, L. Fan, and W. Li, Rev. Sci. Instrum. 85, 123303 (2014).
18. H. Buhr, M. B. Mendes, O. Novotný, D. Schwalm, M. H. Berg, D. Bing, O. Heber, C. Krantz, D. A. Orlov, M. L. Rappaport, T. Sorg, J. Stützel, J. Varju, A. Wolf, and D. Zajfman, Phys. Rev. A 81, 062702 (2010).
19. M. J. Bass, M. Brouard, A. P. Clark, and C. Vallance, J. Chem. Phys. 117, 8723 (2002).
20. M. Ryazanov and H. Reisler, J. Chem. Phys. 138, 144201 (2013).
21. S. K. Lee, Y. F. Lin, S. Lingenfelter, L. Fan, A. H. Winney, and W. Li, J. Chem. Phys. 141, 221101 (2014).
22. B. J. Whitaker, Imaging in Molecular Dynamics (Cambridge University Press, Cambridge, 2003).
23. R. A. Livingstone, J. O. F. Thompson, M. Iljina, R. J. Donaldson, B. J. Sussman, M. J. Paterson, and D. Townsend, J. Chem. Phys. 137, 184304 (2012).
24. E. W. Hansen and P.-L. Law, J. Opt. Soc. Am. A 2, 510 (1985).
25. M. J. J. Vrakking, Rev. Sci. Instrum. 72, 4084 (2001).
26. B. Dick, Phys. Chem. Chem. Phys. 16, 570 (2013).
27. B. Dick, Phys. Chem. Chem. Phys. 21, 19499 (2019).
28. F. Renth, J. Riedel, and F. Temps, Rev. Sci. Instrum. 77, 033103 (2006).
29. K. Zhao, T. Colvin, W. T. Hill, and G. Zhang, Rev. Sci. Instrum. 73, 3044 (2002).
30. C. Bordas, F. Paulig, H. Helm, and D. L. Huestis, Rev. Sci. Instrum. 67, 2257 (1996).
31. S. Manzhos and H.-P. Loock, Comput. Phys. Commun. 154, 76 (2002).
32. J. O. F. Thompson, C. Amarasinghe, C. D. Foley, N. Rombes, Z. Gao, S. N. Vogels, S. Y. T. van de Meerakker, and A. G. Suits, J. Chem. Phys. 147, 074201 (2017).
33. V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002).
34. J. O. F. Thompson, C. Amarasinghe, C. D. Foley, and A. G. Suits, J. Chem. Phys. 147, 013913 (2017).
35. G. A. Garcia, L. Nahon, and I. Powis, Rev. Sci. Instrum. 75, 4989 (2004).
36. G. M. Roberts, J. L. Nixon, J. Lecointre, E. Wrede, and J. R. R. Verlet, Rev. Sci. Instrum. 80, 053104 (2009).
37. G. R. Harrison, J. C. Vaughan, B. Hidle, and G. M. Laurent, J. Chem. Phys. 149, 129901 (2018).
38. D. D. Hickstein, S. T. Gibson, R. Yurchak, D. D. Das, and M. Ryazanov, Rev. Sci. Instrum. 90, 065115 (2019).
39. X. F. Ma and T. Takeda, Nucl. Instrum. Methods Phys. Res. 492, 178 (2002).
40. C. Sparling, A. Ruget, N. Kotsina, J. Leach, and D. Townsend, ChemPhysChem 22, 76 (2021).
41. A. G. Suits and O. S. Vasyutinskii, Chem. Rev. 108, 3706 (2008).
42. J. W. L. Lee, H. Köckert, D. Heathcote, D. Popat, R. T. Chapman, G. Karras, P. Majchrzak, E. Springate, and C. Vallance, Commun. Chem. 3, 72 (2020).
43. S. Rozen, A. Comby, E. Bloch, S. Beauvarlet, D. Descamps, B. Fabre, S. Petit, V. Blanchet, B. Pons, N. Dudovich, and Y. Mairesse, Phys. Rev. X 9, 031004 (2019).
44. C. D. Rankine, M. M. M. Madkhali, and T. J. Penfold, J. Phys. Chem. A 124, 4263 (2020).
45. M. M. M. Madkhali, C. D. Rankine, and T. J. Penfold, Phys. Chem. Chem. Phys. 23, 9259 (2021).
46. K. T. Schütt, M. Gastegger, A. Tkatchenko, K. R. Müller, and R. J. Maurer, Nat. Commun. 10, 5024 (2019).
47. J. Westermayr, M. Gastegger, K. T. Schütt, and R. J. Maurer, J. Chem. Phys. 154, 230903 (2021).
48. O. Ronneberger, P. Fischer, and T. Brox, in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer International Publishing, Cham, 2015).
49. K. L. Reid, Annu. Rev. Phys. Chem. 54, 397 (2003).
50. D. P. Kingma and J. Ba, in 3rd International Conference for Learning Representations, San Diego, 2015.
51. L. B. Klein, J. O. F. Thompson, S. W. Crane, L. Saalbach, T. I. Sølling, M. J. Paterson, and D. Townsend, Phys. Chem. Chem. Phys. 18, 25070 (2016).
52. G. A. Garcia, L. Nahon, M. Lebech, J.-C. Houver, D. Dowek, and I. Powis, J. Chem. Phys. 119, 8781 (2003).
53. I. Powis, C. J. Harding, G. A. Garcia, and L. Nahon, ChemPhysChem 9, 475 (2008).
54. M. H. M. Janssen and I. Powis, Phys. Chem. Chem. Phys. 16, 856 (2013).
55. C. S. Lehmann, N. B. Ram, I. Powis, and M. H. M. Janssen, J. Chem. Phys. 139, 234307 (2013).
56. C. Lux, M. Wollenhaupt, T. Bolze, Q. Liang, J. Köhler, C. Sarpe, and T. Baumert, Angew. Chem., Int. Ed. 51, 5001 (2012).
57. B. Ritchie, Phys. Rev. A 13, 1411 (1976).
58. J. Miles, D. Fernandes, A. Young, C. M. M. Bond, S. W. Crane, O. Ghafur, D. Townsend, J., and J. B. Greenwood, Anal. Chim. Acta 984, 134 (2017).
59. A. Comby, E. Bloch, C. M. M. Bond, D. Descamps, J. Miles, S. Petit, S. Rozen, J. B. Greenwood, V. Blanchet, and Y. Mairesse, Nat. Commun. 9, 5212 (2018).
60. A. Comby, S. Beaulieu, M. Boggio-Pasqua, D. Descamps, F. Légaré, L. Nahon, S. Petit, B. Pons, B. Fabre, Y. Mairesse, and V. Blanchet, J. Phys. Chem. Lett. 7, 4514 (2016).
61. C. Sparling, A. Ruget, J. Leach, and D. Townsend (2021). “Arbitrary image reinflation: A deep learning technique for recovering 3D photoproduct distributions from a single 2D projection,” GitHub repository, https://github.com/HWQuantum/AIR.