We formulate the neutron computed tomography reconstruction problem as an inverse problem with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses the high-frequency artifacts which appear in filtered back projections. To compute solutions of this problem efficiently, we implement a variation of the split Bregman algorithm; owing to the error-forgetting nature of the algorithm, the computational cost of each update can be significantly reduced via very inexact approximate linear solvers. We demonstrate the effectiveness of the algorithm in the significantly low-angular-sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles is used.

## I. INTRODUCTION

The High Flux Isotope Reactor (HFIR) CG-1D cold neutron imaging beamline serves a broad science portfolio spanning energy research,^{1} nuclear materials research,^{2} physics,^{3} plant physiology,^{4} engineering,^{5} and archeology.^{6} Neutrons are capable of detecting light elements such as hydrogen and lithium due to their ability to penetrate high-Z materials. Thus, neutron computed tomography (CT) offers a complement to imaging techniques such as X-ray CT. Multimodal methods (e.g., Refs. 7–9) have been developed for samples with both high- and low-Z materials; however, our current focus lies in the high-Z case, and so we rely on neutron CT. However, in neutron-based tomography, an intrinsic problem is the relatively low flux of neutrons that can be produced, as opposed to x-ray and electron probes.^{10} This results in, for a fixed time, either measurements with relatively few events or measurements requiring longer acquisition times. Our interest lies in the latter scenario.

Parallel to the advancements in the field of neutron computed tomography, there have been significant strides in the mathematical reconstruction of tomographic imaging. One of the workhorse algorithms of (neutron) tomography is the filtered back-projection (FBP) method.^{10,11} In recent years, there has been a revolution in the methods used in tomography as a result of the advent of *ℓ*^{1}-regularization based optimization methods.^{12–15} Using these methods, it is now possible to recover the benchmark Shepp-Logan phantom exactly.^{15} Image reconstruction algorithms for computed tomography^{16} have also made great strides using iterative reconstruction methods, especially variations of the Algebraic Reconstruction Technique; see Refs. 17–21 and references therein. A focus of the development of these techniques has been to reduce the number of projections needed for reconstruction.^{22} The current study explores the use of total variation approaches for neutron tomography, specifically focusing on improving the computational time needed for reconstructions in scenarios where only low angular sampling rates are available.

The general form of the inverse problem in neutron tomography is given as finding the attenuation coefficient, *μ*, of an object such that
$$ -\ln\!\left(\frac{I}{I_0}\right)\!(t,\theta) = \iint \mu(x,y)\,\delta(x\cos\theta + y\sin\theta - t)\,dx\,dy + \sigma, \qquad (1) $$

where *δ* is the Dirac delta function, the variable *σ* is the noise in the measurement, *I*_{0} and *I* are the incident and transmitted neutron intensities measured as a function of position *t*, and the angle of rotation [with respect to the (*x*, *y*) Cartesian coordinates] is *θ*. A schematic of the problem is shown in Fig. 1. The accuracy in determining *μ* depends critically upon the number of angles and positions that can be measured and the signal-to-noise ratio of the measurements. Our goal is to develop an algorithm which can perform the reconstruction despite under-resolved angular data by incorporating recent methods in *ℓ*^{1} image reconstruction.

In practice, given data *I*/*I*_{0}, the inverse problem is written as solving the following minimization problem:
$$ \mu^{*} = \operatorname*{argmin}_{\mu}\; \frac{1}{2}\sum_{\theta}\int \left( \ln\!\left(\frac{I}{I_0}\right)\!(t,\theta) + \iint \mu(x,y)\,\delta(x\cos\theta + y\sin\theta - t)\,dx\,dy \right)^{2} dt + \alpha\,\Phi(\mu), \qquad (2) $$

where Φ represents a regularizing term which promotes certain desired structures in the reconstructed attenuation object *μ*. Here *α* > 0 is a parameter which balances between the reconstruction’s fidelity to measured data and its regularity. An equivalent way of writing this problem is as follows:
$$ \mu^{*} = \operatorname*{argmin}_{\mu}\; \frac{1}{2}\left\| \ln\!\left(\frac{I}{I_0}\right) + R_{\theta}\,\mu \right\|_{L^{2}}^{2} + \alpha\,\Phi(\mu), \qquad (3) $$

where the *L*^{2} norm is for functions of the variables *θ* and *t* and *R*_{θ} is the Radon transform associated with the angle *θ*. It has been proven recently that, for piecewise constant images, the best choice for Φ in Eq. (2) is total variation (TV) based regularization.^{15} In a discrete setting (such as practical imaging applications), this means that Φ(*μ*) takes the form
$$ \Phi(\mu) = \|\nabla\mu\|_{1} = \sum_{i,j} \sqrt{\,\left|(\nabla_x \mu)_{i,j}\right|^{2} + \left|(\nabla_y \mu)_{i,j}\right|^{2}\,}. \qquad (4) $$

In contrast to Refs. 23 and 24, we are here using a standard *L*^{2} fidelity penalty term, meaning we are not assuming a Poisson model for the noise in the detector. As noted above, a challenge in neutron imaging is the trade-off between increased acquisition time for a projection against the relatively low number of counts the detector experiences. Our application is focused on problems where we allow for increased acquisition times per angular measurement such that a Gaussian model is a reasonable simplification; with a reduction in the number of projections needed for a reconstruction, this increase does not contribute significantly to the overall time required for measurements. Regarding the second functional Φ in the objective, other penalty terms, such as polynomial annihilation-based terms, have been developed for imaging with more complicated structures.^{25–27} Our present focus is on neutron imaging where a piecewise constant *μ* is of primary interest. However, polynomial annihilation (PA)-based methods can be easily included using the methods described in the present work.
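The discrete isotropic TV penalty just described can be evaluated directly. Below is a minimal NumPy sketch; the function name and the forward-difference discretization with replicated boundary values are our choices, not prescribed by the text:

```python
import numpy as np

def tv_isotropic(mu):
    """Discrete isotropic total variation of a 2-D image.

    Uses forward differences; the last row/column difference is taken
    against a replicated boundary value (one common convention).
    """
    gx = np.diff(mu, axis=1, append=mu[:, -1:])  # horizontal differences
    gy = np.diff(mu, axis=0, append=mu[-1:, :])  # vertical differences
    return float(np.sum(np.sqrt(gx**2 + gy**2)))
```

On a piecewise constant image, this value is the jump magnitude summed over the edge set, which is why a TV penalty favors such images.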

In recent years, *ℓ*^{1} regularization has received considerable attention in the design of image reconstruction algorithms for under-sampled and noisy data, for images that have some measurable features with a sparse representation; this is compatible with applications where a relatively small number of measurements are taken.^{13,15,28} In general, it remains difficult to develop efficient and robust techniques for solving such image reconstruction problems.

The split Bregman algorithm^{29} is a numerically efficient and stable algorithm that has successfully solved the *ℓ*^{1}-regularized reconstruction problem for a variety of applications. In this paper, we use the split Bregman algorithm as a launching point to develop a new technique for solving the neutron tomography inverse problem using Eq. (4) as a penalty term. We will demonstrate that our method yields improved accuracy in regions away from discontinuities, especially in the case of under-sampled data. We will adopt the notation and terminology of Ref. 29 to describe our algorithm. Recent work on *ℓ*^{1} regularization for x-ray based computed tomography has relied on a Douglas-Rachford splitting algorithm.^{30} While there is an equivalence, under suitable reformulations, between exact split Bregman and Douglas-Rachford,^{31} the error-forgetting nature of split Bregman^{32} gives, in our experience, good convergence behavior despite using very inexact updates which are relatively inexpensive to compute. Previous work on sparse-angle neutron-based CT has also used split Bregman,^{21,22} though mainly to solve exact updates within a hybrid reconstruction technique.

A well-known drawback of using the TV *ℓ*^{1} regularization term is that the reconstructed image defaults to a piecewise constant approximation. While suitable for some applications, in others it is desirable to recover additional detail. For example, total generalized variation (TGV), developed in Ref. 33, and multi-wavelets, in Ref. 34, allow for finer details in smooth regions of the reconstruction. The already-mentioned polynomial annihilation (PA) transform was originally designed in Ref. 35 and used in Ref. 36 to demonstrate that encouraging a *sparsity of edges* in the underlying image yields improved accuracy for both image reconstruction and edge identification. However, in the present application, we have anecdotally seen that TV provides the best reconstructions.

## II. SPLIT BREGMAN ALGORITHM

In order to solve (3) with the regularizer ||∇*μ*||_{1} efficiently in multiple dimensions, we use the split Bregman algorithm, which we present below. The split Bregman algorithm, developed in Ref. 29, was shown to be equivalent to the Bregman iteration. Its popularity is due to its speed and to the fact that its only nonlinear steps involve soft thresholding; all other aspects of the algorithm involve solving invertible linear systems. In recent years, the split Bregman algorithm has been used to solve a broad class of *ℓ*^{1}-regularized optimization problems. In particular, it has been used successfully and efficiently in MRI reconstructions, in which context (3) was called the “sparse MRI data reconstruction problem.”^{37–39} We now describe how to efficiently solve (3) with the TV penalty term via an extension of the split Bregman algorithm for TV denoising as described in Ref. 29.

We begin by letting ∇_{x} and ∇_{y} denote the finite difference operators in the respective coordinate directions. Additionally, throughout, for a linear operator *A*, its adjoint is denoted by *A*^{*}. We denote by *M* the linear operator which restricts the Radon transform of *μ* to the angles along which measurements have been taken to generate **I**/**I**_{0}. The split Bregman algorithm becomes a sequence of minimization problems of the form
$$ (\mu^{k+1}, d_x^{k+1}, d_y^{k+1}) = \operatorname*{argmin}_{\mu,\,d_x,\,d_y}\; \|d_x\|_{1} + \|d_y\|_{1} + \frac{\alpha}{2}\left\| M R_{\theta}\mu - \hat{I}^{k} \right\|_{2}^{2} + \frac{\lambda}{2}\left\| d_x - \nabla_x\mu - b_x^{k} \right\|_{2}^{2} + \frac{\lambda}{2}\left\| d_y - \nabla_y\mu - b_y^{k} \right\|_{2}^{2}, \qquad (5) $$

followed by the updates
$$ b_x^{k+1} = b_x^{k} + \left(\nabla_x \mu^{k+1} - d_x^{k+1}\right), \qquad b_y^{k+1} = b_y^{k} + \left(\nabla_y \mu^{k+1} - d_y^{k+1}\right), \qquad \hat{I}^{k+1} = \hat{I}^{k} + \frac{I}{I_0} - M R_{\theta}\,\mu^{k+1}. \qquad (6) $$

Following the structure of Ref. 29, the above can be rewritten as
$$ \begin{aligned} \mu^{k+1} &= \operatorname*{argmin}_{\mu}\; \frac{\alpha}{2}\left\| M R_{\theta}\mu - \hat{I}^{k} \right\|_{2}^{2} + \frac{\lambda}{2}\left\| d_x^{k} - \nabla_x\mu - b_x^{k} \right\|_{2}^{2} + \frac{\lambda}{2}\left\| d_y^{k} - \nabla_y\mu - b_y^{k} \right\|_{2}^{2}, \\ (d_x^{k+1}, d_y^{k+1}) &= \operatorname*{argmin}_{d_x,\,d_y}\; \|d_x\|_{1} + \|d_y\|_{1} + \frac{\lambda}{2}\left\| d_x - \nabla_x\mu^{k+1} - b_x^{k} \right\|_{2}^{2} + \frac{\lambda}{2}\left\| d_y - \nabla_y\mu^{k+1} - b_y^{k} \right\|_{2}^{2}. \end{aligned} $$

We note that the update rule for *μ*^{k+1} is simply solving the optimality system for the first *L*^{2} optimization in (5). Additionally, one may modify this as in Ref. 29 so that several updates on $\mu^{k+1}, d_x^{k+1}, d_y^{k+1}, b_x^{k+1}, b_y^{k+1}$ can be performed in an inner loop before updating $\hat{I}^{k+1}$. In practice, the most expensive portion of the algorithm is inverting $\alpha R_\theta^{*} M^{*} M R_\theta + \lambda (\nabla_x)^T \nabla_x + \lambda (\nabla_y)^T \nabla_y$. Unlike in Ref. 29, there does not appear to be a reasonable method for determining a closed-form solution to this linear system. However, the error-forgetting properties of the Bregman iteration discussed there and in Ref. 32 mean that the inversion in the update rule for *μ*^{k+1} does not need to be computed exactly in order for the iterations to converge. That is, we typically only need an approximation for the update in order for the split Bregman algorithm to converge to a solution. In light of this property, instead of fully solving
$$ \left( \alpha R_{\theta}^{*} M^{*} M R_{\theta} + \lambda (\nabla_x)^{T}\nabla_x + \lambda (\nabla_y)^{T}\nabla_y \right) \mu^{k+1} = rhs^{k}, \qquad (7) $$

we instead only compute an approximate solution via the conjugate gradient (CG) algorithm. This approximation can be rather inexact without affecting the overall convergence of the algorithm.^{32} Therefore, we replace the update rule with
$$ \mu^{k+1} = CG\!\left( \alpha R_{\theta}^{*} M^{*} M R_{\theta} + \lambda (\nabla_x)^{T}\nabla_x + \lambda (\nabla_y)^{T}\nabla_y,\; rhs^{k},\; \mu^{k},\; m \right), \qquad (8) $$

where $CG(A, b, x_0, m)$ denotes the result of the conjugate gradient (CG) algorithm applied to *Ax* = *b* with initial guess *x*_{0} and a maximum of *m* iterations.
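As a concrete sketch of this inexact update (the function name is ours; the operator is passed as a callable so that the Radon transform never needs to be formed as an explicit matrix), the truncation is simply the `maxiter` argument of a standard CG routine:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def cg_update(apply_A, rhs, mu0, m):
    """Approximately solve A mu = rhs with at most m CG iterations.

    apply_A should implement the SPD operator
    alpha R* M* M R + lambda (grad_x^T grad_x + grad_y^T grad_y)
    acting on flattened images; mu0 (the previous iterate) warm-starts CG.
    """
    n = rhs.size
    A = LinearOperator((n, n), matvec=apply_A, dtype=np.float64)
    # maxiter=m deliberately truncates the solve; split Bregman's
    # error forgetting tolerates the resulting inexactness.
    mu, _ = cg(A, rhs, x0=mu0, maxiter=m)
    return mu
```

Because the Bregman iteration forgets these solver errors, *m* = 5–10 is typically sufficient in practice (Sec. III).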

This means that *K* split Bregman iterations require only 2*K*(1 + *m*) evaluations of the forward and adjoint Radon transforms; as these two transforms are the most expensive to evaluate, we in practice look to reduce *m* while still maintaining good convergence. For a square *μ*, let *N*_{x} be the number of pixels in either direction and *N*_{θ} be the number of angular measurements taken. For fixed *m*, then, the complexity of computing $rhs^k$, $\mu^{k+1}$, and $\hat{I}^{k+1}$ is dominated by the $O(N_x^2 N_\theta)$ Radon transform and its adjoint, as the Laplacian operators are $O(N_x^2)$. The update rules for $s^k$, $d_x^k$, $d_y^k$, $b_x^k$, $b_y^k$ are explicit and $O(N_x^2)$. Thus, the overall complexity of an iteration scales with $m N_x^2 N_\theta$. Anecdotally, *m* can be very small; in the results of Sec. III, we use *m* = 5, 10 and still obtain reconstructions which remove the artifacts typically associated with under-resolved angular measurements.

We should briefly remark on the importance of properly constructing *R*^{*}. It is common to construct an approximate inverse Radon transform which often includes a filter or mask to improve the approximate inverse (see, e.g., Ref. 40 for recent work in this direction). In the case where angular measurements fully resolve the data, this approach works reasonably well for our algorithm. However, we find that it tends to require a rather large number of split Bregman iterations to achieve a good reconstruction. Furthermore, in the case where *M* corresponds to subselecting angles (that is, when few angular measurements are taken), this approximation of *R*^{*} breaks down. The close connection between *R*^{*} and the approximate inverse Radon transform no longer holds, resulting in Algorithm 1 diverging and leading to arbitrarily large and oscillatory *μ*. Typically, implementing a discrete Radon transform consists of three separate operations for a series of angles: rotation of data by the angle, interpolation of this result to an appropriate grid, and summation of the interpolated data over the relevant axis. Construction of *R*^{*} requires taking the proper adjoint of each of these operations. Anecdotally, properly determining the adjoint of the interpolation operation for the particular geometry of the data set is vital to obtaining a convergent and efficient Algorithm 1.
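A standard way to validate a candidate adjoint is the dot-product test: ⟨*Rμ*, *y*⟩ must equal ⟨*μ*, *R*^{*}*y*⟩ for random *μ* and *y*. A sketch follows (the test is operator-agnostic, so a dense matrix and its transpose stand in for the Radon pair here; names are our choices):

```python
import numpy as np

def passes_adjoint_test(fwd, adj, shape_in, shape_out, seed=0, rtol=1e-10):
    """Check <fwd(x), y> == <x, adj(y)> for random x, y."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape_in)
    y = rng.standard_normal(shape_out)
    lhs = np.sum(fwd(x) * y)
    rhs = np.sum(x * adj(y))
    return abs(lhs - rhs) <= rtol * max(abs(lhs), abs(rhs), 1.0)

# Toy stand-in: a dense matrix and its transpose are an exact adjoint pair.
A = np.arange(12.0).reshape(3, 4)
assert passes_adjoint_test(lambda x: A @ x, lambda y: A.T @ y, (4,), (3,))
```

Mismatches of the kind described above (e.g., an interpolation step whose adjoint was taken incorrectly) show up immediately as a failure of this test.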

**Algorithm 1.** TV-penalized reconstruction via split Bregman.

1. Initialize $k = 0$, $\mu^0 = R^{*}M^{*}(I/I_0)$, $\hat{I}^0 = I/I_0$, and $b_x^0 = b_y^0 = d_x^0 = d_y^0 = 0$
2. **while** $\|MR_\theta \mu^k - I/I_0\|_2 > \sigma$ **do**
3. $rhs^k = \alpha R_\theta^{*} M^{*} \hat{I}^k + \lambda (\nabla_x)^T (d_x^k - b_x^k) + \lambda (\nabla_y)^T (d_y^k - b_y^k)$
4. $\mu^{k+1} = \left( \alpha R_\theta^{*} M^{*} M R_\theta + \lambda (\nabla_x)^T \nabla_x + \lambda (\nabla_y)^T \nabla_y \right)^{-1} rhs^k$
5. $s^k = \sqrt{ \left|\nabla_x \mu^{k+1} + b_x^k\right|^2 + \left|\nabla_y \mu^{k+1} + b_y^k\right|^2 }$
6. $d_x^{k+1} = \max(s^k - 1/\lambda,\, 0)\, \dfrac{\nabla_x \mu^{k+1} + b_x^k}{s^k}$
7. $d_y^{k+1} = \max(s^k - 1/\lambda,\, 0)\, \dfrac{\nabla_y \mu^{k+1} + b_y^k}{s^k}$
8. $b_x^{k+1} = b_x^k + \left(\nabla_x \mu^{k+1} - d_x^{k+1}\right)$
9. $b_y^{k+1} = b_y^k + \left(\nabla_y \mu^{k+1} - d_y^{k+1}\right)$
10. $\hat{I}^{k+1} = \hat{I}^k + (I/I_0) - M R_\theta \mu^{k+1}$
11. $k = k + 1$
12. **end while**
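For reference, Algorithm 1 translates almost line for line into NumPy. The sketch below assumes user-supplied callables `fwd` and `adj` for $MR_\theta$ and its adjoint; these names, the boundary handling of the difference operators, and the use of the CG-truncated *μ*-update in place of the exact inverse are our choices:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def dx(u):   # forward difference in x (last column zero)
    g = np.zeros_like(u); g[:, :-1] = u[:, 1:] - u[:, :-1]; return g

def dxT(g):  # adjoint of dx
    u = np.zeros_like(g); u[:, :-1] -= g[:, :-1]; u[:, 1:] += g[:, :-1]; return u

def dy(u):   # forward difference in y (last row zero)
    g = np.zeros_like(u); g[:-1, :] = u[1:, :] - u[:-1, :]; return g

def dyT(g):  # adjoint of dy
    u = np.zeros_like(g); u[:-1, :] -= g[:-1, :]; u[1:, :] += g[:-1, :]; return u

def split_bregman_tv(fwd, adj, data, shape, alpha=1.0, lam=1.0,
                     n_iter=200, m=5, sigma=1e-6):
    """Inexact split Bregman for the TV-penalized problem.

    fwd maps a 2-D image to the measured data vector (M R_theta);
    adj is its adjoint, returning a flattened image; data plays the
    role of I/I_0 in the listing.
    """
    mu = adj(data).reshape(shape)
    Ihat = data.copy()
    bx = np.zeros(shape); by = np.zeros(shape)
    dxk = np.zeros(shape); dyk = np.zeros(shape)

    def apply_A(v):  # alpha R*M*MR + lam (dx^T dx + dy^T dy)
        u = v.reshape(shape)
        out = alpha * adj(fwd(u)).reshape(shape) + lam * (dxT(dx(u)) + dyT(dy(u)))
        return out.ravel()
    A = LinearOperator((mu.size, mu.size), matvec=apply_A, dtype=np.float64)

    for _ in range(n_iter):
        if np.linalg.norm(fwd(mu) - data) <= sigma:
            break
        rhs = alpha * adj(Ihat).reshape(shape) + lam * (dxT(dxk - bx) + dyT(dyk - by))
        mu, _ = cg(A, rhs.ravel(), x0=mu.ravel(), maxiter=m)  # inexact update
        mu = mu.reshape(shape)
        gx, gy = dx(mu) + bx, dy(mu) + by
        s = np.sqrt(gx**2 + gy**2)
        shrink = np.maximum(s - 1.0 / lam, 0.0) / np.maximum(s, 1e-12)
        dxk, dyk = shrink * gx, shrink * gy   # isotropic soft threshold
        bx, by = gx - dxk, gy - dyk           # Bregman variable updates
        Ihat = Ihat + data - fwd(mu)          # add back the residual
    return mu
```

The small floor inside the shrinkage (`np.maximum(s, 1e-12)`) guards the division where $s^k = 0$, in which case the thresholded value is zero anyway.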


## III. NUMERICAL IMPLEMENTATION

We present here the results of implementing the TV-penalized reconstruction via split Bregman-based methods. After performing some tests on synthetic data—the Shepp-Logan phantom—we present reconstructions from measurements on two different samples: a steel-aluminum cylinder and a fuel injector. These latter data sets are three-dimensional, as is the quantity of interest: the value of *μ* on each voxel. We reconstruct two-dimensional slices in parallel, where the normal of each slice is the axis of rotation about which angular measurements are taken. This results in horizontal slices of *μ* which may then be stacked vertically to generate the volumetric attenuation coefficient. For the data in Secs. III C and III D, neutron computed tomography was performed at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) CG-1D neutron imaging beamline.^{41} In order to test the performance of inexact split Bregman with sparse angular measurements, we compare the reconstructions using a large number of projections against reconstructions using a reduced set of evenly spaced projections. The software implementation of the algorithm was developed to be deployable on a hybrid high performance computing system. Reconstructions of volumetric data sets of size 2049 × 2049 × 2049 with up to 900 measurements typically take approximately 5 min using split Bregman on Titan at the Oak Ridge Leadership Computing Facility at ORNL, where each slice is processed on a single computational node. Specifically, the software includes a graphical processing unit (GPU) implementation of the forward and adjoint Radon transforms. On smaller systems with GPU accelerators, similar reconstruction times per slice are observed; the total time then becomes a function of the number of cores available to run in parallel since each slice requires one core. On a smaller server with 44 cores (and accelerators), reconstructions typically take under half a day.
Thus, even on smaller machines, reconstructions typically take less time to perform than that required for performing a handful of angular measurements.

### A. Synthetic data

We first perform the reconstruction algorithm on the 257 × 257 Shepp-Logan phantom using 101 evenly spaced angular measurements; the phantom and the associated sinograms from these measurements are shown in Fig. 2. We choose this number of angular measurements to be firmly in the undersampled regime, corresponding to a measurement frequency 1/16th of that obtained from Shannon’s sampling theorem.^{42} First, we focus on establishing the performance of the split Bregman-based reconstruction when we do not fully solve the matrix inversion in (7). To do this, we perform the split Bregman algorithm with different choices of the maximum number of CG steps allowed in computing the *μ* update. The results for varying numbers of CG steps are given in Fig. 3. Regardless of the number of CG steps used in computing the *μ*^{k+1} update, the algorithm converges. However, the quality of the reconstruction after a given number of iterations increases with the number of CG steps. After 200 iterations, there is a difference of 2 orders of magnitude in the accuracy of the reconstruction between using 10 CG steps and 100 CG steps. However, the latter uses roughly 20 times the number of evaluations of a forward or adjoint Radon transform. One can see that after approximately 100 iterations using 10 CG steps per iteration, one achieves an error of around 10^{−4}; a similar error can be reached after approximately 50 iterations using 100 steps per iteration. However, the time needed for performing those 50 iterations is nearly 20 times that needed for performing 100 iterations using 10 CG steps. This is due to the cost of repeated evaluations of the Radon transform and its adjoint dominating the costs of the computations in the Bregman algorithm.
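The synthetic setup can be reproduced along the following lines with scikit-image (resizing its 400 × 400 phantom to the 257 × 257 size used in the text; `skimage.transform.radon` is one of several possible discrete Radon transforms and is our choice here):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, resize

# 257 x 257 phantom and 101 evenly spaced angles over 180 degrees:
# the undersampled regime discussed in the text.
phantom = resize(shepp_logan_phantom(), (257, 257), anti_aliasing=True)
theta = np.linspace(0.0, 180.0, 101, endpoint=False)
sinogram = radon(phantom, theta=theta)  # columns index the angles
```

The resulting sinogram is the synthetic data fed to the reconstruction; undersampling in the angular variable is controlled entirely by the length of `theta`.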

Besides choosing the number of CG steps to use per iteration, the other primary parameter to be selected is *α*. The method of selecting the proper weight between the fidelity and regularization terms is in general an involved process beyond the scope of this work; in many problems, it remains more of an art relying on the expertise of the user. We show here the results of performing the split Bregman algorithm with different values of *α*; the results are shown in Fig. 4. As expected, with too large an *α*, the fidelity term in the objective in (3) dominates and some high frequency artifacts can begin to appear, as seen in Fig. 4(j). With too small an *α*, the regularizer dominates the reconstruction, suppressing some features, such as the smaller features in the phantom, as seen in Fig. 4(b).

We also perform the same tests in the case of noisy sinograms. The sinograms are corrupted by additive white noise; the resulting data have a signal-to-noise ratio (SNR) of 40 dB. We see in Fig. 5 that the added noise does not affect the ability of the algorithm to converge, regardless of the number of CG steps used in each *μ* update. In Fig. 6, we see that, again, varying *α* changes the reconstruction in the same way as in the noiseless case. However, the additional noise in the given data results in the need to change the scale of *α*. At lower SNR levels in the sinograms, the range of *α* giving reasonable reconstructions becomes narrower and thus harder to find. In such cases, preprocessing of the sinograms becomes necessary, in our experience.
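The noising step can be sketched as follows, assuming additive white Gaussian noise scaled to a prescribed SNR (the function name is ours):

```python
import numpy as np

def add_white_noise(sinogram, snr_db, seed=0):
    """Add white Gaussian noise so the result has the given SNR in dB."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(sinogram**2)
    noise_power = signal_power / 10.0**(snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(noise_power), sinogram.shape)
    return sinogram + noise
```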

### B. Preprocessing

We perform two main preprocessing steps on the non-synthetic data below before solving the tomographic inverse problem (2). In order to reduce streaking in the reconstructed *μ*, we apply a filter to ln(**I**/**I**_{0}) which replaces any pixel sufficiently deviating from the mean of its 8 neighbors with that mean, as in Ref. 43. The second step corrects for possible shifts in the axis of rotation. In the presence of errors in the rotation, reconstructions using filtered back-projection may exhibit “tuning fork”-type artifacts.^{44} Anecdotally, the algorithm in Sec. II often treats such artifacts as features of the data to be preserved, rather than as noise to be suppressed. We thus apply to the sinograms the correction algorithm presented in Ref. 44.
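The first preprocessing step can be sketched as an 8-neighbor outlier filter (the threshold parameter and function name are our choices; Ref. 43 should be consulted for the precise deviation criterion):

```python
import numpy as np
from scipy.ndimage import convolve

def despeckle(img, thresh):
    """Replace pixels deviating from the mean of their 8 neighbors
    by more than `thresh` with that neighborhood mean."""
    kernel = np.ones((3, 3)) / 8.0
    kernel[1, 1] = 0.0                      # exclude the center pixel
    nbr_mean = convolve(img, kernel, mode='nearest')
    out = img.copy()
    bad = np.abs(img - nbr_mean) > thresh
    out[bad] = nbr_mean[bad]
    return out
```

Applied to ln(**I**/**I**_{0}), isolated hot or dead detector pixels are removed before reconstruction while smooth regions pass through unchanged.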

### C. Cylindrical sample

First, we consider a cylindrical geometry sample made with two metals, aluminum surrounding a stainless steel rod, which have different neutron attenuations. This sample was chosen as its geometry provides a reasonable benchmark along horizontal slices. A schematic of the sample is shown in Fig. 7. The cylindrical sample was rotated with a step angle of Δ*θ* = 0.5° between 0° and 182.6°; at each angle, a neutron radiograph with a 90 s acquisition time was collected. This step angle was chosen in order to obtain a reasonably good reconstruction via filtered back projection using a parallel beam (an approximation that is valid for a neutron beam with low divergence); despite the undersampling in the angular variable (with respect to the Nyquist rate), the resulting high-frequency artifacts are relatively minor.

We show in Fig. 8 the attenuation obtained after applying either a filtered back projection, the Simultaneous Algebraic Reconstruction Technique (SART) (using the implementation of Ref. 45), or the algorithm from Sec. II to a horizontal slice. In each case, we expect an outer annulus of low attenuation (corresponding to the aluminum) surrounding a disc of high attenuation (corresponding to the steel). The reconstructions in the first row are performed using all available portions of ln(**I**/**I**_{0}). In Fig. 8(a), we show the result of using a filtered back projection, while in Fig. 8(b), we show the result of using SART. In Fig. 8(c), we show the result of using 200 split Bregman updates and 5 conjugate gradient steps to approximate the update in *μ*^{k+1}; we set *α* = *λ* = 1 in this case. The images are in agreement, as expected. However, looking at the values of each along the green dashed lines, we see that the filtered back projection gives some high frequency artifacts; this is shown in Fig. 8(d). Even with a spacing of Δ*θ* = 0.5°, the entire image is under-resolved with respect to the Shannon sampling theorem, as the sinograms contain 2048 pixels per projection (with the cylindrical sample occupying 900 of them), which explains the presence of small oscillatory artifacts.

The same procedure is performed in the second row of Fig. 8; however, we use an *M* which selects every 16th angle only, corresponding to uniformly reducing the number of angular measurements taken for a reconstruction with a spacing of Δ*θ* = 8.8°. This undersampling results in a filtered back projection with significant artifacts, as is seen in Fig. 8(e). The reconstruction from SART, seen in Fig. 8(f), also exhibits significant high frequency artifacts, while also producing blurring at the annular interface; further iterations using this technique continued to produce reconstructions with such artifacts. However, the total variation penalty nearly completely suppresses such artifacts, resulting in the reconstruction shown in Fig. 8(g). The improvement can be seen even more clearly in the profiles of each reconstruction in Fig. 8(h). Here we see that the under-resolved angular measurements lead the reconstruction from FBP to suffer from large oscillations in the attenuation coefficient and the SART reconstruction to blur the interfaces. Meanwhile, the TV reconstruction is again able to suppress these artifacts and recover the expected geometry of the sample.

### D. Gasoline direct injector

We also perform reconstruction on a more complex geometry, a gasoline direct injection (GDI) fuel injector, which has seen increased deployment in spark ignition engines.^{46,47} The physics of GDI fluid dynamics is strongly dependent on native injector geometry, manufacturing tolerance, and wear. To improve the design, production methods, and materials of GDI injectors, a strong understanding of the internal “as-produced” geometries is required. Specifically, internal injector geometry dramatically affects the fuel spray and associated system behavior.^{48,49} The unique capability of neutrons to penetrate the injector and visualize hydrogen-rich fuel flow enables the connection of the internal and external flows to more fully describe the physics of the fuel spray.

In Fig. 9, we show the reconstructed attenuation [via FBP and solving Eq. (4)] for a fuel injector along a horizontal slice close to the base of the central rod in the injector. Again, we first use all angular measurements taken (corresponding to a sampling rate of Δ*θ* = 0.19°) and perform FBP to obtain the attenuation in Fig. 9(a); as before, this step angle is chosen so that FBP yields a good reference reconstruction. We see in Fig. 9(b) that applying instead TV-based regularization (using the same parameters as above) gives an attenuation which largely agrees with the FBP reconstruction; this is also clear in comparing the profiles of each in Fig. 9(c). Next, we again consider the reconstructions one gets using subsampled angular data. We now take an angular sampling rate of Δ*θ* = 3.1°. The resulting FBP-derived attenuation is shown in Fig. 9(d). Here we see again high frequency artifacts, especially near the outer edge of the sample. Although they appear to be of relatively low magnitude, the effect of these artifacts can be more easily seen in the profile of the attenuation in Fig. 9(f). We also obtain an attenuation via split Bregman; this is shown in Fig. 9(e), which agrees largely with the attenuations obtained from the full set of angular measurements. We again see the reduction of artifacts, especially the oscillations in the background near the edge of the injector.

We next repeat the reconstruction of the attenuation via FBP and TV across all horizontal slices; this was performed using both the full set of angular measurements and the subsampled (again with Δ*θ* = 3.1°) data. In Fig. 10, we use all of these slices to generate a volumetric attenuation; a vertical cross section down the center of the resulting reconstruction of *μ* is shown. We see again agreement between FBP [in Fig. 10(a)] and TV [in Fig. 10(b)] when all measurements are available. However, in Fig. 10(c), we see that using FBP on the slices in the case of subsampled data results in the high frequency artifacts noted above throughout the volume. In keeping with the above, the TV regularization suppresses this throughout the volume, as seen in Fig. 10(d).

Finally, we demonstrate the ability of TV-based reconstruction to preserve fine detail structures in the case of extremely undersampled data. In Fig. 11, we show the results of reconstructing a horizontal slice of the fuel injector at its nozzle; a detail of the central 202 × 202 portion of the slice is shown. We are interested in recovering the positions of the channels in the nozzle of the injector under extreme subsampling. Our motivation is that in applications where dynamic images are required, acquisition times increase; it may be possible to (partially) offset this by reducing the required number of angular measurements.

We show the result of performing FBP using an angular spacing of Δ*θ* = 0.19° in Fig. 11(a). If we instead take only 15 angular measurements, with a spacing of Δ*θ* = 12.8°, the attenuation coefficient from FBP, shown in Fig. 11(b), suffers from the coarseness of the data. Significantly, the locations and sizes of the channels are largely obscured by the large visible artifacts.

We show the result of taking this coarse data and performing TV-based regularization in Fig. 11(c). Letting *α* = 20, we take a maximum of 10 conjugate gradient steps to calculate each update in *μ*; 200 Bregman iterations are used. We see that while the TV-based reconstruction exhibits staircasing, seen in the shape of the channels and the outer ring (generated by neutron refraction), the position of the channels is preserved and given a stronger contrast than with FBP, while the significant artifacts present in the FBP reconstruction are removed. We note that staircasing is not unexpected: the extremely low resolution in the angular measurements (a reduction by a factor of 64 from the reference) provides very coarse data for fitting. In general, it is unclear at which point angular undersampling will result in poor reconstructions. At fine spatial resolutions, the high frequency artifacts which undersampling introduces may appear as features near regions where the attenuation coefficient has low contrast with the background medium. In this case, the low rank of the fidelity term in (3) will lead to poor reconstructions. However, the step angle where this occurs will depend on the geometry of the sample being measured. Further complicating matters, the choice of *α* can often dramatically alter the quality of the reconstruction. As mentioned previously, choosing *α* remains an art; in some cases, an appropriate choice of the parameter will suppress the high frequency artifacts without suppressing low-contrast features. For more general materials, therefore, the maximum step angle needed for a good reconstruction remains an open question.

## IV. CONCLUSIONS

Total variation-based reconstruction is well suited to neutron-based computed tomography, reducing or removing artifacts which arise from applying filtered back projection methods to angularly under-resolved measurements. Inexact split Bregman is especially suitable for the efficient solution of the total variation-penalized problem: only a very small number (fewer than 10) of conjugate gradient steps is needed to generate an acceptable update direction, reducing the computational overhead of the iterative procedure by reducing the number of calls to the Radon transform and its adjoint. This reconstruction technique is especially effective in situations where the number of angular measurements is limited due to time constraints or difficulties with the sampling environment; filtered back projections generate significant high-frequency artifacts in such cases which are not present in the total variation-based reconstructions. This holds even in situations where an extremely limited number of angular measurements is taken. In these cases, staircasing is inevitable, but the ability of TV-based reconstruction to identify the location of features even in the severely under-resolved case suggests its suitability for future problems in more dynamic neutron-based computed tomography applications.

## ACKNOWLEDGMENTS

Part of this research conducted at the High Flux Isotope Reactor was sponsored by the Scientific User Facility Division, Office of Basic Energy Sciences, U.S. Department of Energy. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).