This article strives to expand on existing work to demonstrate advancements in data processing made available by event mode measurements. Most spallation neutron sources in the world have data acquisition systems that provide event recording. The new science that is enabled by utilizing event mode has only begun to be explored. In the past, these studies were difficult to perform because histogrammed data forced a choice between large chunks of time or a large number of files. With event based data collection, data can be explored and rebinned long after the measurement has completed. This article will review some of the principles of event data and how the method opens up new possibilities for in situ measurements, highlighting techniques that can be used to explore changes in the data. We also demonstrate the statistical basis for determining data quality and address the challenge of deciding, while the measurement is underway, how long to measure. Finally, we demonstrate a model independent method of grouping data via hierarchical clustering that can be used to improve calibration, reduction, and data exploration.

By utilizing time-of-flight methods, spallation neutron sources (SNSs) can continuously gather much wider energy-dependent spectra than monochromatic approaches. These time-of-flight sources are increasingly reliant upon the so-called event based data measurement modes,1 where the detection of each individual neutron count is recorded, as opposed to being recorded only within a histogrammed spectrum. Although the individual events are later rebinned into classical histogram representations (e.g., measured intensity vs. d-spacing), the event based data format remains stored at the facility such that it can be re-reduced or examined in new ways at a later date.

An obvious advantage of event based data modes is the ability to rebin and examine the measured scattering in different schemes ex post facto, which can be particularly useful for in situ studies. For example, if a material undergoes a structural change or transition at an unexpected time (e.g., during a heating ramp), the data can be parsed selectively to identify scattering before and after such a change rather than being forced into a single collectively merged spectrum. There are numerous such recent examples from the NOMAD2 and POWGEN3 beamlines at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL).4–8 Beyond slicing data in time, scattered data can be reduced according to the sample environment condition recorded at the time of the measurement (referred to herein as a sample log) such as temperature, pressure, applied electric field, etc. Event based data have the potential to be utilized in advanced data reduction and analysis, particularly through the use of machine-learning methods at the beamline, for example, automatic grouping of measured data via hierarchical clustering.

In this paper, we demonstrate some of the fundamental considerations in efficiently grouping event based data, introducing the concept of “fat events.” We also review the underlying statistics which govern the ultimate data quality for neutron diffraction measurements and correlate these with considerations of measurement time. Finally, we demonstrate a model-independent method of automatically grouping spectra and its utility for both error checking and data clustering.

The Poisson distribution describes the probability, P, of observing a given number of events (i.e., ν detected neutron counts) within a given time period if the average number of events expected in such a time period is μ such that

P_\mu(\nu) = \frac{e^{-\mu}\,\mu^{\nu}}{\nu!}.
(1)

This is a limiting form of the binomial distribution in the case of a large number of trials. The variance of the Poisson distribution is simply σ2 = μ. Note that in relation to event based neutron scattering data, this will hold true only for raw detector counts and not necessarily for the fully reduced data.
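As a minimal numerical sketch (not part of the original analysis), Eq. (1) and the σ2 = μ property can be checked with synthetic Poisson draws; the expected count μ and the sample size are arbitrary choices here:

```python
import numpy as np
from math import exp, factorial

def poisson_pmf(nu: int, mu: float) -> float:
    """Probability of observing nu counts when mu are expected, Eq. (1)."""
    return exp(-mu) * mu**nu / factorial(nu)

rng = np.random.default_rng(42)
mu = 100.0                       # average number of events expected
samples = rng.poisson(mu, size=100_000)

print(poisson_pmf(100, mu))      # probability near the peak of the distribution
print(samples.mean())            # approximately mu
print(samples.var())             # approximately mu, i.e., sigma^2 = mu
```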

A correct characterization of “time period” is required for this representation to make sense. The number of detected neutrons depends not only on the scattering cross section but also on the incident flux at the sample. This is typically accounted for by dividing data by a characteristic of the number of incident neutrons, such as the integrated proton charge (e.g., micro-amp hours) or integrated incident monitor flux. In effect, “time” speeds up or slows down depending on the incident flux on the sample. Hence, many data acquisition systems will characterize a single measurement by the accumulated proton charge on the target, as instantaneous beam power delivered to the target at a spallation neutron source frequently varies.

It follows naturally that any observed value which obeys a Poisson distribution will have a measured uncertainty that falls as T−1/2, where T is the measurement period.9 As both the number of neutron counts at detectors and the accumulated incident flux increase with time, neutron scattering event detection can, to first order, be considered to behave according to Poisson statistics. This is the origin of the general beam line adage that one must measure 4 times as long to improve statistics by a factor of 2. Note that there are potentially other systematic sources of error in a neutron scattering measurement, and this rule only applies to the improvement of uncertainties that are derived from quantities obeying Poisson statistics.10
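The T−1/2 behavior can be illustrated with synthetic counts at an assumed constant rate; the rate and the measurement periods below are invented for demonstration, and each fourfold increase in T roughly halves the relative uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 50.0  # assumed constant count rate (counts per second)

for T in (100, 400, 1600):                  # measurement periods (seconds)
    counts = rng.poisson(rate * T)
    rel_err = np.sqrt(counts) / counts      # sigma/N for Poisson counts
    print(f"T={T:5d} s  counts={counts:6d}  relative error={rel_err:.4f}")
```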

With neutron event mode data collection,1,11 the time-of-flight, detector identifier, and wall clock time of the proton pulse with which the event is associated are recorded for every event. This last value is commonly referred to as the event’s pulse time. Using the pulse time of every event and correlating it with the sample environment logs, one can split or slice the data after the acquisition is completed. The SNS at ORNL operates at a nominal 60 Hz. This means that time slicing schemes that are significantly larger than 1/60 s can employ this simple approach. For faster varying information (e.g., high-frequency electric fields, pulsed magnets), not only must a correction be made to calculate the time when the neutron was at the sample position, but a correction must also be made for the incident flux on the sample. However, most sample environments have a characteristic time scale which is much larger than an individual time-of-flight frame associated with a proton pulse. In this contribution, we focus on slicing data from these “slower” sample logs.
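A minimal sketch of this kind of slicing is given below; the event arrays, the once-per-minute temperature log, and the selection window are hypothetical stand-ins rather than any facility's actual data interface:

```python
import numpy as np

# Hypothetical event arrays: one entry per detected neutron.
tof = np.array([5200.0, 8100.0, 4400.0, 9900.0])   # time-of-flight (microseconds)
pixel = np.array([17, 923, 17, 406])               # detector identifier
pulse_time = np.array([0.5, 120.2, 475.9, 580.1])  # pulse time (s, wall clock)

# Hypothetical sample log: temperature recorded once per minute.
log_time = np.arange(0.0, 601.0, 60.0)
log_temp = np.array([300, 300, 310, 320, 330, 340, 350, 350, 350, 350, 350.0])

# Keep only events whose interpolated temperature lies in a window of interest.
event_temp = np.interp(pulse_time, log_time, log_temp)
keep = (event_temp >= 345.0) & (event_temp <= 355.0)
print(tof[keep], pixel[keep])
```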

Data stored in raw event mode can be cumbersome to reduce to traditional forms, largely because of the memory requirements associated with the data format. Significant compression and processing speed-up come from storing the data as a sparse histogram of “weighted events,” where each bin represents a given time-of-flight value, with a weight equal to the number of events that fall within that bin.11 Beyond time-of-flight, weight (or counts), and associated uncertainty, each weighted event is associated with the detector pixel at which it was measured. With this strategy, the pulse time is discarded.

For in situ measurements, where reduction schemes may be varied ex post facto, it is useful to introduce an additional data storage scheme that functionally sits between the raw event format and the fully histogrammed weighted events. We here introduce the concept of “fat events,” which are collections of sparse histograms that are only partially compressed against pulse time. As the characteristic timing associated with many sample environment logs varies on the order of seconds (or longer), which is much greater than the associated pulse time measurement tolerance (100 ns), significant gains in compression can be achieved by selecting binning schemes of weighted data in accordance with the characteristic time of sample environment log variation, which naturally encompasses many individual event pulse times. Thus, the “fat event” representation is a collection of sparse histograms, one for each effective pulse time. This additional field for pulse time provides savings in memory over raw events while still allowing the data to be sliced later. Figure 1 is a modified version of Fig. 3 from the work of Peterson11 with additional lines for fat events with varying wall clock tolerances. The vertical axis describes a “relative compression factor,” which is the ratio of the number of compressed events to the number of uncompressed (or raw) events (i.e., compressed/uncompressed). This is limited to be between zero (no events at all) and one (each event is unique and cannot be compressed).
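A simplified sketch of this compression is shown below; the tolerances, the synthetic single-peak event stream, and the function name compress_to_fat_events are illustrative assumptions, not the production implementation:

```python
import numpy as np

def compress_to_fat_events(tof, pixel, pulse_time, tof_tol=0.5, wall_tol=60.0):
    """Sketch of 'fat event' compression: weighted events are keyed by detector
    pixel, a time-of-flight bin of width tof_tol (microseconds), and a coarse
    pulse-time bin of width wall_tol (seconds)."""
    keys = np.stack([pixel,
                     np.floor(tof / tof_tol),
                     np.floor(pulse_time / wall_tol)], axis=1)
    uniq, weights = np.unique(keys, axis=0, return_counts=True)
    return uniq, weights  # the uncertainty of each fat event is sqrt(weight)

rng = np.random.default_rng(1)
n = 100_000
tof = rng.normal(5000.0, 5.0, n)      # events concentrated in a single peak
pixel = rng.integers(0, 4, n)         # four detector pixels
pulse = rng.uniform(0.0, 3600.0, n)   # one hour of pulse times

uniq, weights = compress_to_fat_events(tof, pixel, pulse)
print(f"relative compression factor: {len(uniq) / n:.3f}")
```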

FIG. 1.

The data presented are from a water sample (in an Al holder) measured through the diffraction detectors of the VISION instrument12 as a function of measurement time with constant incident flux. The different wall clock tolerances are plotted for the detector pixels kept separately (a) and for the detectors time focused into two back-scattering and 90-degree “banks” (b). At a value of one, the uncompressed line [not visible in (a)] represents storing all of the events separately. The black dashed compressed line is the minimum number of weighted events needed to store the information, which comes from ignoring the wall clock. The compression factors are different between (a) and (b) because there are not enough events in a single pixel to make the compression worthwhile. This was modified from Fig. 3 of Ref. 11.


As an example of fat events, imagine an experiment where all of the neutrons are observed in a single narrow peak. In this case, each newly arrived event’s time-of-flight will be the same as that of an existing weighted event. The existing weighted event’s weight and uncertainty would be updated, and the new event discarded, as it is now accounted for.

Empirically the relative compression factor follows the form of

\frac{A_2}{T} + \frac{A_1}{\sqrt{T}} + A_0
(2)

with T being the wall-clock time and the constants Ai being freely fit. The main point is that the compression improves proportionally to T−1/2. This is in agreement with the traditional view that neutron scattering adheres to Poisson statistics as described in Sec. II. An important feature of compressing data is that the data will only compress if new events have the same time-of-flight, within tolerance, as data that are already recorded. Compressed data require fewer unique time-of-flight values to describe them.
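Assuming the reconstructed form of Eq. (2), such a curve is straightforward to fit by least squares; the (T, compression factor) pairs below are hypothetical values of the kind that could be read off Fig. 1:

```python
import numpy as np
from scipy.optimize import curve_fit

def compression_model(T, a2, a1, a0):
    # Empirical form of Eq. (2); the A_i are freely fit constants.
    return a2 / T + a1 / np.sqrt(T) + a0

T = np.array([60.0, 120.0, 300.0, 600.0, 1200.0, 2400.0, 3600.0])
factor = np.array([0.62, 0.47, 0.33, 0.25, 0.19, 0.15, 0.13])

popt, pcov = curve_fit(compression_model, T, factor, p0=(1.0, 1.0, 0.1))
print("A2, A1, A0 =", popt)   # the A1/sqrt(T) term dominates at large T
```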

Assume that a sample environment log value (e.g., temperature) has a time dependence given by L(t). For older, histogram based neutron data acquisition systems, it is common to record these log values at a single point in the total measurement time (often either at the beginning or at the end of the measurement). In event based measurements, one typically records L(t) over all times to allow for later data filtering. This section will explore different ways of summarizing the sample environment log value using three different techniques: the simple mean (μ), the time averaged mean (μT), and the flux averaged mean (μP).

Since the records are made at discrete times ti, a common way to calculate the simple mean, μ, and variance, σ2, of L(t) is by inspecting the sample log values themselves,13 

\mu = \frac{1}{N}\sum_{i}^{N} L_i
(3)

and the associated variance10 

\sigma^2 = \frac{1}{N}\sum_{i}^{N} \left(L_i - \mu\right)^2.
(4)

Here Li = L(ti). This will give the wrong result for quantities that are not recorded with a constant frequency. Instead, the time averaged mean will produce a more accurate result

\mu_T = \frac{\int_0^T L(t)\,dt}{\int_0^T dt}
(5)

and the corresponding variance

\sigma_T^2 = \frac{\int_0^T \left(L(t) - \mu_T\right)^2 dt}{\int_0^T dt}.
(6)

Filtering, or splitting up the measurement, can be accounted for as well by introducing a masking function, M(t). Here, we assume that the mask will have values of either 1 (when the neutron events are relevant) or 0 (otherwise) and is applied to both the neutrons and the sample environment logs. In the presence of a mask, Eqs. (5) and (6) should be rewritten by replacing dt with M(t)dt.
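A sketch of Eqs. (5) and (6) for a discretely recorded log, including the optional mask, is given below; the log values are invented to show how unevenly spaced records skew the simple mean while leaving the time average unaffected:

```python
import numpy as np

def time_average(t, L, M=None):
    """Time-averaged mean and variance of a sample log, Eqs. (5) and (6),
    with dt replaced by M(t) dt when a 0/1 mask is supplied."""
    M = np.ones_like(t) if M is None else M
    norm = np.trapz(M, t)
    mu_T = np.trapz(M * L, t) / norm
    var_T = np.trapz(M * (L - mu_T) ** 2, t) / norm
    return mu_T, var_T

# A burst of densely spaced records around t = 2 biases the simple mean.
t = np.array([0.0, 1.0, 2.0, 2.1, 2.2, 2.3, 10.0])
L = np.array([200.0, 200.0, 200.0, 205.0, 205.0, 205.0, 200.0])
print("simple mean:", L.mean())
print("time average:", time_average(t, L)[0])
```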

In real situations, the underlying measurement is tied to the incident flux Φ(t) rather than to a clock on the wall. Frequently, experimenters will normalize the diffraction pattern by the total incident flux but will use a time-averaged value of a log. This is inconsistent, but generally of minor consequence. To introduce the flux weighted average of a log, first we must define the normalized flux

\phi(t) = \frac{\Phi(t)\,M(t)}{\int_0^T \Phi(t)\,M(t)\,dt}.
(7)

Using this function allows for determining mean and variance using traditional techniques for continuous variables13 

\mu_P = \int_0^T \phi(t)\,L(t)\,dt
(8)

and the associated variance

\sigma_P^2 = \int_0^T \phi(t)\left(L(t) - \mu_P\right)^2 dt.
(9)

These reduce to the time average and variance in the case of constant flux, Φ(t) = P, using all of the recorded information, i.e., M(t) = 1 for 0 ≤ t ≤ T1 ≤ T and zero elsewhere. Then the normalized flux is simply

\phi(t) = \frac{P\,M(t)}{\int_0^{T_1} P\,dt} = \frac{M(t)}{T_1}.
(10)

The mean and variance are then of the expected form

\mu_P = \int_0^T \frac{M(t)}{T_1}\,L(t)\,dt = \frac{1}{T_1}\int_0^{T_1} L(t)\,dt = \mu_T,
(11)

and similarly,

\sigma_P^2 = \frac{1}{T_1}\int_0^{T_1} \left(L(t) - \mu_T\right)^2 dt = \sigma_T^2.
(12)

For the case of constant incident flux, this is the time average value and the time average variance.
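The flux weighted statistics of Eqs. (7)–(9) can be evaluated in the same discretized fashion; the step-like log and flux profiles below are hypothetical, and the constant-flux call shows the reduction to the time average:

```python
import numpy as np

def flux_average(t, L, flux, M=None):
    """Flux-weighted mean and variance of a sample log, Eqs. (7)-(9)."""
    M = np.ones_like(t) if M is None else M
    phi = flux * M / np.trapz(flux * M, t)   # normalized flux, Eq. (7)
    mu_P = np.trapz(phi * L, t)
    var_P = np.trapz(phi * (L - mu_P) ** 2, t)
    return mu_P, var_P

t = np.linspace(0.0, 3.0, 301)
L = 200.0 + 5.0 * (t > 1.25)                 # step in the log value
flux = np.where(t < 1.0, 1.0, 1.0 / 3.0)     # beam power drops to one third
print("flux weighted:", flux_average(t, L, flux)[0])
print("constant flux:", flux_average(t, L, np.ones_like(t))[0])
```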

We demonstrate with an example where L(t) is taken as the total measured neutron counts per second (Fig. 2, middle) over all detectors on the POWGEN instrument during the initial run cycle turn-on period, when the delivered power to the target is being slowly ramped up (Fig. 2, top). To first order, there is a linear relationship between proton power on the target and neutrons generated, as seen by the normalized ratio of these two quantities (Fig. 2, bottom). Note that this behavior deviates slightly at very early times (t < 1.5 h) when the incident spectrum generated by the moderator is not yet in equilibrium (observed as an increase in delivered neutrons per pulse).

FIG. 2.

The proton charge per second, neutron counts per second, and the ratio of the two (normalized to one) as measured on POWGEN using a diamond sample. The data were the first measurement after a long summer shutdown while various parameters of the accelerator and moderator were varying. This provides the opportunity to verify that data change with those conditions.


Using the counts per second as L(t), one can see the difference in employing the various averages in Fig. 3. As with many other examples in this article, when the incident neutron flux is constant, all three measurements of the uncertainty follow a trend of T−1/2.

FIG. 3.

Cumulative average and uncertainties of the neutron counts per second shown in Fig. 2 according to Eqs. (3)–(6), (8), and (9). Interpret the values to be the average number of counts per second up to that point. μ and μT have a small offset due to the brief beam outage at the end of the first hour.


Since the neutron count rate was recorded at a constant rate of once per second, it is not surprising that the simple average μ and time average μT are the same (green and blue lines, respectively). The incident flux weighted average has a different dependence due to the behavior at the beginning of the time interval, when the accelerator was ramping up after an extended shutdown. This is also when the ortho/para-hydrogen ratio is being optimized for the run cycle.14 After the first hour, once the proton charge (and incident spectrum) has stabilized, the values and uncertainties follow the T−1/2 trend. In practice, one would filter out the data from the first hour. Figure 3 highlights how accepting a relatively small amount of unstable data into an otherwise stable measurement has a strong effect. Another example is presented in greater detail in Appendix A.

The question of “how long to measure” in a neutron scattering experiment is often answered by estimating expected scattering rates via the sample mass and expected scattering power (from literature values15,16) or by measuring a characteristic sample for a short time and extrapolating. Other heuristics can be applied, such as measuring to a specific statistical cutoff (e.g., Toby and Egami17 suggest 10⁶ counts over a range of data to obtain an acceptable uncertainty in reduced data) or simply comparing the current sample to how long a related sample was previously measured.

It has long been suggested that neutron diffraction data are over-measured. This is understandable, as the consequences of having insufficient counting statistics often outweigh the potential gains of dividing limited beam time among additional samples. However, if one can determine a way to match the measurement time to the statistical requirements of a sample, this will enable both a more efficient use of beam time and an appropriate degree of confidence in the measured data.

Ideally, the correct criterion for stopping a measurement is when the uncertainty in a parameter from a fit or refinement18 falls below a defined level. For example, consider the ideal case during powder diffraction measurements and Rietveld refinements.19,20 Here, one could imagine that data would be reduced and refined during the measurement itself, and a dynamic review of the resulting Rwp and structure parameters would inform the stopping criteria (e.g., stopping when the Rwp gets below an expected value). At present, this strategy is often impractical: reducing the data, getting it to an appropriate analysis program, automating that program, and feeding information back to the data acquisition system require several different systems to act in concert, in addition to a reasonably good understanding of the structure. Figure 4 shows this strategy applied, long after the measurement was complete, to the first 4 h of the diamond measurement presented in Sec. IV (while the accelerator was changing power), with the results compared to a refinement of the full 12 h measurement that provides the “known” structure. As described above, during the first hours of the measurement, the moderator was out of equilibrium, changing the incident spectrum of delivered neutrons. This is the reason for the variance in the fitted 1 min slice results. While Fig. 4 demonstrates that the parameters themselves are greatly affected, the uncertainties in the parameters are driven more by the number of neutrons counted. Note that the parameters of the refinement vary until the incident spectrum has stabilized and then move quickly to their asymptotic values. Accepting “bad” data for the sake of measuring more neutrons has an adverse effect on the data as a whole.

FIG. 4.

Results of GSAS refinements of the first 4 h of the diamond data. Both refinements of 1 min slices and the cumulative data are shown. The dotted line shows the refinement result of the full 12 h of data. The uncertainties in the parameters are not shown due to being too small. Variance in Uiso ≤1 × 10−4, in scale ≤2200, and in a ≤ 51 × 10−5.


Note that this degree of sensitivity to relative changes in data does not actually require a full Rietveld refinement. Indeed, for metrics defining convergence of data quality, much simpler data inspection methods can potentially be employed. We demonstrate this here by performing single-peak fits on this diamond dataset, specifically looking at the peak at d = 0.48 Å. This peak was fit using a pseudo-Voigt peak shape, in which the peak position, amplitude, and width were refined for each slice of data. The results of the fit are shown in Fig. 5.

FIG. 5.

Results for fitting a pseudo-Voigt function for the peak at d = 0.48 Å. Both refinements of 1 min slices and the cumulative data are shown. The dotted line shows the result of fitting the full 12 h of data.


From these single peak fits, the characteristic changes due to the shifting incident moderator spectrum can be seen to occur through the first hour of data collection. The simplicity of this approach lends itself to more applications while still providing more insight than traditional heuristics. Depending on the particulars of the phenomenon being studied, one could select a parameter of a characteristic peak or any such related metric.
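A sketch of such a single-peak metric is given below; the pseudo-Voigt is written as the common linear mix of a Gaussian and a Lorentzian of equal full width at half maximum, and the pattern around d = 0.48 Å is synthetic rather than the measured diamond data:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(d, amp, center, fwhm, eta, bkg):
    """Linear Gaussian/Lorentzian mix with mixing parameter eta."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((d - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((d - center) / (fwhm / 2.0)) ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss) + bkg

d = np.linspace(0.46, 0.50, 200)             # d-spacing grid (Angstrom)
y = pseudo_voigt(d, 1000.0, 0.48, 0.002, 0.3, 50.0)
y += np.random.default_rng(2).normal(0.0, np.sqrt(y))  # Poisson-like noise

popt, pcov = curve_fit(pseudo_voigt, d, y, p0=(900.0, 0.479, 0.003, 0.5, 40.0))
print("position, amplitude, width:", popt[1], popt[0], popt[2])
print("1-sigma uncertainties:", np.sqrt(np.diag(pcov)))
```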

Demonstrated here is an approach well suited for combining event based data based on how similar they are. It has been described elsewhere21–23 that the exact method for clustering data is much less important than the choice of distance or similarity criterion used to inform that clustering. Once an appropriate criterion is chosen, further processing may be required before it can be used with a particular clustering technique, but the resultant clusters generated (often referred to as the different species) will be generally the same.

With diffraction patterns, it is desirable to employ a distance metric that can overlook differences in overall scale while preserving relative intensities. Additionally, small uniform shifts in peak position, such as those due to temperature dependent unit cell expansion, are often of less interest in comparison to significant changes in structure due to processes such as phase change or decomposition. de Gelder et al.24 explored the concept of spectral similarity metrics in relation to powder diffraction data and introduced a cosine-type metric to describe the similarity between two spectra f(x) and g(x),

S_{fg} = \frac{\int w_{fg}(r)\,c_{fg}(r)\,dr}{\sqrt{\int w_{ff}(r)\,c_{ff}(r)\,dr \,\int w_{gg}(r)\,c_{gg}(r)\,dr}},
(13)

where wfg(r) is a weighting function and cfg(r) is the cross-correlation function,

c_{fg}(r) = \int f(x)\,g(x+r)\,dx.
(14)

The auto-correlation function, cff(r), is calculated when f(x) = g(x) and will have a peak at r = 0. The cross correlation is an asymmetric function where cfg(r) = cgf(−r). The weights in Eq. (13) must be symmetric (wfg(r) = wfg(−r)). Here, they are set to be a triangle function

w_{fg}(r) = 1 - |r|/l
(15)

when |r| ≤ l and zero elsewhere. The parameter, l, is a characteristic distance within which shifted features are not as strictly penalized. Other weighting functions may be selected, but this simple triangle form used here is effective. When two patterns are identical, the similarity function becomes unity, Sfg = 1.

de Gelder’s similarity metric is insensitive to absolute scale fluctuations between data due to the denominator in Eq. (13). The weighting function employed in the cross correlation can accommodate a degree of tolerance in absolute shifting of feature positions, such as those often encountered due to thermal expansion.
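A direct implementation of Eqs. (13)–(15) for two spectra on a common grid might look as follows; the Gaussian test spectra and the window of l = 5 bins are illustrative choices:

```python
import numpy as np

def de_gelder_similarity(f, g, l=5):
    """Cosine-type similarity of two spectra, Eq. (13), using the triangle
    weighting of Eq. (15) over a window of l bins."""
    n = len(f)
    lags = np.arange(-(n - 1), n)                    # lag r of each element
    w = np.clip(1.0 - np.abs(lags) / l, 0.0, None)   # w(r) = 1 - |r|/l
    c_fg = np.correlate(g, f, mode="full")           # sum_x f(x) g(x+r)
    c_ff = np.correlate(f, f, mode="full")
    c_gg = np.correlate(g, g, mode="full")
    return np.sum(w * c_fg) / np.sqrt(np.sum(w * c_ff) * np.sum(w * c_gg))

x = np.linspace(0.0, 10.0, 500)
f = np.exp(-0.5 * ((x - 5.0) / 0.2) ** 2)
g = np.roll(f, 2)                        # the same peak, shifted by two bins
print(de_gelder_similarity(f, f))        # exactly 1.0 for identical patterns
print(de_gelder_similarity(f, g))        # slightly below 1.0
```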

Most clustering algorithms require that the similarity matrix is symmetric and that the diagonal (self-similarity) is unity. Accumulation of numerical errors can cause slight errors along the diagonal that can lead to such algorithms failing. As such, we here explicitly set the diagonal to one and ensure symmetry by averaging with the transpose such that

S_{fg} = \frac{1}{2}\left(S_{fg} + S_{fg}^{T}\right).
(16)

In the cases presented below, the absolute difference introduced by these two consistency operations from the explicit calculation is found to be of the order 10−16 and thus insignificant.

With a similarity metric calculated, it is straightforward to perform hierarchical clustering on the data. Many clustering algorithms consider a distance function rather than a similarity metric, which is related simply as

D_{fg} = 1 - S_{fg}.
(17)

A distance of Dfg = 0 would indicate identical patterns.

Continuing with the first 4 h of the diamond measurement described above, Fig. 6 shows the symmetrized similarity matrix with species determined using the DBSCAN (density-based spatial clustering of applications with noise) method.25–27 This also highlights the period before the incident flux stabilizes as different from the rest of the measurement. With the selected density parameter, those regions are marked as “noise” by DBSCAN. We note the potential of utilizing such an approach to significantly enhance and automate calibration and reduction procedures on neutron diffraction beamlines, as the data are essentially self-interrogating for outliers.
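This clustering step can be reproduced with scikit-learn's DBSCAN on a precomputed distance matrix; the 3 × 3 similarity matrix below is a small hypothetical stand-in (one outlier slice) for the matrix of Fig. 6:

```python
import numpy as np
from sklearn.cluster import DBSCAN

S = np.array([[1.000, 0.999, 0.990],
              [0.999, 1.000, 0.991],
              [0.990, 0.991, 1.000]])
S = 0.5 * (S + S.T)           # enforce symmetry, Eq. (16)
np.fill_diagonal(S, 1.0)      # enforce unit self-similarity

D = 1.0 - S                   # distance, Eq. (17)
labels = DBSCAN(eps=0.003, min_samples=2, metric="precomputed").fit_predict(D)
print(labels)                 # -1 marks "noise" slices, e.g., unstable beam
```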

FIG. 6.

Symmetrized similarity matrix comparing the spectra between the first 4 h of the diamond measurement from Figs. 2–5 (l = 5 bins, 2.1 Å−1 ≤ Q ≤ 63 Å−1) with clustering assigned using DBSCAN (eps = 0.003) as an additional color bar above the figure. Each color represents a different species. The species identified are noise (purple) and stable (red). The axes are labeled with the time from the start of the run to label the spectra that are compared.


As a further example of using hierarchical clustering28,29 to explore data, we apply this method to a temperature dependent dataset of barium titanate (BaTiO3) measured on the NOMAD diffractometer2 from 100 K to 500 K at constant 10 K intervals. Barium titanate is a canonical ferroelectric with a notable difference between its local and long-range structural transitions, which has received significant attention in recent years.30,31 Previous work has shown the utility of employing model-free analysis methods in in situ studies of total scattering via the Combinatorial Appraisal of Transition States (CATS) method,32 which in part utilized a direct residual between data to determine distance metrics. We here take a similar approach by employing de Gelder’s similarity metric. The results of calculating the similarity matrix across the dataset are shown in Fig. 7, where the expected phase transitions (from R3m at low temperature to Amm2, P4mm, and finally Pm3̄m at high temperature) are noted as dashed vertical lines.

FIG. 7.

Symmetrized similarity matrix (l = 5 bins) comparing the spectra between 10 K slices of a BaTiO3 measurement for constant ΔQ bins with 4.5 Å−1 ≤ Q ≤ 15 Å−1. The color bar above is assigned using hierarchical clustering (threshold = 0.052). Each color represents a different species. The axes are labeled with the temperature at which the spectra were measured.


In hierarchical clustering, the different species of data can be determined by setting a characteristic distance cutoff such that data with distances less than the cutoff are clustered together. In the case of the barium titanate data, a cutoff of 0.01 was employed, which produced four distinct species. The calculated distances and resultant clusters are shown as a dendrogram in Fig. 8. A subset of the average scattering patterns, grouped according to these four species and colored to correspond with the dendrogram, is shown in Fig. 9. We note that the method has correctly separated the four distinct phases. This demonstrates the utility of such an approach not only for calibration and reduction procedures on instruments but also for data exploration and model-free analysis.
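A sketch of this cutoff-based species assignment, using SciPy's hierarchical clustering routines, is shown below; the 5 × 5 distance matrix is a hypothetical stand-in for the de Gelder distances of the BaTiO3 dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
from scipy.spatial.distance import squareform

D = np.array([[0.000, 0.002, 0.020, 0.021, 0.040],
              [0.002, 0.000, 0.019, 0.020, 0.041],
              [0.020, 0.019, 0.000, 0.003, 0.038],
              [0.021, 0.020, 0.003, 0.000, 0.039],
              [0.040, 0.041, 0.038, 0.039, 0.000]])

Z = linkage(squareform(D), method="average")   # condensed distances required
species = fcluster(Z, t=0.01, criterion="distance")
print(species)        # e.g., [1 1 2 2 3]: three species at a cutoff of 0.01
# dendrogram(Z) draws the corresponding tree, as in Fig. 8.
```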

FIG. 8.

Dendrogram representation of data presented in Fig. 7. The vertical axis is the de Gelder distance of the spectra. The colors denote how the species were grouped together when using a cutoff distance of 0.01.

FIG. 9.

A subset of the resultant average scattering patterns from the barium titanate data, utilizing the clustering demonstrated in Fig. 8. The selected Q-range is shown to emphasize the scale of the differences across which the employed clustering algorithm successfully discriminated.


Since the initial results of neutron event data processing were published,33,34 experience has provided new insights into utilizing this capability. We herein introduced the so-called fat events and demonstrated their utility for more readily inspecting partially reduced data. A physically meaningful way to average sample environment measurements, accounting for the incident neutron flux, has been shown herein, as has guidance on how to approach the question of when to stop a measurement.

The method of employing hierarchical clustering to automatically discriminate event based data has been demonstrated via neutron diffraction data of diamond and barium titanate samples. This approach provides a model-free technique to aid in data inspection and reduction, particularly when faced with large datasets from in situ experiments. The choice of the distance function can be made such that subtle changes in the data (such as peak shifts due to thermal effects) are ignored in comparison to more dramatic changes such as phase transitions. This work demonstrates the utility of further exploring such machine learning methods for use in scattering techniques, particularly when addressing large datasets and complex materials under in situ study.

The work of D.O. was funded by the BES Early Career Award, Exploiting Small Signatures: Quantifying Nanoscale Structure and Behavior KC04062, under Contract No. DE-AC05-00OR22725. Neutron scattering measurements are sponsored by the User Facilities Division, Office of Basic Energy Sciences, U.S. Department of Energy. Portions of this research used data from the NOMAD, POWGEN, and VISION instruments at the Spallation Neutron Source. The authors would like to thank Ashfia Huq for providing the POWGEN data, Katharine Page for providing the NOMAD data, and John Faber for insights into pattern matching metrics.

This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

In this appendix, the authors provide a second example to assist the reader in understanding Sec. IV using a more traditional sample environment log, temperature. The data provided here are artificial, but a reasonable analog of a real experiment. Here, we simulate a measurement using a sample environment that should fix the temperature at 200 K for 3 h. However, assume an unanticipated hardware failure causes the temperature to increase rapidly to 220 K. The issue is discovered after 15 min, and the temperature is brought back down but only reaches 205 K. Unfortunately, once the temperature issue is fixed, an unexpected issue in the proton accelerator stops neutron production for 30 min before it is brought back online and continues at one third of its initial power, P. Figure 10 shows this information graphically for the simulated temperature log L(t), a simplified flux Φ(t), and mask M(t). As the data were measured at a facility that records data in event mode, the data need not be remeasured.

FIG. 10.

The temperature (a), relative accelerator power (b), and masking function (c) for this example. The blue and red regions in (a) denote the low and high temperature regions for the split data.


Assume that the data from the sample at 220 K are considered outside the prescribed temperature range of interest and are to be discarded. The question remains what to do with the remaining data. The two most obvious approaches would be to either split the data between that measured at 200 K and 205 K or combine it all together to reduce overall uncertainty in the measured spectra. Since the decision will depend heavily on the phenomena being studied, we will present the mean and variance calculations for both the time averaged and flux average methods.

In calculating the time average values of Eqs. (5) and (6), the incident flux, Φ(t), is not required. Therefore, the values can be calculated by splitting into three regions

\mu_T = \frac{\int_0^T M(t)\,L(t)\,dt}{\int_0^T M(t)\,dt} \approx \frac{1}{2.75\,\mathrm{h}}\left[\int_{0\,\mathrm{h}}^{1\,\mathrm{h}} 1 \cdot 200\,\mathrm{K}\,dt + \int_{1\,\mathrm{h}}^{1.25\,\mathrm{h}} 0 \cdot 220\,\mathrm{K}\,dt + \int_{1.25\,\mathrm{h}}^{3\,\mathrm{h}} 1 \cdot 205\,\mathrm{K}\,dt\right]
(A1)

or μT = 203.7 K with an associated σT = 3.0 K. The integrals are not exact because they need to be evaluated numerically, in this case using Newton’s method.

Calculating the flux weighted values of Eqs. (8) and (9) requires splitting into more regions. Here, both Φ(t) and M(t) are used. Using the functions shown in Fig. 10, the denominator of Eq. (7) becomes

\int_0^T \Phi(t)\,M(t)\,dt = \int_{0\,\mathrm{h}}^{1\,\mathrm{h}} P \cdot 1\,dt + \int_{1\,\mathrm{h}}^{1.25\,\mathrm{h}} P \cdot 0\,dt + \int_{1.25\,\mathrm{h}}^{1.5\,\mathrm{h}} 0 \cdot 1\,dt + \int_{1.5\,\mathrm{h}}^{3\,\mathrm{h}} \frac{P}{3} \cdot 1\,dt = P \cdot 1.5\,\mathrm{h}.
(A2)

Then Eq. (8) is

\mu_P = \int_0^T \phi(t)\,L(t)\,dt \approx \frac{1}{P \cdot 1.5\,\mathrm{h}}\left[\int_{0\,\mathrm{h}}^{1\,\mathrm{h}} P \cdot 1 \cdot 200\,\mathrm{K}\,dt + \int_{1\,\mathrm{h}}^{1.25\,\mathrm{h}} P \cdot 0 \cdot 220\,\mathrm{K}\,dt + \int_{1.25\,\mathrm{h}}^{1.5\,\mathrm{h}} 0 \cdot 1 \cdot 205\,\mathrm{K}\,dt + \int_{1.5\,\mathrm{h}}^{3\,\mathrm{h}} \frac{P}{3} \cdot 1 \cdot 205\,\mathrm{K}\,dt\right]
(A3)

or μP = 201.4 K and σP = 2.2 K. From this example, one can see that μT treats all times independently of the incident flux, whereas μP weights the first part of the measurement more heavily because of the higher flux. This is understood because the low temperature region will have accumulated P·1 h neutrons, while the high temperature region will have accumulated only P·0.5 h neutrons. The difference in values is not statistically significant, but the weighting of the different temperatures is more intuitive.

If the decision is made to split the data into the portions above and below the temperature excursion, the equations are evaluated similarly. However, our fictional data will result in μT = μP = 200.0 K with σT = σP = 0.3 K for the lower temperature. These values are equivalent because ϕ(t) = 1 in this region. This is not the case for the higher temperature where, despite the longer measurement time, the incident flux is significantly smaller, ϕ(t) = 2/3. The averages are the same within one significant digit, μT = μP = 205.0 K, but σT = 0.4 K and σP = 0.1 K. The increased precision from splitting into two temperatures comes paired with, effectively, having two measurements.
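For readers who wish to reproduce these numbers, an idealized piecewise-constant version of the example can be evaluated numerically; the results differ slightly from the values quoted above because the simulated log in Fig. 10 ramps between levels rather than stepping:

```python
import numpy as np

t = np.linspace(0.0, 3.0, 3001)                                # time (hours)
L = np.select([t < 1.0, t < 1.25], [200.0, 220.0], 205.0)      # temperature (K)
flux = np.select([t < 1.25, t < 1.5], [1.0, 0.0], 1.0 / 3.0)   # in units of P
M = (L != 220.0).astype(float)                # mask discarding the 220 K data

mu_T = np.trapz(M * L, t) / np.trapz(M, t)    # time average, Eq. (5)
phi = flux * M / np.trapz(flux * M, t)        # normalized flux, Eq. (7)
mu_P = np.trapz(phi * L, t)                   # flux average, Eq. (8)
print(f"mu_T = {mu_T:.1f} K, mu_P = {mu_P:.1f} K")  # ~203.2 K and ~201.7 K
```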

1. G. E. Granroth, K. An, H. L. Smith, P. Whitfield, J. C. Neuefeind, J. Lee, W. Zhou, V. N. Sedov, P. F. Peterson, A. Parizzi et al., “Event-based processing of neutron scattering data at the spallation neutron source,” J. Appl. Crystallogr. 51, 616 (2018).
2. J. Neuefeind, M. Feygenson, J. Carruth, R. Hoffmann, and K. K. Chipley, “The nanoscale ordered materials diffractometer NOMAD at the spallation neutron source SNS,” Nucl. Instrum. Methods Phys. Res., Sect. B 287, 68–75 (2012).
3. A. Huq, J. P. Hodges, O. Gourdon, and L. Heroux, “Powgen: A third-generation high-resolution high-throughput powder diffraction instrument at the spallation neutron source,” Z. Kristallogr. Proc. 1, 127–135 (2011).
4. D. Olds, K. Page, A. Paecklar, P. F. Peterson, J. Liu, G. Rucker, M. Ruiz-Rodriguez, M. Olsen, M. Pawel, S. H. Overbury et al., “A high precision gas flow cell for performing in situ neutron studies of local atomic structure in catalytic materials,” Rev. Sci. Instrum. 88, 034101 (2017).
5. D. Olds, K. V. Lawler, A. A. Paecklar, J. Liu, K. Page, P. F. Peterson, P. M. Forster, and J. R. Neilson, “Capturing the details of N2 adsorption in zeolite X using stroboscopic isotope contrasted neutron total scattering,” Chem. Mater. 30, 296–302 (2017).
6. H.-W. Wang, L. L. Daemen, M. C. Cheshire, M. K. Kidder, A. G. Stack, L. F. Allard, J. Neuefeind, D. Olds, J. Liu, and K. Page, “Synthesis and structure of synthetically pure and deuterated amorphous (basic) calcium carbonates,” Chem. Commun. 53, 2942–2945 (2017).
7. Z. Wu, S. Lan, X. Wei, D. Olds, K. Page, B. Shen, and X.-L. Wang, “Heterogeneous nucleation in Zr-Cu-Al-Ag metallic glasses triggered by quenched-in metastable crystals-a time-resolved neutron diffraction study,” Physica B (published online).
8. D. Abeysinghe, A. Huq, J. Yeon, M. D. Smith, and H.-C. zur Loye, “In situ neutron diffraction studies of the flux crystal growth of the reduced molybdates La4Mo2O11 and Ce4Mo2O11: Revealing unexpected mixed-valent transient intermediates and determining the sequence of events during crystal growth,” Chem. Mater. 30, 1187–1197 (2018).
9. J. Taylor, An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (University Science Books, Sausalito, California, 1997), Chap. 11.
10. B. N. Taylor and C. E. Kuyatt, Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results (Citeseer, 1994).
11. P. F. Peterson, S. I. Campbell, M. A. Reuter, R. J. Taylor, and J. Zikovsky, “Event-based processing of neutron scattering data,” Nucl. Instrum. Methods Phys. Res., Sect. A 803, 24–28 (2015).
12. P. A. Seeger, L. L. Daemen, and J. Z. Larese, “Resolution of VISION, a crystal-analyzer spectrometer,” Nucl. Instrum. Methods Phys. Res., Sect. A 604, 719–728 (2009).
13. P. Bevington, Data Reduction and Error Analysis for the Physical Sciences (McGraw-Hill, Boston, 2003).
14. T. Hügle, E. B. Iverson, and F. X. Gallmeier, “Beam power nonlinearity: Twice the power, but not twice the neutrons?,” J. Phys.: Conf. Ser. 1021(1), 012070 (2018).
15. V. F. Sears, “Neutron scattering lengths and cross sections,” Neutron News 3, 26–37 (1992).
16. V. Sears, “Correction of neutron scattering lengths for electromagnetic interactions,” J. Neutron Res. 3, 53–62 (1996).
17. B. Toby and T. Egami, “Accuracy of pair distribution function analysis applied to crystalline and non-crystalline materials,” Acta Crystallogr., Sect. A: Found. Crystallogr. 48, 336–346 (1992).
18. P. Tian and S. Billinge, “Testing different methods for estimating uncertainties on Rietveld refined parameters using SrRietveld,” Z. Kristallogr. 226, 898–904 (2011).
19. H. M. Rietveld, “A profile refinement method for nuclear and magnetic structures,” J. Appl. Crystallogr. 2, 65–71 (1969).
20. A. C. Larson and R. B. Von Dreele, GSAS Report No. LAUR 86-748, 1994.
21. Y. Iwasaki, A. G. Kusne, and I. Takeuchi, “Comparison of dissimilarity measures for cluster analysis of x-ray diffraction data from combinatorial libraries,” npj Comput. Mater. 3, 4 (2017).
22. T. Johnston, “Automated discovery and analysis of temperature dependent structural changes,” in Smoky Mountains Computational Sciences and Engineering Conference, Gatlinburg, Tennessee, 2017.
23. J. Faber, “Calculation of full-pattern neutron time-of-flight (TOF) powder diffraction patterns,” Powder Diffr. 32, 228–236 (2017).
24. R. de Gelder, R. Wehrens, and J. A. Hageman, “A generalized expression for the similarity of spectra: Application to powder diffraction pattern classification,” J. Comput. Chem. 22, 273–289 (2001).
25. M. Ester, H.-P. Kriegel, J. Sander, X. Xu et al., “A density-based algorithm for discovering clusters in large spatial databases with noise,” in Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD ’96) (The AAAI Press, Menlo Park, California, 1996), Vol. 96, pp. 226–231.
26. R. J. Campello, D. Moulavi, A. Zimek, and J. Sander, “Hierarchical density estimates for data clustering, visualization, and outlier detection,” ACM Trans. Knowl. Discovery Data 10, 5 (2015).
27. See http://scikit-learn.org/stable/modules/clustering.html#dbscan for “Scikit-learn DBSCAN user documentation,” 2018.
28. Z. Bar-Joseph, D. K. Gifford, and T. S. Jaakkola, “Fast optimal leaf ordering for hierarchical clustering,” Bioinformatics 17, S22–S29 (2001).
29. See https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html for “SciPy hierarchy linkage user documentation,” 2018.
30. J. R. Neilson and T. M. McQueen, “Representational analysis of extended disorder in atomistic ensembles derived from total scattering data,” J. Appl. Crystallogr. 48, 1560–1572 (2015).
31. M. Senn, D. Keen, T. Lucas, J. Hriljac, and A. Goodwin, “Emergence of long-range order in BaTiO3 from local symmetry-breaking distortions,” Phys. Rev. Lett. 116, 207602 (2016).
32. D. Olds, P. F. Peterson, M. K. Crawford, J. R. Neilson, H.-W. Wang, P. S. Whitfield, and K. Page, “Combinatorial appraisal of transition states for in situ pair distribution function analysis,” J. Appl. Crystallogr. 50, 1744 (2017).
33. H. Nojiri, S. Yoshii, M. Yasui, K. Okada, M. Matsuda, J.-S. Jung, T. Kimura, L. Santodonato, G. Granroth, K. Ross et al., “Neutron Laue diffraction study on the magnetic phase diagram of multiferroic MnWO4 under pulsed high magnetic fields,” Phys. Rev. Lett. 106, 237202 (2011).
34. K. An, H. D. Skorpenske, A. D. Stoica, D. Ma, X.-L. Wang, and E. Cakmak, “First in situ lattice strains measurements under load at VULCAN,” Metall. Mater. Trans. A 42, 95–99 (2011).