We describe a method of analyzing gate profile data for ultrafast x-ray imagers that allows pixel-by-pixel determination of temporal sensitivity in the presence of substantial background oscillations. With this method, systematic timing errors in gate width and gate arrival time of up to 1 ns (in a 2 ns wide gate) can be removed. In-sensor variations in gate arrival and gate width are observed, with variations in each up to 0.5 ns. This method can be used to estimate the coarse timing of the sensor, even if errors up to several ns are present.

The ultrafast x-ray imagers (UXIs) designed and built by Sandia National Laboratories1 are rapidly replacing microchannel-plate (MCP)-based x-ray diagnostics in many applications at the National Ignition Facility (NIF) and other facilities where high-speed, high-intensity x-ray sources are observed. While MCPs are still faster (<100 ps integration time rather than 1–2 ns), UXIs are attractive because they can collect multiple frames at a single pixel and thus along a single line of sight. Moreover, the gated hybrid CMOS (hCMOS) technology is conceptually simple and does not require high voltage.

These imagers are characterized using many repetitions of a short-pulsed laser to assess “gate profiles.”2 The concept is shown in Fig. 1: we repeatedly trigger the laser and the UXI, varying the trigger time of the sensor relative to the laser until we have collected images from several ns before the sensor is active through several ns after. We use a 532 nm laser with a pulse <50 ps, and we adjust the sensor delay in 100 ps steps to fully characterize the 1–2 ns integration time of the sensor. Images are typically collected every 5–10 s, limited by the response time of the UXI and ancillary electronics rather than by the 1 Hz laser repetition rate.

FIG. 1.

Schematic diagram of the gate-profile method. (Left) Sensor is triggered at multiple times relative to the incident laser. (Top right) Signal at each pixel is recorded and fit to a model of intensity vs time [Eq. (1)]. (Bottom right) The left and right halves of the sensor can be triggered independently. If the right half is delayed by the sensor reset time, some portion of the sensor is always active.


We analyze these data to measure the integration time and reset delay of each frame by fitting the intensity vs time to a model, typically a double-sided sigmoid,

$$I = A\,\frac{1}{1+e^{-(t-t_1)/\tau_1}}\,\frac{1}{1+e^{(t-t_2)/\tau_2}}, \tag{1}$$

where $I$ is the observed intensity, $A$ is the peak intensity, $t_1$ and $t_2$ are the times of the rising and falling half-maxima, respectively, and $\tau_1$ and $\tau_2$ are the rise and fall times, respectively. $(t_2 - t_1)$ is then the full width at half maximum, or the effective integration time (“gate width”), and $(t_1 + t_2)/2$ is the “gate center.” We do this analysis pixel-by-pixel, so we can also assess variations in timing across the sensor.3 In this equation, time is relative: pulsing the laser earlier is equivalent to triggering the sensor later.
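As a concrete sketch, the per-pixel fit of Eq. (1) can be performed with a standard nonlinear least-squares routine. The function and parameter names below are illustrative, not taken from the authors' analysis code, and the data are synthetic:

```python
# Sketch of fitting Eq. (1) at one pixel. Assumed inputs: t (laser-relative
# times, ns) and y (recorded intensity, counts); here both are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def double_sigmoid(t, A, t1, t2, tau1, tau2):
    """Eq. (1): product of a rising and a falling sigmoid."""
    rise = 1.0 / (1.0 + np.exp(-(t - t1) / tau1))
    fall = 1.0 / (1.0 + np.exp((t - t2) / tau2))
    return A * rise * fall

# Synthetic example: a 2 ns gate with 150 ps edges, sampled every 100 ps.
t = np.arange(-4.0, 4.0, 0.1)
y = double_sigmoid(t, 3000.0, -1.0, 1.0, 0.15, 0.15)

popt, _ = curve_fit(double_sigmoid, t, y, p0=[2500.0, -0.5, 0.5, 0.2, 0.2])
A, t1, t2, tau1, tau2 = popt
gate_width = t2 - t1           # effective integration time (FWHM)
gate_center = 0.5 * (t1 + t2)  # "gate center"
```

With noiseless synthetic data, the fit recovers the 2 ns width and the gate center directly; with real data, the fit quality at each pixel can be used to flag pixels where the model fails.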

UXIs are susceptible to significant temporal oscillations in background levels due to inductance effects in the bond wires that connect the sensor to its circuitry.4 These oscillations are evident in the gate profile data: Fig. 2 shows an example of gate profile data for a single frame of an Icarus-2 sensor collected in 2-2-interleaved mode. Each Icarus sensor consists of 512 × 1024 silicon pixels, each 25 μm square. In 2-2-interleaved mode, each pixel collects four frames, integrating each frame for 2 ns and then resetting for 2 ns between frames. The right half of the sensor (divided lengthwise) is delayed from the left half by an additional 2 ns (see Fig. 1).

FIG. 2.

Gate profile of a single Icarus-2 frame for two different configurations. For each, a single image is shown in the upper-right, and intensity vs recorded time (see the text) is plotted for two pixels (labeled ○ and □). (Left) Sensor fully illuminated. (Right) Sensor mostly covered. When the laser arrives before the camera is active at ∼−27 ns, an oscillating signal is observed. When the laser arrives after the camera is active, no signal is observed.


A single sensor image is shown in the upper right of each panel of Fig. 2, and the intensity at two different pixels for that frame is plotted vs “recorded time,” the raw measurement of time between the laser’s trigger signal and the camera’s electronic monitor as recorded on the oscilloscope. Once data are analyzed and fit to Eq. (1), time is frequently referenced to the gate center.

The left panel of Fig. 2 shows a gate profile collected with the sensor (nearly) fully illuminated. At each pixel, what is recorded is the sum of the actual sensitivity, or gate, and the background oscillation due to any light that arrived earlier. The oscillations are similar to—but not exactly—a damped sinusoid, and the intensity cannot accurately be described by Eq. (1) alone.

These oscillations can be reduced if the total signal (and thus photocurrent) is minimized. One example is shown in the right panel of Fig. 2, where the total signal is reduced by covering most of the sensor, accepting laser light only in circular regions that comprise less than 15% of its total area. In this case, oscillations are still present, but their intensity is reduced, and the gate profile data where the sensor is illuminated more accurately reflect the actual temporal sensitivity of the sensor to light. More interestingly, the oscillations persist where the sensor is covered, and there the recorded intensity is an oscillation due entirely to bond-wire inductance.

With a single gate profile measurement, we can accurately measure the temporal sensitivity only for a small fraction of the sensor, making it difficult to know if the entire sensor acts uniformly or if there is variation in the trigger or integration time. The key advance we describe in this paper involves using data from two gate profiles together instead of just one. With two profiles—one in which the active area is substantially blocked and the other in which the active area is substantially open—we can measure both the actual gating function and the background oscillations. We find that by using a pixel-by-pixel fitting strategy, we can extract the time that the camera is sensitive to light for (almost) every pixel.

An example of the analysis method is shown in Fig. 3. We subtract the oscillation-only gate profile, collected with the sensor masked, from the gate profile collected with the sensor illuminated, in which the signal consists of oscillations and gating together. However, the oscillation intensity is reduced when the sensor is partially masked, so we must scale the oscillations to the level they reach in the full-illumination data. In addition, we cannot subtract pixel intensities directly because trigger jitter results in recorded times that are not identical. Instead, we fit the gate profile data to a function and subtract interpolated values. After unsuccessfully trying several physics-based functional forms, e.g., a damped sinusoid, we settled on fitting both the oscillations alone and the oscillations-plus-gating function as a series of Gaussians of alternating sign. (The locations and directions of these Gaussians are shown as arrows in Fig. 3.)
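A minimal sketch of the Gaussian-series fit follows. The three-lobe synthetic oscillation, the number of Gaussians, and their initial placements (one per visible extremum, in the spirit of the arrows in Fig. 3) are assumptions for illustration:

```python
# Illustrative fit of a background oscillation as a series of Gaussians of
# alternating sign. The alternating sign is carried by the fitted amplitudes.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_series(t, *params):
    """Sum of Gaussians; params = (A1, mu1, sig1, A2, mu2, sig2, ...)."""
    y = np.zeros_like(t)
    for A, mu, sig in zip(params[0::3], params[1::3], params[2::3]):
        y += A * np.exp(-0.5 * ((t - mu) / sig) ** 2)
    return y

# Synthetic oscillation: three lobes of alternating sign.
t = np.arange(-8.0, 8.0, 0.1)
true_params = [400, -4.0, 0.8, -250, -1.5, 0.8, 150, 1.0, 0.8]
y = gaussian_series(t, *true_params)

# Initial guesses place one Gaussian at each visible extremum.
p0 = [300, -4.0, 1.0, -200, -1.5, 1.0, 100, 1.0, 1.0]
popt, _ = curve_fit(gaussian_series, t, y, p0=p0)
```

An empirical sum of Gaussians makes no physical claim about the oscillation; it simply provides a smooth interpolant that can be evaluated at arbitrary times, which is what the subtraction step requires.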

FIG. 3.

Gate profile data (symbols) and fits (curve) for a single pixel plotted vs laser arrival time (relative to the time when the sensor is active). (Left) Sensor is fully illuminated (peak intensity ∼3000 counts). (Center) Sensor is mostly masked (peak intensity ∼500 counts). (Right) In order to subtract the oscillations measured in the masked gate profile from the fully illuminated gate profile, the masked fit is scaled (dashed curve) to match the illuminated signal at the first minimum before the gate (−4 ns). Arrows in the left and center panels indicate locations where Gaussians were fit to build up the oscillation function.


The result of this subtraction is shown in Fig. 4. The illuminated and masked gate profile data are shown in the upper portion of the graph along with the scaled intensity of the masked data fit function. When we subtract the oscillation determined from the masked gate profile, we find that the inferred sensitivity (in the lower portion of the graph) fits well to our model of a double-sided sigmoid (in this example with an FWHM of 1.8 ns, a rise time of 175 ps, and a fall time of 140 ps).
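The scale-and-subtract step can be sketched as follows. Here `fit_masked` and `fit_full` stand in for the fitted oscillation functions of time, and the −4 ns matching point is the first minimum before the gate, as in Fig. 3; all names and the toy fit functions are illustrative:

```python
# Hedged sketch of the scale-and-subtract step: the masked-profile fit is
# scaled to match the illuminated signal at the first minimum before the
# gate, then subtracted, leaving the gating function alone.
import numpy as np

def infer_gate(t, I_full, fit_masked, fit_full, t_min=-4.0):
    """Return the illuminated intensity with the scaled oscillation removed.

    t        : laser arrival times (ns, relative to the gate)
    I_full   : measured intensity with the sensor fully illuminated
    fit_masked, fit_full : callables returning fitted intensity vs time
    t_min    : time of the first oscillation minimum before the gate
    """
    scale = fit_full(t_min) / fit_masked(t_min)  # match at pre-gate minimum
    return I_full - scale * fit_masked(t)

# Toy usage with stand-in fit functions and a box-shaped "gate."
t = np.arange(-8.0, 8.0, 0.1)
osc = lambda x: 100.0 * np.cos(2.0 * np.pi * x / 4.0)  # masked-profile fit
full = lambda x: 3.0 * osc(x)                          # illuminated fit (pre-gate)
gate = infer_gate(t, full(t) + 500.0 * (np.abs(t) < 1.0), osc, full)
```

In the toy example, the subtraction removes the scaled oscillation exactly and leaves only the box-shaped gate, which would then be fit to Eq. (1).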

FIG. 4.

The gating function (purple circles) is deduced by subtracting the scaled intensity (dashed blue curve) of the masked oscillation (blue curve with black triangles) from the observed oscillating gate (red curve with black circles). The gate is fitted to Eq. (1) (solid purple curve), and the gate center is defined to be at 0 ns. Images at the labeled times (A–E) are shown in Fig. 5.


Interestingly, the time that the sensor is active is later than the observed peak in intensity because the oscillations peak before the gating. To demonstrate this, we show a few images in Fig. 5, where the images labeled A–E refer to the times indicated in Fig. 4.

FIG. 5.

Individual frames of the fully illuminated gate profile in Fig. 4. Images labeled B, C, and D are uniformly illuminated by the laser. Intensity variations are due to spatially varying background oscillations. Images labeled A and E are not illuminated by the laser. All intensity is due to the oscillating background.


At point A, a high signal level is observed, but no light is present, while at point D, there is high sensitivity to light, but the absolute signal is low because that sensitivity is superimposed on a negative oscillation. We can see this in the data itself: the speckle pattern that indicates laser light is present in images B–D but missing in both image A, which had a high absolute signal level, and image E, which had a nearly zero signal.

We fit the inferred gate (bottom part of Fig. 4) at each pixel to Eq. (1), and here, we show the results for a single Icarus-2 sensor in 2-2-interleaved mode. By doing our analysis pixel-by-pixel, we can measure variations across the sensor in the gate center arrival time $(t_1 + t_2)/2$ (Fig. 6) and the gate width $(t_2 - t_1)$ (Fig. 7). We can also plot the fitted gate profile for each pixel to compare the gate shapes and see variation in rise and fall times across the sensor (Fig. 8).
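The map-building step can be sketched as a loop over a data cube of inferred gates, one time series per pixel. The cube here is a tiny synthetic region with an imposed timing skew rather than a real 512 × 1024 sensor, and failed fits are left as NaN:

```python
# Sketch of the pixel-by-pixel analysis: fit Eq. (1) at every pixel and
# build maps of gate center and gate width. Array shapes are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def double_sigmoid(t, A, t1, t2, tau1, tau2):
    return A / ((1 + np.exp(-(t - t1) / tau1)) * (1 + np.exp((t - t2) / tau2)))

t = np.arange(-4.0, 4.0, 0.1)
ny, nx = 4, 4
# Synthetic cube: a 2 ns gate whose arrival shifts 50 ps per column.
cube = np.empty((ny, nx, t.size))
for j in range(ny):
    for i in range(nx):
        cube[j, i] = double_sigmoid(t, 3000, -1.0 + 0.05 * i, 1.0 + 0.05 * i,
                                    0.15, 0.15)

center = np.full((ny, nx), np.nan)
width = np.full((ny, nx), np.nan)
for j in range(ny):
    for i in range(nx):
        try:
            p, _ = curve_fit(double_sigmoid, t, cube[j, i],
                             p0=[2500, -0.5, 0.5, 0.2, 0.2])
            center[j, i] = 0.5 * (p[1] + p[2])
            width[j, i] = p[2] - p[1]
        except RuntimeError:
            pass  # leave NaN where the fit fails
```

The `center` and `width` arrays correspond to the quantities mapped in Figs. 6 and 7; NaN entries mark pixels where no gate could be fit.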

FIG. 6.

Gate center times for each pixel. Zero is defined as the gate center at the central pixel of the left hemi-sensor in frame 0. Inferred gate shapes for the central pixel in each hemi-sensor are shown in Fig. 8. Variation within each hemi-sensor (“skew”) is shown in Fig. 9.

FIG. 7.

Gate width (FWHM) at each pixel. Integration time can vary by a few hundred ps within each frame and by nearly 500 ps (25%) across all frames.

FIG. 8.

Inferred gate shapes (at a single pixel). The left hemi-sensor (solid curves) is triggered 2 ns earlier than the right hemi-sensor (dashed curves) in each frame. The times at the center of each gate, $(t_1 + t_2)/2$, are shown in Fig. 6.


One of the most informative results is an inference of timing skew, as shown in Fig. 9. We find that the camera is triggered earliest in the bottom left corner (marked a). The triggering starts from the outside on the left- and right-hand sides of the sensor and moves toward the center over a few hundred picoseconds (marked b). Finally, the gate arrives in the upper-center region of the sensor nearly 500 ps after it arrives in the lower left corner of the sensor (marked c), approximately consistent with previous estimates.5 

FIG. 9.

In-sensor timing skew. Gate center arrival time relative to the central pixel in each hemi-sensor. Markers a–d are described in the text.


We can only do this analysis for the pixels that were illuminated in one gate profile and covered in the other. For those pixels that were either illuminated or covered in both gate profiles, we cannot do this analysis, and we interpolate between fitted pixels (marked d).
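A sketch of this interpolation step follows, using NaN to mark pixels that could not be fit (an assumed convention) and a standard scattered-data interpolator:

```python
# Sketch of filling pixels where no fit was possible (illuminated or covered
# in both profiles) by interpolating between fitted neighbors.
import numpy as np
from scipy.interpolate import griddata

def fill_unfitted(timing_map):
    """Interpolate NaN entries of a 2D timing map from fitted pixels."""
    ny, nx = timing_map.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    good = ~np.isnan(timing_map)
    filled = griddata((yy[good], xx[good]), timing_map[good],
                      (yy, xx), method="linear")
    # Linear interpolation leaves NaN outside the convex hull of fitted
    # pixels; fall back to nearest-neighbor there.
    hole = np.isnan(filled)
    if hole.any():
        filled[hole] = griddata((yy[good], xx[good]), timing_map[good],
                                (yy[hole], xx[hole]), method="nearest")
    return filled

# Toy usage: a planar timing map with one unfitted pixel.
m = np.arange(25, dtype=float).reshape(5, 5)  # values 5*row + col
m[2, 2] = np.nan                              # pretend one pixel failed to fit
filled = fill_unfitted(m)                     # planar data, so filled[2, 2] ≈ 12
```

Linear interpolation is exact for planar timing variations; for the structured skew patterns in Fig. 9, it simply provides a smooth estimate between fitted neighbors.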

It is interesting to note that while the circuit and pixels are the same for each of the four frames, the variation in gating across the sensor is not the same for each frame. The delay in gate arrival from the outside to the center of the sensor is consistent with the wiring of the integrated circuit that forms the base of the UXI. However, the variation from top to bottom and also frame to frame is not well understood as a consequence of the circuit design. Future work aims to develop a physically based model to quantify and predict timing skew.

These subtleties in timing may be important for many applications: to use intensities quantitatively, it is important to know when and how long each pixel is active.

One consequence of collecting gate profiles with the full sensor illuminated is that we can use the background oscillations to determine the coarse timing of the sensor, even if the sensor becomes active several ns after it is hit by light.

In Fig. 10, we show an example of the installation of a single Icarus-1 camera at NIF. The experimental design, shown in the left panel of Fig. 10, was to trigger the camera so that the 2 ns x-ray source was on at the same time as the first frame of the camera. In the right panel, the data we collected for the two frames of the camera are shown. No evidence of incident x rays is present on either frame. Instead, only the non-uniform background is present. It was subsequently determined that there was an uncertainty in the facility-cable timing that caused this timing error.

FIG. 10.

Mistimed camera data at NIF. (Left) Experimental design to shine x rays at the same time as frame 1. (Right) Experimental data showed no evidence of x rays.


Fortunately, the non-uniformity of the background looked familiar from our analysis of the gate profiles: it resembled a single gate profile image. Consequently, we were able to determine that the x rays arrived ∼2 ns before frame 1. That is, the camera triggered 2 ns late. In Fig. 11, we show the gate profile data and a simulation (based on the fitted gate profile data) of a 1 ns source at the time indicated in black. The simulated “background” is qualitatively (and even quantitatively) quite similar to the observation, in both frame 1, which was shortly after the x-ray source, and frame 2, which was triggered 4 ns later. We used this estimate to adjust the cable-delay timing model and acquired good quality data on the next shot.
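The simulation of an extended source from the short-pulse gate-profile response can be sketched as a sum (a discrete convolution) of shifted copies of the fitted response over the source duration. The function names and the flat, box-shaped source are assumptions for illustration:

```python
# Sketch of simulating an extended x-ray source from a fitted short-pulse
# gate-profile response by summing shifted copies over the source duration.
import numpy as np

def simulate_extended_source(t, response, t_start, duration, dt=0.1):
    """Average the short-pulse response over a flat source of given duration.

    t        : sample times (ns)
    response : callable, fitted intensity vs laser arrival time
    t_start  : source start time relative to the response's time axis (ns)
    """
    shifts = np.arange(0.0, duration, dt)
    return sum(response(t - t_start - s) for s in shifts) * dt / duration

# Toy response: a unit 2 ns gate centered at 0; a 1 ns flat source starting
# at -2 ns, i.e., before the gate is active.
t = np.arange(-6.0, 6.0, 0.1)
gate = lambda x: ((x > -1.0) & (x < 1.0)).astype(float)
sim = simulate_extended_source(t, gate, -2.0, 1.0)
```

In the real analysis, `response` would be the fitted oscillation-plus-gate function at each pixel, so the simulated image captures the spatially varying background pattern used for the comparison in Fig. 11.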

FIG. 11.

Simulation of experimental data in Fig. 10. Fully illuminated gate profile results were summed to simulate a 1 ns long x-ray source that arrives 2 ns before frame 1 is active.


Although the 1 ns source used in the simulation is not identical to the 2 ns source in our experiment, it is significantly longer than the 50 ps laser pulses from which the gate profile fits were determined, and it was sufficiently close to reality to determine the coarse timing change that enabled data collection. In order to use this method to determine timing with precision, a more exact match to the incident x-ray history would be required. In addition, further work would need to be done to develop metrics for the shape and relative intensity of the background oscillations, rather than relying on qualitative pattern matching.

We have demonstrated a method for quantitatively describing early light-induced background oscillations in ultrafast x-ray imagers that can correct systematic errors in timing by up to 1 ns for a 2 ns sensor. With these background oscillations removed, in-sensor variations in absolute timing and integration time of up to 0.5 ns are measured.

While we presented this result in the context of accurately determining the time when a sensor is active, this work may be equally useful in experimental operations. The phenomena that lead to an oscillating background, i.e., light that arrives before the frame of interest, are often present in laser and plasma physics environments. This is especially true for frames after the first one if the first frame is strongly illuminated. Moreover, many experimental configurations are incompatible with a strategy of minimizing the total photocurrent to minimize background oscillation.

For example, we are using UXIs in an experimental platform for x-ray diffraction of laser-compressed materials. In this use case, the sensors cannot be substantially covered to reduce total photocurrent. Rather, sensors must be left open because diffraction may arrive at any location. In addition, we need to quantify the intensity of diffraction reflections, so any oscillating background must be effectively subtracted. This challenge is the subject of future work.

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 (LLNL-CONF-818080).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. L. D. Claus, M. Sanchez, G. A. Robertson, J. L. Porter, L. Fang, D. Trotter, T. England, A. C. Carpenter, M. S. Dayton, and P. Bhogilal, Proc. SPIE 9966, 99660F (2017).
2. M. Dayton, A. Carpenter, H. Chen, N. Palmer, P. Datte, P. Bell, M. Sanchez, L. Claus, G. Robertson, and J. Porter, Proc. SPIE 9966, 996602 (2016).
3. L. R. Benedetti, J. P. Holder, M. Perkins, C. G. Brown, C. S. Anderson, F. V. Allen, R. B. Petre, D. Hargrove, S. M. Glenn, N. Simanovskaia, D. K. Bradley, and P. Bell, Rev. Sci. Instrum. 87, 023511 (2016).
4. E. R. Hurd, A. C. Carpenter, M. S. Dayton, C. E. Durand, L. D. Claus, K. Engelhorn, M. O. Sanchez, and S. R. Nagel, Proc. SPIE 10763, 107630L (2018).
5. E. R. Hurd, T. Tate, M. S. Dayton, L. D. Claus, C. E. Durand, M. Johnston, J.-M. G. D. Nicola, A. C. Carpenter, and M. O. Sanchez, Proc. SPIE 11114, 1111413 (2019).