Previous velocity interferometers used at research laboratories for shock physics experiments measured target motion at a point, or at many points along a line, on the target. Recently, a two-dimensional (2d) version (2d-velocity interferometer system for any reflector) has been demonstrated using a pair of ultrashort (3 ps) pulses for illumination, separated by 268 ps. We have discovered new abilities for this instrument, by treating the complex output image as a hologram. For data taken in an out of focus configuration, we can Fourier process to bring narrow features such as cracks into sharp focus, which are otherwise completely blurred. This solves a practical problem when using high numerical aperture optics having narrow depth of field to observe moving surface features such as cracks. Furthermore, theory predicts that the target appearance (position and reflectivity) at two separate moments in time is recorded by the main and conjugate images of the same hologram, and is partially separable during analysis for narrow features. Hence, for the cracks we bring into refocus, we can make a two-frame movie with a subnanosecond frame period. Longer and shorter frame periods are possible with different interferometer delays. Since the megapixel optical detectors we use have superior spatial resolution over electron-beam-based framing cameras, this technology could be of great use in studying microscopic three-dimensional behavior of targets at ultrafast time scales. Demonstrations on shocked silicon are shown.
I. INTRODUCTION
An important optical diagnostic for shock physics over many years has been a Velocity Interferometer System for Any Reflector (VISAR).1–4 This measures target motion to high precision using phase shifts of fringes produced by interfering light reflected from the target at two different times, slightly delayed. Until recently, this diagnostic has been limited to measuring motion at points or lines across a target using quasi-continuous wave illumination.1–3
Recently, our group introduced an ultrashort pulse two-dimensional (2d) imaging version of a VISAR.5–10 We have used it at Rochester's Omega Laser system9 and at LLNL's Jupiter Laser system11 (Fig. 1) to measure 2d velocity and reflectivity maps of shocked Si, diamond, and other materials in a snapshot mode. Not only does this have an extra imaging dimension compared to streak camera VISARs, but the spatial resolution is also much higher. The 4000 × 4000 pixel CCD detector we use has many more resolution elements (∼2000 in each dimension, limited by the target lens blur) than does a typical electron beam imaging device such as a streak camera, which might have ∼50 resolution elements across its output phosphor screen.
The higher spatial resolution is needed to resolve fine cracks in the surface of shocked targets undergoing brittle fracture, and to simultaneously record four phase stepped versions of the image, which can be subsequently overlaid accurately to 1-pixel precision to obtain phase and magnitude information. Our apparatus also simultaneously uses a conventional line-imaging streak camera VISAR looking at a portion of the same target. In shots where the 2d-VISAR observes fine cracks, the line-VISAR does not resolve them; they manifest merely as a change in average target reflectivity.
A consequence of using an ungated integrating detector such as a CCD array is that we must use it in a snapshot mode where the time resolution is provided by the illumination through an ultrashort laser pulse of 3 ps (actually a pair of pulses). This is because with current technology we cannot affordably purchase a movie camera that has sufficiently high spatial resolution together with picosecond shutter times. But in principle, a movie camera is what one desires for a fringe detector of an ultimate VISAR, to provide both time history and 2d spatial imaging.
A. Matched-delay interferometers in series
A pair of illumination pulses is used, rather than a single pulse. This is because the 3 ps laser pulse width is much shorter than the interferometer delay (268 ps), so that no interference would result from use of a single ultrashort pulse and a single unequal arm interferometer. To accurately measure the target change in position (velocity), the pulses must arrive at the target separated in time (t1, t2). Yet, for the reflected pulses to produce interference at the detector, they must overlap in time. This dilemma is solved by the use of two matched-delay interferometers in series, before and after the target, which produces four pulses from a single laser pulse (Figs. 2–4). The two inner pulses of the four interfere, creating 50% visibility fringes.
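The pulse bookkeeping can be illustrated with a short sketch (assumed, illustrative numbers; the timing logic follows Figs. 2–4):

    import numpy as np

    # Four echoes of one input pulse: two unequal-arm interferometers in series,
    # with delay tau_a before the target and tau_b after (times in ps).
    tau_a = tau_b = 268.0
    arrivals = np.sort([0.0, tau_a, tau_b, tau_a + tau_b])
    print(arrivals)  # [  0. 268. 268. 536.]: the two inner echoes coincide
    # Only this coinciding inner pair interferes (the 3 ps pulse is far shorter
    # than the 268 ps delay), which is why the fringe visibility is at most 50%.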
The rectangles in Fig. 2 represent glass etalons that delay light while maintaining apparent ray path angles,2,12 by creating a virtual image of the mirror behind it. This so-called field-widened or superimposing interferometer scheme allows high visibility fringes from diffusely scattering targets.
The pair of illumination pulses is created by a first (input or illumination) interferometer that precedes the target, having the same delay as the 2nd (output or detecting) interferometer that follows the target. Changes in target velocity create proportional shifts in the relative timing of the inner pulses that overlap at the detector, creating a corresponding fringe phase shift with the same velocity per fringe (VPF) proportionality as a conventional VISAR. This technique5–7 was originally demonstrated with incoherent white light, then applied to ultrashort laser pulses.9 Other low-coherence illumination, such as chirped pulses, is now feasible for velocity interferometry. (The latter would allow use of wavelength as a new time recording parameter.)
B. Refocusing numerically post-experiment
We have discovered an interesting and useful improvement to the 2d-VISAR: a numerical post-processing step (Fig. 5), easily implemented via Fourier transform, of the complex 2d-image data that is the normal output of the VISAR analysis. That is, we treat the data as a hologram and recover some three-dimensional (3d) information. If the original data were taken in an out of focus condition, we have demonstrated13 the ability to bring narrow features such as cracks back into focus (Fig. 6), post-measurement, when otherwise they are blurred. This is a very useful ability, since often it is difficult to precisely focus specular targets (such as clean silicon or diamond) and anticipate their motion prior to the moment of illumination, especially with the narrow depth of field of the fast lenses typically used to collect a large solid angle of light reflected from a target. This ability could also be useful for exploring the 3d debris region of a shocked textured target, or targets having a 3d shape not residing in a single plane.
C. Two-frame movie effect
Furthermore, theory and numerical simulation predict, and preliminary data support, the notion that the target appearance (position and reflectivity) at two separate moments in time is simultaneously recorded by the main and conjugate images of the same hologram, and is partially separable during analysis for narrow features, provided the target was recorded in a defocused configuration where the main and conjugate images are well separated in the z-direction (longitudinally). Hence, for the cracks we bring into refocus, we can make a two-frame movie (Fig. 7) with a subnanosecond frame period. Longer and shorter frame periods are possible with use of different interferometer delays.
This two-frame movie behavior is both surprising and not surprising. It is not surprising because it is consistent with the use of a pulse pair rather than a single pulse to interrogate the sample. Yet, it is surprising because conventional VISAR interferometer theory for the recorded fringe intensity involves an electric field cross term between the fields reflected from the target at the two times,

2 Re⟨E1 E2*⟩ (1)

(Sec. V). But that term, appropriate for the theory of a single-point or line-VISAR, does not account for the effect of 2d-imaging in a defocused configuration. Theory and numerical simulations show that for the 2d-VISAR, the main and conjugate holographic images approximately embody the appearance of the target at two separate times t1 and t2 = t1 + τa, for narrow features, where τa is the delay of the first interferometer.
D. A kind of holography
We note that a 2d-VISAR is a kind of holography, since in both cases two wavefronts at optical frequency are interfered (Fig. 4) and the so-produced fringes at zero or relatively low frequency are recorded across a 2d image. However, with the VISAR both of the two wavefronts reflect off the target (and during that interval the target changes its z position slightly due to velocity, and/or its reflectivity due to shock loading). Whereas in conventional holography one of the two wavefronts, called the reference wavefront, does not reflect off the target and is ideally smooth and uniform, so that features in the fringes represent just the wavefront reflected from the target.
In our holography, the output is a kind of double exposure, with the main and conjugate images along the same optical path, displaced by the shot-time defocus amount (Zdf). For narrow features, which blur rapidly with focus, the narrow features of one image can be distinguished from the blurry defocused version of the same feature in the other image. Hence, the larger Zdf, the larger the features that can be disentangled from the double exposure (we use Zdf = 0.15 in our model). If the target was measured in a focused configuration (the intention of the experimenters taking our data), then Zdf = 0, the main and conjugate images superimpose, and the time-isolating (two-frame movie) effect does not occur. The shocked Si data we show here are just a few instances where the target was accidentally defocused.
E. Comparison to prior holography
Holography has been used previously to measure ejecta from shocked surfaces,14 and a shock front.15 These used a conventional two beam (reference and object) arrangement to create fringes on the detector. Work by Greenfield et al.16 at LANL also used pulsed illumination to freeze the motion of a shocked sample observed in 2d by an interferometer. However, their system does not record simultaneous phase quadrature, as our system does. Consequently, their spatial resolution is significantly less because they (essentially) need to use adjoining pixels to provide the phase quadrature.
Recent ultrashort pulse digital holography work17 at Kyoto Institute of Technology differs from our topology by using a single illumination pulse where we use a pair, and by having the target internal to the interferometer instead of external, as in our technique. With a single illumination pulse they measure a single target image and not a change in position over significant time. Thus, they cannot measure the target velocity to the same precision that we can with a double pulse.
Since our target is external to the interferometers, our target can be a safe distance away from the operators and equipment, and the target position or surface texture does not change the interferometer alignment. In our configuration, homodyning is performed so it can work with complicated wavefronts from diffusely scattering targets.
II. APPARATUS
Figure 1 shows the target arrangement of the 2d-VISAR at LLNL's Jupiter Laser facility to study shocked Si.11 A 6 ns 532 nm Janus Laser pulse hitting 10 μm of CH creates a drive pulse buffered by 50 μm of Al. The drive pulse has a square profile ∼500 μm wide. The 2d and 1d VISAR optics image a ∼1 mm region on the back of the Si. The line-VISAR simultaneously measures the back of the target and provides a time history, with a spatial resolution of 30-50 μm along a 1 mm line of 15 μm width. The moment of the snapshot 2d-VISAR exposure is recorded by a fiducial. The 2d-VISAR probe beam is a 3 ps 1 mJ pulse of 400 nm doubled light from a Ti-sapphire laser. The target back is imaged through the detecting interferometer of the VISAR system, having a 268 ps delay between its arms. To create a quadrature phase recording, the image is recorded four times simultaneously in four quadrants of the same detector (Fig. 8) with a 1/4 wavelength delay shift between quadrants.9,10
III. PUSH-PULL DATA ANALYSIS
The data analysis technique10 for processing the four quadrants into velocity data uses push-pull math, similar to a conventional single point push-pull VISAR,2 but for each pixel of the image. On a 4000 × 4000 pixel CCD detector we record four 2000 × 2000 quadrant images of the target. The quadrants are labeled S0, S90, S180, S270 and ideally have a 90° interferometer phase stepping relationship. A simple astigmatic adjustment corrects non-ideal phase relationships.10 For each pixel (x, y), we form two 2d outputs, nonfringing intensity (I) and complex fringing (W),

I = S0 + S90 + S180 + S270,  W = (S0 − S180) + i (S90 − S270). (2)

The complex fringing output is further expressed in polar coordinates and yields phase (θ) and magnitude (Mag) outputs

θ = arg(W),  Mag = |W|. (3)
The magnitude |W| is similar to the intensity I but without detector and incoherent light offsets. (Comparison of the two is a useful check of data validity.) The refocusing operates on W, not I, and hence the |W(x, y)| and θ(x, y) are the two useful refocused outputs.
Conversion from phase θ(x, y) to target velocity is by multiplication by the velocity per fringe constant (VPF), set by the average interferometer delay τ as described by the equation18 for a conventional VISAR, VPF = λ/[2τ(1 + δ)], where δ is the small etalon dispersion correction; for a windowless configuration and τ = 268 ps this is VPF = 705 m/s per fringe. This is essentially the distance (in units of λ) traveled during an interval τ, with a factor of 2 for the round trip since light reflects normally off the target.
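A minimal sketch of this per-pixel reduction (hypothetical array names; the real pipeline also registers the quadrants and corrects non-ideal phase stepping10; the value of δ here is an assumption chosen to reproduce the quoted 705 m/s per fringe):

    import numpy as np

    def push_pull(S0, S90, S180, S270):
        """Per-pixel push-pull reduction of the four phase-stepped quadrants,
        Eqs. (2) and (3)."""
        I = S0 + S90 + S180 + S270            # nonfringing intensity
        W = (S0 - S180) + 1j * (S90 - S270)   # complex fringing output
        return I, W

    # Phase -> velocity via the VPF constant, windowless configuration:
    lam, tau, delta = 400e-9, 268e-12, 0.058  # m, s, assumed dispersion correction
    VPF = lam / (2 * tau * (1 + delta))       # ~705 m/s per fringe
    # theta = np.angle(W) / (2 * np.pi)       # fringe phase, in cycles
    # velocity_map = theta * VPF              # m/s, per pixel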
IV. NUMERICAL REFOCUSING
Figure 9 shows example defocused data, a nonfringing image of windowless shocked Si (shot 020910-04). The many worm-like features are cracks, slightly out of focus. The defocusing was accidental; focusing on clean specular targets is difficult.
The wormlike features can be brought into narrow crack appearance via a numerical refocusing operation (Fig. 5): for an 800 × 800 pixel subset of the image, a fast Fourier transform (FFT) brings W(x, y) in pixel space from the focal plane to the Fourier plane W(u, v), where u, v are spatial frequencies that range from −400 to 400, and the edges correspond to Nyquist frequencies of ±0.5 pixel⁻¹. For the nearly collimated light upstream of the camera lens, the wavefront at the pupil plane is approximately the same as at the Fourier plane. A paraboloidal phase shift Φ(u, v), in units of cycles,

Φ(u, v) = C ΔZ (u² + v²)/400², (4)

where C = 20 is an arbitrary constant, is applied to the phase of W(u, v) to simulate the addition of a thin lens that shifts the focal position proportional to ΔZ. The inverse Fourier operation is then performed, and we use the outputs |W′| for an ordinary nonfringing image and phase θ(x, y) for velocity.

We designate the Π symbol to represent the propagation (diffraction or refocusing) process which finds the new W relocated by ΔZ,

W′(x, y) = ΠΔZ(W(x, y)) = iFFT{exp[i2πΦ(u, v)] FFT[W(x, y)]}, (5)

where FFT is the fast Fourier transform and iFFT its inverse.
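In code, the refocusing engine of Eq. (5) is only a few lines (a sketch with numpy; the normalization assumes Eq. (4), so that Φ at the pupil edge equals C ΔZ):

    import numpy as np

    def refocus(W, dZ, C=20.0):
        """Numerically refocus a complex image W by dimensionless dZ, Eq. (5):
        FFT to the Fourier/pupil plane, apply the paraboloidal phase of
        Eq. (4), then inverse FFT back to the focal plane."""
        n = W.shape[0]                             # e.g., an 800x800 subarray
        u = np.fft.fftfreq(n) * n                  # frequency indices, -n/2..n/2-1
        phi = C * dZ * (u[:, None]**2 + u[None, :]**2) / (n / 2)**2  # cycles
        return np.fft.ifft2(np.fft.fft2(W) * np.exp(2j * np.pi * phi))

    # Usage: Wp = refocus(W, -0.26); then display np.abs(Wp) for the
    # nonfringing image and np.angle(Wp) for the velocity (phase) map.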
Figure 6 shows example numerical focusing on the data of Fig. 9, zooming in on crack features “J” and “L” shown in the yellow and green boxes of Fig. 9. The top panels (a) and (c) are as-measured data recorded out of focus. Note the diffraction rings in (a). The true shapes of these features are obscured by the defocused condition.
The bottom panels (b) and (d) are after numerical refocusing using ΔZ = −0.26. The rings of (a) have come together to produce a narrow crack of width about 4 pixels or 2 μm. Note the interesting trigonal feature in (d) which manifests the symmetry of Si at [111] orientation. Hence, refocusing has revealed interesting shapes of features otherwise obscured, and it works for both the reflectivity (magnitude of W) and velocity maps (phase of W).
One knows when focus has been achieved because many features distributed around the whole image sharpen simultaneously. This is hard to convey in a single still image. To aid in finding focus, we produce movies with ΔZ varying with the frame number; then we can quickly vary the focus as if it were an analog lens being moved.
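Such a focus sweep is a short loop over the refocus() sketch above (frame output left schematic; W is the measured complex image):

    import numpy as np

    # Sweep dZ with the movie frame number, like turning an analog focus knob;
    # refocus() is the Eq. (5) sketch from earlier in this section.
    for i, dZ in enumerate(np.linspace(-0.5, 0.5, 101)):
        frame = np.abs(refocus(W, dZ))    # nonfringing image at this focus
        # save_frame(i, frame)            # hypothetical writer, e.g., imageio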
The broad features, in magnitude or phase, are not significantly changed by the refocusing operation, similar to analog refocusing. Note that the backgrounds of (c) and (d) are similar for broad features. Thus, velocity maps previously calculated using out of focus data are still valid except for the very smallest scales (few micrometers).
Figure 10 compares phase and magnitude refocused results for spots J and K. The phase maps show fewer fluctuations than the magnitude maps, since the former is a difference in wavefront phases (θ1 − θ2) for defocused features, and so systematic irregularities tend to cancel out, whereas the magnitude maps are a product of the two wavefront reflectances r1r2 for defocused features, so any irregularities do not cancel out. Figure 11 shows that many of the fluctuations seen in the magnitude in the shot exposure are also seen in a reference exposure taken prior to the shot and refocused to the same position as the shot exposure, and therefore are not shock related.
Figure 12 shows another shot (020910-03) on silicon taken on the same day, also accidentally in a defocused condition. The defocus amount here is smaller, 0.08 instead of 0.26.
A. Blur from target lens
Figure 13 is a lineout across a crack, showing that the features we observe after numerical refocusing may be as small as 3 or 4 pixels, at 0.53 μm per pixel, thus 1.5–2 μm. Some of the cracks are quite dark, having a nearly zero magnitude minimum. This is consistent with (not smaller than) the simulated diffraction limited blur from our f/3 target lens as shown in Fig. 14. A 1-pixel crack creates a partially deep minimum (unlike what we see in the data), but a 2 pixel or wider crack creates a much darker minimum, more like that in the data.
That our f/3 target lens has about half the numerical aperture of the f/1.3 boundaries of the Fourier/pupil plane is consistent with the Fourier transform of the W(x, y) data shown in Fig. 15. Note that the dark area surrounding the f/3 region has very low noise, demonstrating that our instrument has very good dynamic range, and that in the future a larger numerical aperture lens such as f/1.4 could be used effectively to improve the spatial resolution 2× further, to ∼1 pixel or 0.5 μm.
The boundary of the Fourier (∼pupil) plane is assigned a value of f# = (0.53 μm)/λ = 1.33 using the grating equation applied to the Nyquist spatial frequency in the focal plane. That this Fourier/pupil plane scaling is correct is confirmed by observing in our refocusing model that a solid circular disk at the focal plane produces an Airy ring pattern at the Fourier plane having the correct angular size.
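The assignment follows from one line of arithmetic (a sketch; values from the text):

    # Grating equation applied to the Nyquist frequency of the focal plane:
    lam, pixel = 0.40, 0.53            # um: 400 nm probe, 0.53 um per pixel
    sin_theta = lam * (0.5 / pixel)    # diffraction angle of Nyquist frequency
    f_num = 1.0 / (2.0 * sin_theta)    # = pixel/lam at the plane boundary
    print(f_num)                       # 1.325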
B. Physical length units of refocus parameter ΔZ
In our refocusing mathematics, we use a dimensionless unit for ΔZ that scales the amount of parabolic delay Φ(u, v) added to W(u, v), changing the curvature of the wavefront so that the focus is repositioned. We calculate a corresponding physical length for ΔZ, based on the angle the delay Φ(400, 0) makes at the edge of the Fourier/pupil plane and small angle approximations to the trigonometry.
The gap between a circle (which approximates the parabolic wavefront and whose center is the focus) and an inscribed triangle having its apex at the circle center is the edge value of the parabolic shape Φ(400, 0). One finds that the ratio between the physical ΔZ and Φ is

ΔZphys/Φ(400, 0) = 8 (f#)² λ, (6)

which is ∼5.7 μm per cycle of edge delay for f# = 1.33 and λ = 0.4 μm.
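As a worked example under the small-angle scaling of Eq. (6) (a sketch; the scaling itself is the approximation described above):

    # Physical defocus corresponding to the dimensionless dZ of shot 020910-04:
    lam, f_num, C = 0.40, 1.33, 20.0   # um, pupil-edge f-number, Eq. (4) constant
    dZ = 0.26                          # dimensionless refocus used in Fig. 6
    phi_edge = C * dZ                  # parabolic delay at the pupil edge, cycles
    dZ_phys = 8 * f_num**2 * lam * phi_edge
    print(dZ_phys)                     # ~29 um of physical defocus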
V. THEORY
A. Overview
Given knowledge of the optical electric field E at one location, one can in principle use full diffraction theory (such as Chap. 8 of Born and Wolf19) to calculate (propagate) the E anywhere else along the path, although the mathematics may be complicated when expressed analytically. The propagation, which we denote as E′(x, y) = ΠΔZ(E(x, y)) with refocus parameter ΔZ, also applies to the sum of two pulses; the pulse pair is additive in electric field. This is relevant since holography operates on the interference between two wavefronts.
We simplify diffraction theory and just apply a parabolic delay to the wavefronts at the Fourier/pupil plane to refocus by small amounts. The Fourier transform of E(x, y) in the focal plane yields E(u, v) in the Fourier plane, which is one focal length behind the pupil plane. Since we are using nearly collimated rays and weak amounts of wavefront curvature, we approximate the Fourier plane to also describe the pupil plane. Hence, ΔΦ ≈ ΔΦ′ in Fig. 5(a). We have for the refocused field E′(u, v) = E(u, v) exp[i2πΦ(u, v)], with Φ(u, v) given by Eq. (4).
Each detector quadrant records an intensity Sn(x, y) related to the electric fields E1 and E2,

Sn(x, y) = ⟨|E1 + E2|²⟩ = ⟨|E1|²⟩ + ⟨|E2|²⟩ + 2 Re⟨E1 E2*⟩. (7)

Due to the phase stepping in Eq. (2) for W, all the terms cancel except the cross term ⟨E1 E2*⟩. Writing each defocused wavefront as a uniform background plus a deviation signal, E′ = 1 + D (Sec. V C), the cross term becomes

W′ ∝ (1 + D1)(1 + D2)* = 1 + D1 + D2* + D1 D2*. (8)

We assume a small signal |D| < 1 so that we neglect the 2nd order term D1 D2*. This is reasonable since one or both of the components is usually defocused (unless Zdf = 0), and defocusing spatially dilutes the amplitude of D. Equation (8) then has the additive situation, rather than multiplicative, that we are seeking to explain the time-isolation effect. The complex conjugate on D2 is what will cause the two images to refocus at opposite polarities of ΔZ (Sec. V D).
B. Theory: In-focus case
Consider the conventional case where the target is well-focused on the camera detector. The wavefront arriving at the detector is

E1(x, y, t) = r1(x, y) exp[i2πθ1(x, y)] ε0(t). (9)

We are not concerned with the details of ε0(t), which describes the electric field at optical frequencies, since the detector is integrating. The rn is a reflectivity coefficient. The target surface position in z (along the optical path) is expressed by the phase θ, so that a change in z of a wavelength λ creates one cycle of phase change.
The other pulse is delayed by the first interferometer so it interrogates the sample at two instances separated by τa,

E2(x, y, t) = r2(x, y) exp[i2πθ2(x, y)] ε0(t − τa). (10)
The 2nd interferometer reverses the delay shift by amount τb, reducing the net delay to (τa − τb), and we assume the residual offset is smaller than the pulsewidth (3 ps) so that we have good fringe visibility (∼50%). This is easy to achieve. We can then assume τa = τb = τ, so that target velocity over the interval τ creates a change in z that manifests as a phase shift in the measured fringe. The δτ is the ∼λ/4 quadrature stepping. Expressing this as a phase change ϕn = δτn/λ, the pulse interfering at the detector is

E2(x, y, t) = r2 exp[i2π(θ2 + ϕn)] ε0(t). (11)
The intensity of a detector quadrant is

Sn(x, y) = ⟨|E1 + E2|²⟩ = [r1² + r2² + 2 r1 r2 cos 2π(θ1 − θ2 − ϕn)] ⟨|ε0(t)|²⟩. (12)

We ignore the first two terms since they contribute constant intensity independent of the phase stepping, and thus will cancel in the push-pull Eq. (2) for W. Second, the choice of quadrature phase steps creates coefficients exp(i2πϕn) = 1, i, −1, −i for the cross term, so that Eq. (2) yields

W(x, y) = 4 r1 r2 exp[i2π(θ1 − θ2)] ⟨|ε0(t)|²⟩, (13)

where ⟨|ε0(t)|²⟩ accounts for the integrating action of the detector.
So the reflectivity perceived by the apparatus is r1(x, y)r2(x, y). These are entangled together; r1 cannot be isolated from r2. Similarly, the target position measured by the W fringe phase is only sensed through a change (θ1 − θ2). These are entangled, and the separate target surface profiles are not isolated.
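A toy numeric check (assumed values) makes the entanglement explicit: the recovered magnitude is only the product r1r2, and the recovered phase is only the difference θ1 − θ2:

    import numpy as np

    r1, r2 = 0.9, 0.7                  # reflectivities at t1, t2 (assumed)
    th1, th2 = 0.10, 0.22              # surface phases at t1, t2, in cycles
    phi = np.array([0., 90., 180., 270.]) / 360.0          # quadrature steps
    S = r1**2 + r2**2 + 2*r1*r2*np.cos(2*np.pi*(th1 - th2 - phi))  # Eq. (12)
    W = (S[0] - S[2]) + 1j * (S[1] - S[3])                 # push-pull, Eq. (2)
    print(abs(W) / 4, r1 * r2)         # 0.63 0.63 : only the product is sensed
    print(np.angle(W) / (2*np.pi) % 1, (th1 - th2) % 1)    # 0.88 0.88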
C. Theory: Defocused case
The numerical simulation shows that the time-isolating effect does not work for targets having white features against a zero background, but works for a bright field configuration, which could be white or black features against a gray background. This suggests re-expressing the defocused wavefront as a combination of a background and a deviation signal, E′ → (1 + D), so that the as-measured defocused wavefronts are

E1′(x, y, t) = ΠZdf(E1) = (1 + D1) ε0(t) (14)

(where ΔZ = Zdf), so that the diffracted or propagated form of the target wavefront is expressed as a uniform background of unity plus a deviation D1.
Similarly for the other pulse, except there is also the phase stepping phasor, which we keep together with the ε0(t) term, not with the new 1 + D term,

E2′(x, y, t) = (1 + D2) exp(i2πϕn) ε0(t). (15)
Then when we form W′, which is the push-pull combination Eq. (2) applied to the defocused quadrant intensities, the quadrature coefficients again select the cross term,

W′ ∝ (1 + D1)(1 + D2)* = 1 + D1 + D2* + D1 D2*. (16)
Because we are defocused, the diffracted energy is spread over a wider area and thus diminishes in amplitude, so |D| < 1 and thus we can neglect the 2nd order term D1 D2*, leaving

W′ ≈ 1 + D1 + D2*, (17)

which shows additive rather than multiplicative behavior, and thus shows how the time-isolating behavior can occur (Figs. 16 and 17).
D. Theory: Refocused post-measurement case
At this point, we have described the measured complex data (W′) in its as-recorded configuration, which is assumed optically defocused by ΔZ = Zdf. Now we will describe the effect of refocusing the data to numerically undo the optical defocusing.
We apply the numerical refocusing engine Π to the data W′ with an opposite refocus amount ΔZ = −Zdf to produce a refocused result

W′′ = Π−Zdf(W′) = 1 + Π−Zdf(D1) + Π−Zdf(D2*). (18)

Note that the refocusing function Π ignores the 1 or any constant, since a constant transforms to the (u, v) = 0 position where there is no phase adjustment, Φ(0, 0) = 0. The (1 + D1) term refocuses back to its focused state, while the other term, D2*, becomes doubly defocused instead. This is because the complex conjugate, which flips the phase polarity in the pupil plane, causes the added parabolic delay Φ(u, v) to have the opposite effect on the wavefront, increasing the amount of parabolic component instead of cancelling it. Since it becomes doubly defocused, it spreads out even more. Writing dn = Π−Zdf(Dn) for the deviations restored to their focused state, the refocused result is

W′′(ΔZ = −Zdf) ≈ 1 + d1 + Π−2Zdf(d2*), (19)

and for the opposite holographic image refocused at the opposite polarity ΔZ = +Zdf,

W′′(ΔZ = +Zdf) ≈ 1 + Π+2Zdf(d1) + d2*. (20)

We call these two reconstructions the main and conjugate holographic images (which is which depends on sign conventions, see the Appendix).
Each reconstructed image W′′ consists of two components, an in-focus one and a doubly defocused one. The in-focus component describes the single wavefront E1 (the target at t1), accompanied by a doubly blurred component of E2 (the target at t2), or vice versa. Because the doubly blurred component consists of rings that tend to be weaker and expanded beyond the immediate feature, it forms a blurred background to the feature and can be distinguished from the feature, at least for narrow features smaller than the ring diameter. The larger the amount of shot time optical defocus Zdf, the larger and weaker are the doubly defocused rings, which facilitates the isolation of features from the ringing background.
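The polarity flip of the conjugate term is easy to verify numerically (a minimal sketch, restating the Eq. (5) engine of Sec. IV):

    import numpy as np

    def refocus(W, dZ, C=20.0):        # Eq. (5) engine, as sketched in Sec. IV
        n = W.shape[0]
        u = np.fft.fftfreq(n) * n
        phi = C * dZ * (u[:, None]**2 + u[None, :]**2) / (n / 2)**2
        return np.fft.ifft2(np.fft.fft2(W) * np.exp(2j * np.pi * phi))

    n = 128
    d = np.zeros((n, n), complex)
    d[n//2, n//2] = 1.0                # a narrow test feature (in focus)
    D = refocus(d, +0.15)              # measured-as-defocused deviation, Zdf = 0.15
    a = refocus(np.conj(D), -0.15)     # refocusing the conjugated term...
    b = np.conj(refocus(d, +0.30))     # ...defocuses it twice as far instead
    print(np.allclose(a, b))           # True: conjugation flips the polarity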
VI. DISCUSSION
A. Support of time-isolation by simulation
Figure 18 shows a numerical simulation demonstrating the time isolation, for a simulated amount of shot time defocus of Zdf = 0.15, which is of the same order as the amount of defocusing observed in data (0.26 and 0.08 in the two cases shown in Figs. 11 and 12). The hypothetical target has a letter “1” at t1 and a letter “2” at t2, both as a 100% dark reflectivity feature (center image) and as a 0.2 cycle phase (velocity) feature (slightly to upper left of center). Panel (a) shows the simulated as-measured appearance, which is all rings. Panels (b) and (c) show the simulated recovered images, which are independent, at the two positions of refocus ΔZ = ±0.15. Note that the doubly defocused background rings are faint and of broad enough diameter to easily distinguish the “1” or “2” features from each other's background. These reflectivity features were 100% deep in the target model (0 intensity for dark pixels, 1 intensity for white pixels), i.e., not a small signal. Hence, the small signal condition is apparently not required to produce the time isolation effect, but it is convenient for simplifying the algebra.
The time-isolating or “two-frame” effect is most noticeable for the narrow features, and is not present for broad features. This is because broad features do not change significantly under double defocus. The larger the Zdf, the larger the scale of features which can be time isolated.
Figure 19(b) shows that a simulated reflectivity feature on the target presents best in the magnitude map output, and a simulated velocity (z position or fringe phase) feature presents best in the phase map output.
Figure 19(a) shows that this two-frame movie ability is due to the interaction of the two inner pulses (of the four) through phase stepping, not the outer two pulses, since if the simulation is modified to defeat the phase stepping, the two frames show the same result, not two independent results. This also shows that dust specks on common path optics, such as windows to the target chamber, which create diffraction rings on the detector, do not produce this two-frame movie ability (since they are not changed by the phase stepping). However, dust specks on a mirror internal to an interferometer, which therefore appear in one pulse but not the other, can cause an artifact. These will also appear on the reference exposure taken prior to the shot, and thus can be distinguished from shock related behavior.
B. Support of time-isolation in data
Figure 7 (shot 020910-04) shows a change in crack appearance between the conjugate and main frames of shocked Si. Because the cracks appear more developed at ΔZ = −0.26 than at ΔZ = +0.26, we have assigned the negative Z to represent the main image at the later time t2, since it would be unnatural for cracks to reduce in severity with time under shock loading. The polarity of Z is set by the polarity of Zdf, implying that the target was initially too far from the target lens, causing the camera image to focus in front of the detector, so that retarding the phase with a negative Φ(u, v), which brings the camera focus later (away from the interferometer, toward the detector), brings the image into refocus.
Another silicon shot (020910-03) of the same series also shows time development between the conjugate and main images, with the same Zdf polarity: the cracks appear more developed for ΔZ = −0.08 than for ΔZ = +0.08. That we see a change between the main and conjugate images is consistent with the two-frame effect. However, our understanding of the fracture dynamics of shocked Si is as yet insufficient to make a detailed comparison between theory and measurement.
VII. CONCLUDING REMARKS
A. Silicon crack growth velocity estimated
Comparing the conjugate and main images of shocked silicon in Fig. 7, which theory ascribes to the change in appearance of the sample over the interferometer delay of 0.27 ns, we estimate crack growth of order ∼15 μm, yielding a crack growth velocity of ∼60 μm per ns.
B. Effect of laser speckle
The dual interferometer topology allows incoherent or coherent illumination sources. That is, the particular phase of the illumination wavefront entering the dual interferometer system (“input pulse” of Fig. 2) is immaterial, because the interference in the second interferometer is between coherent echoes of that wavefront. However, the magnitude of the wavefront matters. For semi-coherent sources such as the 3 ps pulsed laser we use, laser speckle in the input illumination pulse manifests as a spatially varying intensity modulation, a multiplicative factor that propagates through the system and manifests in the refocused image as a modulation, albeit with a slightly different pattern because of the focal change. This modulation can be a source of spatial noise when identifying features on the target of similar size, but it does not change the phase of the fringe used for determining the velocity shift. This is not a serious problem for our system, as we can spatially average our image data to reduce speckle intensity modulations.
However, if the illumination intensity goes to zero in a speckle, or if the target is very absorptive or reflects the light at an angle that misses the collection lens, then the fringe phase in those regions is undefined or excessively noisy. We encounter this problem in some of our shocked targets.
In principle, an incoherent illumination pulse would have less speckle induced intensity modulation. While we have demonstrated production of fringes and velocimetry in our apparatus configuration with continuous white light illumination from an incandescent lamp,5 and a few microseconds long flash lamp,6 we have not yet experimented with, say, passage of an ultrashort pulse through a water cell to generate short-pulse yet broadband incoherent illumination.
C. What about the uncertainty principle?
Although we measure both velocity and position of a target quasi-simultaneously, we do not violate the uncertainty principle because our pulse pair illumination creates some ambiguity at the time scale of the pulse separation.
This work is preliminary, but we are excited about the possibility of exploring the three-dimensional nature of rapidly moving and dynamically changing targets in a way not possible with a line or point VISAR.
ACKNOWLEDGMENTS
This work was performed under the auspices of the (U.S.) Department of Energy (DOE) by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. We thank the Jupiter Laser facility for their support.
APPENDIX: NUMERICAL MODEL
1. Simulated target
The numerical simulation operates on the same 800 × 800 array used for reprocessing data. A simulated target reflects light differently for the two pulses, designated 1 and 2.
Surface at time t1: E1(x, y) = 1 everywhere except for a dark reflectivity feature in the center of the image in the shape of a “1,” where E1 = 0, and a phase feature slightly to upper left of center where the magnitude of E1 is unity but the phase is shifted by 0.2 cycle, so that E1 = exp(i2π · 0.2).
For the surface at time t2, E2 is similar but a shape “2” is used instead, and every pixel is multiplied by a phase stepping shift of

exp(i2πϕn), ϕn = n/360, (A1)

where the subscript n = 0, 90, 180, or 270 is in degrees.
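A sketch of this target construction (the letter shapes are replaced by hypothetical block masks for brevity):

    import numpy as np

    n = 800
    E1 = np.ones((n, n), complex)                     # unit background at t1
    E1[380:420, 395:405] = 0.0                        # stand-in for the dark "1"
    E1[330:350, 345:365] *= np.exp(2j * np.pi * 0.2)  # 0.2-cycle phase feature

    E2 = np.ones((n, n), complex)                     # unit background at t2
    E2[380:420, 390:410] = 0.0                        # stand-in for the dark "2"
    E2[330:350, 345:365] *= np.exp(2j * np.pi * 0.2)  # its phase feature
    # The phase stepping phasor of Eq. (A1) multiplies E2 at detection time
    # (next section).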
2. Simulated measure-as-defocused
The net magnitude or intensity reflected off the target is the sum of the intensity of the two outside pulses, plus that of the two inside pulses. The outside magnitude image Magoutside(x, y) is independent of phase stepping and is

Magoutside(x, y) = |E1′|² + |E2′|², (A2)

where the prime indicates that we propagated the E field through our refocusing engine,

E′ = ΠΔZ(E) = iFFT{exp[i2πΦ(u, v)] FFT[E]}, (A3)

Φ(u, v) = C ΔZ (u² + v²)/400², (A4)

where C = 20. We simulate the target measured in a defocused configuration by amount Zdf = +0.15 by passing the E's through ΠΔZ with ΔZ = Zdf.
The inner two pulses have a net magnitude

Maginner(x, y) = |E1′ + E2′|², (A5)

and the total magnitude is Sn = Magoutside + Maginner, where the subscript n represents the four phase step choices.
3. Detection by simulated instrument
The total magnitude is evaluated four times with the four phase shifts to simulate being recorded simultaneously by the four quadrants. Then these are added and subtracted by the push-pull equation to make a simulated complex image

W = (S0 − S180) − i (S90 − S270), (A6)

where the phase stepping subscript is in degrees. The magnitude |W(x, y)| is then displayed as the “as recorded” or ΔZ = 0 case shown in Fig. 18(a). As expected, this shows only out of focus rings from the four features (phase or reflectivity, time t1 or t2).
4. Simulated refocusing
Now the simulated data W are put through the refocusing engine to create a refocused W′ (Fig. 18) using the opposite polarity refocusing ΔZ = −Zdf, to undo the defocusing imparted at recording time. We recover the so-called main, t = t2 image having “2” using a negative ΔZ = −Zdf, and the conjugate image “1” if we use a positive ΔZ = +Zdf.
It is easy to mix up the polarities. For our particular math engine and how positive phase is defined (clockwise vs counterclockwise), this requires using a minus sign in front of the complex portion of W in Eq. (A6). Otherwise, the polarity of ΔZ needed to recover “2” is reversed. Of course, it is arbitrary which image is called “main” and which “conjugate,” and arbitrary which instance is defined to be earlier, t1 or t2. But we do have a link to a physical phenomenon in that the cracks of Figs. 9 and 12 seem more developed in one frame compared to the other, and it is logical to expect cracks to deepen their light absorption with the arrow of time.
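Finally, the two frames are recovered by refocusing W at the two polarities (a sketch continuing the code above):

    main_img = refocus(W, -Zdf)    # the "2" (t = t2) frame comes into focus
    conj_img = refocus(W, +Zdf)    # the "1" (t = t1) frame comes into focus
    # Display np.abs(main_img) and np.abs(conj_img): each letter appears sharp
    # in one frame, against the faint doubly defocused rings of the other
    # (Fig. 18).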