The recently developed Light Field Moment Imaging (LMI) technique is adopted to show the stereoscopic structure of samples studied with Coherent Diffractive Imaging (CDI), where 3D images have traditionally been generated with complicated experimental procedures, such as rotation of the sample, and time-consuming computation. With LMI, an animation covering a large view angle can be generated very quickly, and the 3D structure of the sample can be shown vividly. This method can find many applications in coherent diffraction imaging with X-rays and electron beams, where a glimpse of the hierarchical structure is required and a quick, simple 3D view of the object is sufficient. The feasibility of the method is demonstrated theoretically and experimentally with a recently developed CDI method called the Ptychographic Iterative Engine.

Coherent diffraction imaging (CDI) is one of the most promising methods to retrieve phase information from diffraction intensities, and it can reach the theoretical λ/2 spatial resolution with a simple setup. One of the earliest CDI strategies is the Gerchberg-Saxton (G-S) algorithm1 proposed in 1972, in which the structure of the material is reconstructed iteratively while constraints in the object domain and the Fourier domain are satisfied. Improved algorithms based on the G-S method, using a finite support and a non-negativity constraint, were proposed by Fienup2–4 in the subsequent years, such as the error-reduction algorithm and the input-output algorithm. However, for objects with complex distributions or soft edges, the iteration may suffer from slow convergence or even stagnation. Despite the successful applications of Fienup's methods in various fields, more rigorous constraints are needed to overcome potential stagnation and guarantee fast convergence for complex objects. One successful strategy to overcome these problems is the Ptychographical Iterative Engine (PIE),5–7 which uses a set of diffraction patterns recorded with translational diversity in the plane perpendicular to the optical axis. The prominent advantages of PIE over other CDI methods are its extendable field of view and its better convergence speed, while the ability to reach the theoretical wavelength-limited resolution is retained. These advantages have motivated extensive research on the PIE technique with visible light and short-wavelength radiation (hard X-rays and electron beams) in recent years.

When the standard PIE algorithm is applied to phase reconstruction, the sample is assumed to be thin enough to be regarded as a 2D object. This approximation simplifies the phase-retrieval calculation, but the 3D depth information contained in the phase distribution (recall that the complex-valued transmission function of the object is reconstructed) cannot be visualized in the two-dimensional (2D) image. Nevertheless, this depth information is of great interest in many fields. The PIE algorithm can handle 3D imaging: quantitative 3D maps at the nanoscale were generated with PIE-based X-ray tomography and Fresnel Coherent Diffraction Imaging (FCDI) tomography in 2010 and 2012, respectively.8,9 These implementations have demonstrated the capacity of PIE for quantitative 3D image reconstruction. In addition, a multi-slice PIE algorithm was proposed to divide the specimen into slices or layers and image their phase and modulus simultaneously.10 The multi-slice algorithm requires neither rotating the specimen nor scanning along the optical axis, and it works well for thick specimens even in the presence of multiple scattering. Like other tomography techniques, PIE-based X-ray tomography and FCDI tomography suffer from a time-consuming data-recording process; the acquisition can take longer than 36 h, so the requirements on the stability of the equipment, including the illumination source, are extremely high. Multi-slice PIE is an alternative to traditional tomography for obtaining 3D structural information of the sample.
Compared with PIE-based X-ray tomography and FCDI tomography, its data-acquisition time is remarkably reduced to about ten minutes, so the requirements on environmental and equipment stability are quite low; however, its reconstruction is very time consuming. For example, slicing the object into three layers and reconstructing them with PIE can take more than three days on a common desktop computer. Considering that a quick and simple 3D view of an object is sufficient in fields where only a glimpse of the hierarchical structure is required, obtaining the full 3D structure at large computational cost is not always necessary, and the current tomographic methods, including multi-slice PIE, are not ideal for quick or online observation. In this paper, a quick method to show the 3D structure of the observed sample is proposed using the technique of Light Field Moment Imaging (LMI).11,12

Light Field Moment Imaging was proposed by K. B. Crozier and Antony Orth of Harvard University to produce 3D perspective views with a common image sensor and no extra hardware.13–15 It obtains the light field information not by direct measurement but by deducing the angle of the incident light at each pixel of the image sensor. In this study, we demonstrate how to apply the LMI algorithm to achieve 3D perspective views for the PIE technique. The method retains the strengths of the PIE algorithm, a large field of view and fast convergence at high resolution, while producing perspective-shifted 3D images in a simple and quick way. The data needed are taken from a standard PIE experiment, only a small fraction of the data used in PIE-based X-ray tomography and FCDI tomography, where the sample must be rotated over a wide range during recording. In comparison with the multi-slice PIE algorithm, only about ten minutes is required to finish all the computation. The proposed method therefore combines the advantages of traditional tomography and multi-slice PIE, and it can find many applications in microscopy with visible light, X-rays, and electron beams. Its feasibility is demonstrated with numerical simulations and visible-light experiments.

The PIE algorithm has developed rapidly and its details can be found in many publications,5–7 so only a brief introduction is given here. The experimental setup of PIE is shown schematically in Figure 1, where an extended sample with transmission function O(r) is fixed on a translation stage downstream of a pinhole, and the illumination on its front surface is described by the probe function P(r), where r is the coordinate in the object plane. Diffraction patterns are recorded by a CCD camera in the far field of the sample. Like other coherent diffraction imaging techniques,4,16,17 the PIE algorithm propagates the radiation field forward and backward between the object and recording planes iteratively to reconstruct the object transmission function. The retrieval algorithm5–7 starts with a random guess of the object transmission function O_{i,n}(r), where the subscripts i, n denote the initially guessed object function at the nth iteration. The iteration then proceeds with the following steps.

  (a) The guessed exit wave function, when P(r) is shifted by R relative to the object position in the nth iteration, is calculated from the known probe and the current guessed object function O_{i,n}(r):

    \Psi_{i,n}(\mathbf{r},\mathbf{R}) = O_{i,n}(\mathbf{r})\, P(\mathbf{r}-\mathbf{R}) \qquad (1)

  (b) In diffraction space, the amplitude and phase of the resulting exit wave are calculated from the Fourier transform of the guessed exit wave function, where k is the reciprocal coordinate of the object-space coordinate r:

    \tilde{\Psi}_{i,n}(\mathbf{k},\mathbf{R}) = \mathrm{FFT}[\Psi_{i,n}(\mathbf{r},\mathbf{R})] = A_{i,n}\exp[i\theta_{i,n}(\mathbf{k},\mathbf{R})] \qquad (2)

  (c) The square root of the recorded intensity I_i(k, R) replaces the amplitude in diffraction space, while the phase remains unchanged:

    \tilde{\Psi}'_{i,n}(\mathbf{k},\mathbf{R}) = \sqrt{I_i(\mathbf{k},\mathbf{R})}\,\exp[i\theta_{i,n}(\mathbf{k},\mathbf{R})] \qquad (3)

  (d) A new, improved estimate of the exit wave function is obtained by inverse Fourier transform back to object space:

    \Psi_{\mathrm{new},n}(\mathbf{r},\mathbf{R}) = \mathrm{FFT}^{-1}[\tilde{\Psi}'_{i,n}(\mathbf{k},\mathbf{R})] \qquad (4)

  (e) The (n+1)th guessed object transmission function is updated in the area covered by the current illumination, using the update function below, where the constant α is chosen appropriately to suppress noise:

    O_{i,n+1}(\mathbf{r}) = O_{i,n}(\mathbf{r}) + \frac{|P(\mathbf{r}-\mathbf{R})|}{|P_{\max}(\mathbf{r}-\mathbf{R})|}\,\frac{P^{*}(\mathbf{r}-\mathbf{R})}{|P(\mathbf{r}-\mathbf{R})|^{2}+\alpha}\,[\Psi_{\mathrm{new},n}(\mathbf{r},\mathbf{R}) - \Psi_{i,n}(\mathbf{r},\mathbf{R})] \qquad (5)

  (f) The sample is moved to the next adjacent position, whose illumination area partially overlaps that of the previous position; this overlap is essential to the success of the algorithm.

With the repetition of steps (a)-(f), the reconstructed object transmission function On(r) is correctly retrieved. One innovation of the PIE algorithm is that the illuminated sample areas of adjacent positions partially overlap each other, which helps retrieve the complex object transmission function. This allows the PIE method to scan a wide field of view while still converging quickly.
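As a concrete illustration, steps (a)-(e) for a single probe position can be sketched in a few lines of NumPy. This is our own minimal sketch under the assumptions of a square pixel grid and a pre-shifted probe, not the authors' implementation; the function name and the value of α are illustrative:

```python
import numpy as np

def pie_update(obj, probe, intensity, alpha=0.01):
    """One PIE object update for a single probe position.

    obj       -- current guess O_{i,n}(r) on the object grid
    probe     -- probe P(r - R), already shifted to the current position
    intensity -- recorded diffraction intensity I_i(k, R)
    alpha     -- small constant suppressing noise where |P| is small
    """
    # (a) exit wave from the current object guess and the shifted probe
    psi = obj * probe
    # (b) propagate to the diffraction plane
    psi_k = np.fft.fft2(psi)
    # (c) keep the phase, replace the modulus with the measured amplitude
    psi_k = np.sqrt(intensity) * np.exp(1j * np.angle(psi_k))
    # (d) propagate back to the object plane
    psi_new = np.fft.ifft2(psi_k)
    # (e) weighted update of the object in the illuminated area
    p_abs = np.abs(probe)
    weight = (p_abs / p_abs.max()) * np.conj(probe) / (p_abs ** 2 + alpha)
    return obj + weight * (psi_new - psi)
```

Looping this update over the overlapping scan positions, step (f), and repeating the loop over iterations n gives the full reconstruction.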

FIG. 1.

Schematic of PIE method. A series of diffraction patterns used for PIE phase retrieval algorithm are recorded by a CCD camera. The CCD camera is put in the far field of the sample.


The light field moment imaging (LMI) technique reconstructs perspective-shifted views of the original scene by inferring the angular moments of the light rays collected by an image sensor.11

The light field parameterization L(x, y, z, tanθX, tanθY) is shown schematically in Figure 2: light propagates in a direction whose projections onto the x-z and y-z planes are specified by the angles θX and θY. Light rays arrive at the image sensor at certain angles, and the recorded images contain this light field information. Two images, I1(x, y; z1) and I2(x, y; z2), are captured at slightly different focal planes. Following the mathematics of LMI, the direction of the light field can be calculated from the Poisson equation:

\frac{\partial I(x, y; z)}{\partial z} = \nabla^{2} U(x, y; z) \qquad (6)
FIG. 2.

Schematic diagram of the light field parameterization.


Here U(x, y; z) is a scalar potential defined by ∇U(x, y; z) = I(x, y; z)M(x, y; z), where M = [s(x, y; z), t(x, y; z)] is a vector containing the normalized first angular moments of the light field L; M determines the perspective-shifting effect. I(x, y; z) is defined as [I1(x, y; z1) + I2(x, y; z2)]/2. With the two input images of slightly different focus distances approximating the left-hand side of Eq. (6), the Poisson equation can be solved in Fourier space by applying a filter H:

H(f_x, f_y) = \left[4\pi^{2}(f_x^{2} + f_y^{2})\right]^{-1} \ \text{for}\ (f_x^{2} + f_y^{2}) \neq 0; \qquad H(f_x, f_y) = 1 \ \text{for}\ (f_x^{2} + f_y^{2}) = 0 \qquad (7)

U = \mathrm{FFT}^{-1}\left[H \times \mathrm{FFT}\{(I_1 - I_2)/\Delta z\}\right] \qquad (8)

FFT and FFT−1 denote the fast Fourier transform and its inverse, respectively; fx and fy are the spatial frequencies, and Δz = z2 − z1 is the defocus distance between the two input images. The moment vector field M is then obtained from the scalar potential U and the intensity I. Furthermore, the angular distribution of the light rays L can be assumed to be Gaussian with mean [s(x, y), t(x, y)]:

L(x, y, z, \tan\theta_X, \tan\theta_Y) = \frac{I(x, y)}{2}\exp\{-[\tan\theta_X - s(x, y)]^{2}/\sigma^{2} - [\tan\theta_Y - t(x, y)]^{2}/\sigma^{2}\} \qquad (9)

Here z1 has been dropped for convenience, and σ2 = NA2 is chosen, a value of σ that gives a good perspective-shifting effect. By computing a view for each angle and coupling the views together into an animation, perspective-shifted views of a scene can be achieved without the need for expensive extra hardware.
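The chain from Eq. (6) to Eq. (9) can be sketched numerically as follows. This is a hedged illustration in NumPy with our own function and variable names, a unit pixel pitch, and a small regularizer added where the intensity appears in a denominator:

```python
import numpy as np

def lmi_view(I1, I2, dz, na=0.15, tan_x=0.0, tan_y=0.0):
    """Render one perspective view from two defocused intensity images.

    Solves the Poisson equation (6) for the scalar potential U using the
    Fourier filter H of Eq. (7) as in Eq. (8), forms the moment field
    M = [s, t] from grad(U) = I * M, and evaluates the Gaussian angular
    distribution of Eq. (9) at the view angles (tan_x, tan_y).
    """
    fy, fx = np.meshgrid(np.fft.fftfreq(I1.shape[0]),
                         np.fft.fftfreq(I1.shape[1]), indexing="ij")
    f2 = fx ** 2 + fy ** 2
    H = np.ones_like(f2)
    H[f2 != 0] = 1.0 / (4 * np.pi ** 2 * f2[f2 != 0])   # Eq. (7), H(0,0) = 1
    U = np.fft.ifft2(H * np.fft.fft2((I1 - I2) / dz)).real  # Eq. (8)
    I = 0.5 * (I1 + I2)
    gy, gx = np.gradient(U)                  # grad(U) = I * M
    s, t = gx / (I + 1e-12), gy / (I + 1e-12)
    sigma2 = na ** 2                         # sigma^2 = NA^2
    return 0.5 * I * np.exp(-((tan_x - s) ** 2 + (tan_y - t) ** 2) / sigma2)
```

Sweeping tan_x (or tan_y) over the desired angular range and stacking the resulting frames produces the perspective-shifted animation.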

In conventional optical 3D imaging and coherent diffraction imaging, it is difficult to obtain perspective images over a large field of view unless expensive extra hardware is used; the emergence of LMI provides an easy way to generate perspective-shifted views in these fields.

The key step in an LMI experiment is to record two images of the specimen at two different planes, one in focus and the other slightly out of focus. The interval Δz between the two planes must be small enough that the approximation (I2 − I1)/Δz ≈ ∂I/∂z holds. In theory, the smaller Δz is, the more accurate the perspective-shifted views will be; in practice, because of the limited sensitivity of the detector and the noise inherent in experiments, Δz cannot be made very small. For PIE imaging, the reconstructed complex amplitude √I1 exp(iφ1) includes both the intensity I1 and the phase φ1, so the second image I2 can be calculated from √I1 exp(iφ1) with the Fresnel diffraction formula, and Δz is therefore not limited by the detector sensitivity. In other words, no extra data are needed for LMI to generate perspective-shifted views for PIE, which is the advantage of the proposed method. Fig. 3 shows the data flow of the proposed method: the complex image √I1 exp(iφ1) of the object is reconstructed with PIE, and the intensity I2 of a slightly off-focus image is then computed with the Fresnel formula to obtain the angular moments needed by LMI to generate images at different view angles.
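The defocused image I2 can be synthesized from the PIE reconstruction with a standard angular-spectrum implementation of Fresnel propagation. The sketch below is our own; the default wavelength and pixel size are illustrative values borrowed from the experiments in this paper:

```python
import numpy as np

def defocused_intensity(field, dz, wavelength=632.8e-9, pixel=7.4e-6):
    """Propagate the complex PIE reconstruction sqrt(I1)*exp(i*phi1) by a
    short distance dz and return the intensity I2 of the defocused image.

    Uses the paraxial (Fresnel) transfer function evaluated on the FFT
    frequency grid of the image.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function exp(-i*pi*lambda*dz*(fx^2 + fy^2))
    Hz = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
    out = np.fft.ifft2(np.fft.fft2(field) * Hz)
    return np.abs(out) ** 2
```

Because the second image is computed rather than measured, Δz can be chosen freely, which is the point made in the text above.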

FIG. 3.

The flow chart of the proposed method.


To demonstrate the feasibility, a numerical simulation on a two-layer object is used to generate perspective-shifted views. The first layer is part of a resolution test target, shown in Figure 4(a), and the second layer is a stem slice of a dicotyledon, shown in Figure 4(b). The interval between the two layers is 1280 μm. A series of diffraction patterns is recorded while this sample is scanned with a thin laser beam of 632.8 nm wavelength, and the image of the sample is then reconstructed with the standard PIE algorithm. Figure 4(c) is the reconstructed intensity image; the inset is one of the recorded diffraction patterns. By numerically propagating this image over a short distance of 50 μm, the slightly off-focus image shown in Figure 4(d) is generated. With these two intensity images, the LMI method is used to reconstruct the light field with Eqs. (6)-(8) and to generate the perspective-shifted animation. Figures 4(e) and 4(f) show the reconstructed light field at the leftmost (0°, 10.8°) and rightmost (0°, −10.8°) perspective observation positions in the horizontal direction, respectively. Comparing the two images, the first layer (the resolution test target) shifts obviously relative to the second layer (the stem slice): because of the 1280 μm interval between the layers, their relative positions change with the view angle when observed from different directions, as shown in Figures 4(e) and 4(f). The same effect is found in the vertical direction in Figures 4(g) and 4(h). The dashed rectangles indicate the real positions of the resolution-target bars. This position change is vividly displayed in Video 1 (Multimedia view) and Video 2 (Multimedia view), where the object rotates over an angular range of 21.6° in the horizontal and vertical directions. In these computations, the NA is assumed to be 0.15.

FIG. 4.

Images of the simulation on PIE and LMI. (a) Part of a resolution test target used as the first layer of the sample. (b) The stem slice of dicotyledon used as the second layer; the inset is one of the diffraction patterns. (c) The transmission image of the two-layer sample focused on the second layer. (d) A slightly off-focus image, 50 μm from the image in (c). (e) The perspective view at the view angle of (0°, 10.8°). (f) The perspective view at the view angle of (0°, −10.8°). (g) The perspective view at the view angle of (10.8°, 0°). (h) The perspective view at the view angle of (−10.8°, 0°). (Multimedia view) [URL: http://dx.doi.org/10.1063/1.4897380.1][URL: http://dx.doi.org/10.1063/1.4897380.2]


With the PIE experimental setup shown in Figure 5(a), an experiment is performed on a biological specimen, a fixed ant, to demonstrate the feasibility of LMI in coherent diffraction imaging. A pinhole 1.5 mm in diameter is illuminated by a spherical laser beam of 632.8 nm wavelength, and a CCD of 2048 × 2048 pixels behind the sample records the diffraction patterns. The distances from the specimen to the pinhole and to the CCD camera are 137.5 mm and 120.5 mm, respectively. Figure 5(b) shows six diffraction patterns from adjacent illuminated areas that partially overlap each other. After all 10 × 10 frames of diffraction patterns are recorded during the raster scan of the sample relative to the illumination, the PIE algorithm described above is used to reconstruct the amplitude and phase of the object transmission function. Figure 5(c) shows the image of the specimen, reconstructed at a spatial resolution of 7.4 μm. The specimen is about 5 mm × 2 mm in size and has an appreciable stereoscopic structure, but because PIE is a 2D algorithm, the reconstructed image in Figure 5(c) shows no depth information. This is one of the disadvantages of the common PIE algorithm, and hence the 3D structure of the specimen is shown with the LMI method. The slightly defocused image shown in Figure 5(d) is obtained by numerically propagating Figure 5(c) over a distance of 500 μm. With the LMI algorithm, the final experimental result is shown in Video 3 (Multimedia view), where each frame of the perspective-shifted image L corresponds to a different (tanθx, tanθy) value as the viewpoint translates horizontally: tanθx is gradually changed over the range (−10.8°, 10.8°) while tanθy remains unchanged. Each frame of the video is normalized, producing an effective animation of the scene.

FIG. 5.

The PIE experimental setup and experimental results. (a) The PIE experimental setup. (b) Some of the recorded diffraction patterns of adjacent positions. (c) The image retrieved in the PIE experiment. (d) Computed equivalent defocus image obtained by propagating (c) over a distance of 500 μm. A perspective-shifted animation of the ant specimen is shown in Video 3. (Multimedia view) [URL: http://dx.doi.org/10.1063/1.4897380.3]


Since the light field computed directly from the PIE reconstruction is essentially the transmitted field of the specimen under planar illumination, as shown in Figure 6(a), the specimen can only be observed within a limited view angle (about 10°). This is why the rotation of the specimen is not remarkable in Video 3 (Multimedia view). In LMI with a common bright-field microscope,11 the light from the xenon lamp first passes through a diffusing glass and then illuminates the sample, so the transmitted light can be detected over a large view angle; this is why quite large rotation angles can be reached with the microscope.11 To make the specimen viewable at wide angles with PIE-based LMI as well, a virtual diffusing glass can be placed before the sample to scatter the planar illumination into countless directions, as illustrated in Figure 6(b): a random complex matrix is multiplied onto the original PIE reconstruction, which is then propagated a tiny distance to obtain the slightly off-focus image for LMI. The result is vividly shown in Video 4 (Multimedia view), and Figure 6(c) is one frame of the computation. In comparison with Video 3 (Multimedia view), the rotation angle of the sample in Figure 6(c) is remarkably increased. However, because of the random diffuser, remarkable speckle noise appears in the image. In optical imaging and metrology, a widely used technique to remove coherent speckle noise is dynamic random illumination, in most cases realized by passing the laser beam through a rotating frosted glass. Mathematically, the effect of the dynamic illumination equals the summation of the image intensities of the different diffusers.
Based on this principle, 1000 computer-generated random complex matrices are used to generate the off-focus image shown in Figure 6(d), whose intensity is the average of the 1000 speckled off-focus images. The speckle is efficiently removed. The final LMI result is shown in Video 5 (Multimedia view), where clear images are obtained while a large view angle (21.6°) is reached.
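The dynamic-illumination averaging described above can be sketched as follows, reusing one paraxial propagation step for each random phase screen. The function name and sampling parameters are illustrative, not the authors' code:

```python
import numpy as np

def averaged_defocus(field, dz, n_diffusers=1000, wavelength=632.8e-9,
                     pixel=7.4e-6, seed=0):
    """Average the defocused intensities over many random phase diffusers,
    imitating a rotating frosted glass, to suppress speckle noise."""
    rng = np.random.default_rng(seed)
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pixel),
                         np.fft.fftfreq(ny, d=pixel))
    # paraxial (Fresnel) transfer function for the defocus distance dz
    Hz = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
    acc = np.zeros(field.shape)
    for _ in range(n_diffusers):
        # virtual diffuser: unit-modulus random phase screen
        diffuser = np.exp(2j * np.pi * rng.random(field.shape))
        out = np.fft.ifft2(np.fft.fft2(field * diffuser) * Hz)
        acc += np.abs(out) ** 2
    return acc / n_diffusers
```

The averaged intensity then replaces the single speckled off-focus image as the second LMI input.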

FIG. 6.

(a) The sample under planar illumination can only be observed within a small view angle. (b) The view angle can be remarkably increased by placing a diffuser in front of the sample. (c) Computed defocus image with one diffuser; the related perspective-shifted animation is shown in Video 4. (d) Computed defocus image averaged over 1000 diffusers; the perspective-shifted animation of the ant specimen is shown in Video 5. (Multimedia view) [URL: http://dx.doi.org/10.1063/1.4897380.4] (Multimedia view) [URL: http://dx.doi.org/10.1063/1.4897380.5]


PIE-based tomography and the multi-slice PIE algorithm can generate quantitative 3D images of the specimen studied, but with quite complicated, time-consuming computation. In many cases, however, merely a glimpse of the hierarchical structure is required, and a quick and simple 3D view of the object is sufficient. The recently developed LMI technique provides a valuable approach for the PIE algorithm to realize 3D perspective views of the observed sample. Since the second, off-focus image needed by LMI can be computed directly from the common PIE reconstruction with the Fresnel diffraction formula, 3D animations of the sample can be realized for PIE imaging without any change to the standard experimental procedure.

The view angle of an animation generated directly from the PIE reconstruction, which is equivalent to the transmitted field under planar illumination, is quite limited; it can be remarkably increased by using a dynamic random-illumination approach in the computation to scatter the illumination into countless directions. The feasibility of the proposed method is demonstrated with numerical simulations and visible-light experiments. Many applications can be found for the method in vividly showing 3D structure for CDI with X-rays and electron beams, where only 2D images are obtained and the stereoscopic structure of the sample is often lost.

This research is supported by the One Hundred Persons Project of Chinese Academy of Science.

1. R. W. Gerchberg and W. O. Saxton, Optik 35, 237 (1972).
2. J. R. Fienup, Opt. Lett. 3, 27 (1978).
3. J. R. Fienup, Appl. Opt. 21(15), 2758–2769 (1982).
4. J. R. Fienup and C. C. Wackerman, J. Opt. Soc. Am. A 3(11), 1897–1907 (1986).
5. H. M. L. Faulkner and J. M. Rodenburg, Phys. Rev. Lett. 93(2), 023903 (2004).
6. J. M. Rodenburg and H. M. L. Faulkner, Appl. Phys. Lett. 85(20), 4795–4797 (2004).
7. A. M. Maiden and J. M. Rodenburg, Ultramicroscopy 109, 1256–1262 (2009).
8. M. Dierolf, A. Menzel, P. Thibault, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, Nature 467(7314), 436–439 (2010).
9. I. Peterson, B. Abbey, C. T. Putkunz, D. J. Vine, G. A. van Riessen, G. A. Cadenazzi, E. Balaur, R. Ryan, H. M. Quiney, I. McNulty, A. G. Peele, and K. A. Nugent, Opt. Express 20(22), 24678–24685 (2012).
10. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, J. Opt. Soc. Am. A 29(8), 1606–1614 (2012).
11. A. Orth and K. B. Crozier, Opt. Lett. 38(15), 2666–2668 (2013).
12. C. Perry, "Seeing depth through a single lens" (Cambridge, MA, August 5, 2013), http://www.seas.harvard.edu/news/2013/08/seeing-depth-through-single-lens.
13. T. Wilson (Academic Press, London, 1990), pp. 1–64.
14. J. Ščučka, ETH, Swiss Federal Institute of Technology, Institute of Geodesy and Photogrammetry (2003).
15. H. W. Schreier, D. Garcia, and M. A. Sutton, Experimental Mechanics 44(3), 278–288 (2004).
16. J. C. H. Spence, U. Weierstall, and M. Howells, Ultramicroscopy 101(2–4), 149–152 (2004).
17. B. Abbey, K. A. Nugent, G. J. Williams, J. N. Clark, A. G. Peele, M. A. Pfeifer, M. D. de Jonge, and I. McNulty, Nat. Phys. 4(5), 394–398 (2008).