Non-scanning, single-shot 3D integral microscopy with optical sectioning is presented. The method is based on the combination of Fourier-mode integral microscopy with a 3D deconvolution technique. Specifically, the refocused volume provided by a regular back-projection algorithm is 3D deconvolved with a synthetic 3D impulse response function that takes into account the number and positions of the elemental images. This hybrid technique provides a stack of true-color depth-refocused images with a significant gain in optical sectioning. The stack can be used, among other applications, to inspect the interior of thick microscopic specimens, to calculate collections of perspective views with fine angular resolution and extended full parallax, and to display 3D images on an integral monitor. The method presented here is validated with both simulated and experimental data.
Integral imaging (InI) has proven to be a powerful technique for the display and reconstruction of the information contained in 3D scenes. Based on a principle proposed by Lippmann1 in 1908, InI has been increasingly used for many different applications, from standard photography2,3 and 3D display4–9 to biomedical research.10,11 InI is based on the capture, either sequentially or in a single shot, of different perspectives of a 3D scene. The combination of several views in one picture breaks with the convention of photography, in which a 3D scene is recorded on a 2D sensor in such a way that all the angular information is lost. There are mainly two approaches for capturing an integral image: using a microlens array (MLA) or using a set of cameras placed at different positions. The latter can also be implemented with a single camera that sequentially captures the different perspectives of the scene.12 In both cases, the raw data are a collection of 2D images, known as elemental images (EIs). This information can be used to perform a plane-by-plane refocusing of the scene.13–15 Several approaches exist for this task; the most common one consists of properly shifting and summing the EIs.16
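As a reference for what follows, shift-and-sum refocusing16 admits a very compact implementation: each EI is shifted in proportion to its lens index and the results are averaged, with one shift value per depth plane. The following minimal Python sketch illustrates the idea (the dict-based interface and the pixel-wise shift convention are illustrative assumptions, not part of the original method description):

```python
import numpy as np

def shift_and_sum(eis, pix):
    """Conventional refocusing: shift each elemental image in proportion
    to its lens index and average. Each value of `pix` (pixels per index
    step) selects one refocused depth plane."""
    acc = None
    for (mx, my), ei in eis.items():  # eis maps lens index (mx, my) -> 2D image
        shifted = np.roll(ei, shift=(int(my * pix), int(mx * pix)), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(eis)
```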
The ability to carry out this depth reconstruction has been widely exploited in macroscopic imaging and, to a lesser extent, in microscopy.17–20 One limitation of the technique, which we address in this paper, is the lack of optical sectioning in the images provided by standard reconstruction methods. In the past, optical sectioning has been achieved by means of computational methods for coherent imaging in optical scanning holography,21–26 including the use of a Wiener-like filter for removing the out-of-focus information. In this manuscript, we take advantage of the information provided by Fourier integral microscopy (FIMic)27 to describe a computational method that provides optical sectioning. To this aim, we first consider a recent reconstruction method that generates a set of 2D periodic impulse responses, which are sequentially used to 2D deconvolve the integral image by means of a Wiener-like filter.28 In this paper, we extend that concept to create a 3D impulse response that, combined with the acquisition of multi-perspective images in Fourier integral microscopy, generates a 3D real-color reconstruction of the sample from a single shot. The approach is based on a two-step post-processing algorithm: first, a conventional integral-image refocusing algorithm is applied to the raw data to generate a 3D stack; second, this stack is 3D deconvolved with a synthetic 3D impulse response that accounts for the number and positions of the elemental images.
Let us consider a FIMic.29 In this setup, different perspectives, or elemental images, of a microscopic sample are acquired directly in a single shot. A scheme of this microscope is shown in Fig. 1.
Scheme of a FIMic. A MLA is placed at the aperture stop of the microscope objective (MO) to capture directly a collection of perspective views of a 3D sample.
If we assume that the system is set in such a way that the diffraction Airy disk in each elemental image is equal to or smaller than the pixel size, we can neglect the influence of diffraction on the structure of the captured elemental images. In that case, the irradiance distribution captured by the sensor can be expressed as28

$$I(\mathbf{x}) = O\!\left(\frac{\mathbf{x}}{M}\right) \otimes_2 h(\mathbf{x}), \tag{1}$$

with ⊗2 being the 2D convolution operation, O the irradiance distribution of the object, and M the lateral magnification provided by the microscope. Considering a squared lens matrix, the impulse response function (IRF) of the whole system is given by

$$h(\mathbf{x}) = \sum_{\mathbf{m}} \delta(\mathbf{x} - p\,\mathbf{m}), \tag{2}$$

where δ refers to the Dirac delta function, p is the period of the IRF, and the vector m = (mx, my) denotes the microlens index in both transverse directions. This vector is measured from the center of the central lens to the center of each lens, so m can take negative values. Note that in Eq. (1) we assume an object of compact support that is captured by all the lenses.
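As an illustration, the delta comb of Eq. (2) can be synthesized numerically as a grid of single-pixel impulses. The following minimal Python sketch builds such an IRF for a square lens matrix (the array size, period, and function name are illustrative choices):

```python
import numpy as np

def irf_2d(shape, period, n_lenses=3):
    """Synthetic 2D IRF of Eq. (2): a comb of unit impulses (one pixel
    each) with period `period` (in pixels), centered in the array.
    `n_lenses` is the number of lenses per transverse direction."""
    h = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    half = n_lenses // 2
    for my in range(-half, half + 1):      # lens index m = (mx, my),
        for mx in range(-half, half + 1):  # measured from the central lens
            h[cy + my * period, cx + mx * period] = 1.0
    return h

# Example: a 3 x 3 comb of deltas with a 96-pixel period
h = irf_2d((512, 512), period=96)
```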
A plane-by-plane reconstruction can be performed with this setup by considering the shape of the impulse response function: the integral image is deconvolved with a set of computed IRFs of different periods, each period corresponding to a different depth plane. This type of reconstruction is completely equivalent to the conventional method, in which the elemental images are shifted and summed in order to back-propagate the light field. However, it provides a significantly different framework for understanding the InI reconstruction as an inverse problem.
We can make use of this concept to propose a 3D reconstruction method. The plane-by-plane reconstruction from the EIs can be considered as a compilation of 2D refocused images into a 3D stack and can be expressed as

$$R(\mathbf{x}, q) = \sum_{q'=1}^{Q} O_{q'}(\mathbf{x}) \otimes_2 h_{q,q'}(\mathbf{x}), \qquad q = 1, \ldots, Q, \tag{3}$$

with Q being the number of reconstructed planes, Oq′ the object distribution at plane q′, and hq,q′ the 2D impulse response that carries the contribution of plane q′ to the reconstruction of plane q.28 As can be seen from Eq. (3), every reconstructed plane contains the contribution of the rest of the defocused planes.
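Following Ref. 28, the stack of Eq. (3) can be built by repeating a 2D Wiener-like deconvolution, with one IRF period per depth plane. A minimal sketch, reusing the irf_2d helper from the previous listing (the Wiener parameter value and the list of periods are illustrative assumptions):

```python
import numpy as np

def wiener_deconv_2d(img, h, wpar=1e-2):
    """2D Wiener-like deconvolution of `img` by the kernel `h`."""
    H = np.fft.fft2(np.fft.ifftshift(h))  # kernel spectrum, origin at [0, 0]
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + wpar)))

def refocus_stack(integral_image, periods, n_lenses=3, wpar=1e-2):
    """3D stack of Eq. (3): one 2D deconvolution per plane, each using
    an IRF whose period p_q encodes the reconstruction depth."""
    return np.stack([wiener_deconv_2d(integral_image,
                                      irf_2d(integral_image.shape, p, n_lenses),
                                      wpar)
                     for p in periods])  # shape: (Q, Ny, Nx)
```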
The 3D stack can be calculated by any back-propagation algorithm. However, for computational optimization, it is worth using the 2D deconvolution method, since the IRF functions can be stored in memory. Then, the 3D deconvolution between the 3D stack and the computed 3D IRF is calculated. This can be done by several methods but, for the sake of simplicity, we use a Wiener-like filter for the 3D deconvolution,

$$\widetilde{R}_{\mathrm{dec}}(\mathbf{u}, w) = \frac{\widetilde{R}(\mathbf{u}, w)\;\widetilde{h}^{\,*}_{3\mathrm{D}}(\mathbf{u}, w)}{\bigl|\widetilde{h}_{3\mathrm{D}}(\mathbf{u}, w)\bigr|^{2} + \Phi}, \tag{4}$$

where the symbol ∼ denotes the 3D Fourier transform operation, (u, w) are the 3D spatial-frequency coordinates, and Φ refers to the Wiener parameter.30 Note that the result of this operation strongly depends on the value selected for the Wiener parameter. For optimal results, this value must be tuned to the noise level of the capture, since it plays the role of a noise-to-signal power ratio.
By taking the 3D inverse Fourier transform of Eq. (4), we recover the 3D deconvolved integral-imaging reconstruction,

$$R_{\mathrm{dec}}(\mathbf{x}, z) = \mathcal{FT}^{-1}_{3\mathrm{D}}\bigl\{\widetilde{R}_{\mathrm{dec}}(\mathbf{u}, w)\bigr\}, \tag{5}$$

where $\mathcal{FT}^{-1}_{3\mathrm{D}}$ denotes the 3D inverse Fourier transform operation.
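In practice, Eqs. (4) and (5) amount to three FFTs and a pixel-wise quotient. A minimal sketch of the 3D Wiener-like deconvolution (the function name and default Wiener parameter are illustrative; for RGB data the filter would be applied to each color channel separately):

```python
import numpy as np

def wiener_deconv_3d(stack, h3d, wpar=1e-2):
    """3D Wiener-like deconvolution, Eqs. (4)-(5): `stack` is the refocused
    volume R of shape (Q, Ny, Nx) and `h3d` the synthetic 3D IRF of the
    same shape."""
    H = np.fft.fftn(np.fft.ifftshift(h3d))            # 3D spectrum of the IRF
    R = np.fft.fftn(stack)                            # 3D spectrum of the stack
    R_dec = R * np.conj(H) / (np.abs(H) ** 2 + wpar)  # Eq. (4)
    return np.real(np.fft.ifftn(R_dec))               # Eq. (5)
```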
To test the 3D reconstruction method, we first simulated a two-object 3D scene consisting of a red circle placed in front of a green square, with the square partially occluded by the circle. To illustrate the concept graphically, a scheme of the proposed reconstruction method is shown in Fig. 2. In this figure, a cross-section of the reconstruction space is shown along a transverse direction in which the capture was done with three lenses (note that the capture scheme for this reconstruction is the same as the one illustrated in Fig. 1). As shown, the IRF in that plane can be understood as three lines crossing at the center of the reconstruction space. The simulated integral image, composed of 3 × 3 EIs, is shown in Fig. 3(a). The 2D deconvolution algorithm28 was applied to the EI matrix to generate a 3D stack. The synthetic impulse responses used for creating the stack were stored in memory in order to build the 3D IRF of the reconstructed space. Both stacks, the one obtained by the conventional back-propagation reconstruction and the one resulting from the 3D deconvolution, were rendered as a 3D projection of the intensity; a standard tool of the ImageJ software was used for this task. In Figs. 3(b) and 3(c) and Figs. 3(d) and 3(e), we show two views of the renders obtained from the 3D stack and from the 3D deconvolution method, respectively. As can be seen, in the 3D deconvolved space most of the light coming from out-of-focus planes is removed, while the light coming from the object is preserved.
(top) Scheme of a cross-section of the 3D reconstruction space in a transverse direction in which three lenses captured the views of the scene. (bottom) Representation of the axial IRF corresponding to the cross-sectional plane.
(a) Simulation of an EI matrix in which a red circle and a green square are separated in depth. (b) and (c) Orthogonal views of the conventional reconstruction used in integral imaging systems. (d) and (e) Orthogonal views reconstructed by using the proposed method.
Note that the method can be applied to integral images with more than 3 × 3 elemental images. However, even one of the simplest cases, a 3 × 3 EI matrix, provides a 3D reconstruction that optimizes the trade-off between reconstruction quality, computer memory, and computing time.
To validate our method with experimental data, we built a Fourier-plane integral microscope27 following the scheme of Fig. 1, in which a lens array is placed at the Fourier plane. We used an infinity-corrected 20×, NA = 0.4 microscope objective, an MLA composed of lenses with 6.48 mm focal length and 1 mm pitch, and a relay system with 2× magnification to image the aperture stop of the microscope objective onto the lens matrix. Since the lens matrix divides the aperture stop into three subapertures along each transverse direction, the effective NA of the elemental images is reduced by a factor of 3. In turn, the depth of field is approximately three times larger than that of the microscope objective, which is convenient for thick samples and maximizes the number of planes that can be reconstructed. Finally, we used a color CMOS camera with 1260 × 980 pixels of 6.9 μm size.
With this setup, we captured a bright-field image (without sample) as a calibration image. The reconstruction algorithm requires this image to detect the centers of the lenses and thus perform a proper reconstruction. Since the lenses were arranged in a hexagonal geometry, we computed the impulse response function by defining the unitary vectors of each lens with respect to the central lens, which is equivalent to measuring the direction of the vector m in Eq. (2). The IRF is then calculated by using these vectors to build the lines that pass through the center of the reconstruction space and that uniquely define the propagation within it (see Fig. 4, Multimedia view).
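To give the flavor of this construction, the sketch below rasterizes the 3D IRF as straight lines through the center of the reconstruction volume, one line per calibrated lens center (all names, the example offsets, and the pixels-per-plane slope normalization are illustrative assumptions; the true scaling follows from the calibration of the setup):

```python
import numpy as np

def irf_3d_from_centers(centers, shape):
    """Build the 3D IRF from lens-center offsets measured in the
    calibration image. `centers` lists the (dy, dx) offset of each lens
    with respect to the central one; `shape` is (Q, Ny, Nx). Each lens
    yields a line through the center of the reconstruction space whose
    transverse slope is proportional to its offset."""
    Q, Ny, Nx = shape
    h = np.zeros(shape)
    z = np.arange(Q) - Q // 2  # centered axial index
    for dy, dx in centers:
        ys = np.round(Ny // 2 + z * dy).astype(int)
        xs = np.round(Nx // 2 + z * dx).astype(int)
        ok = (ys >= 0) & (ys < Ny) & (xs >= 0) & (xs < Nx)
        h[np.arange(Q)[ok], ys[ok], xs[ok]] = 1.0  # one impulse per plane
    return h

# Example: central lens plus six hexagonal neighbors (offsets in pixels/plane)
centers = [(0, 0), (0, 1), (0, -1),
           (0.87, 0.5), (0.87, -0.5), (-0.87, 0.5), (-0.87, -0.5)]
h3d = irf_3d_from_centers(centers, shape=(64, 256, 256))
```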
3D IRF computed from the integral image. (a) X-Y projection and (b) X-Z projection of the IRF. Multimedia views: https://doi.org/10.1063/1.5049755.1 ; https://doi.org/10.1063/1.5049755.2
After that, we imaged a microscopic sample consisting of fluorescently dyed cotton fibers. In Fig. 5, we show the collection of EIs captured with our Fourier integral microscope. From the perspective views registered on the camera sensor, we obtained different focal planes of the 3D sample by applying the conventional reconstruction algorithm; the reconstructions are shown in Fig. 6(a). In Fig. 6(b), we show the same depth planes after applying our proposed method. Note that, in this case, the out-of-focus planes do not contribute to the final reconstructed image (Multimedia views).
Raw data of a fluorescently dyed cotton fiber sample obtained with the experimental setup.
Different focused planes obtained by (a) the conventional reconstruction method and (b) the proposed method. Multimedia views: https://doi.org/10.1063/1.5049755.3 ; https://doi.org/10.1063/1.5049755.4
This optical sectioning capability can also be visualized through the 3D projection of the intensity, as shown in Fig. 7(a) for the conventional reconstruction and in Fig. 7(b) for the proposed 3D deconvolution algorithm (Multimedia views).
(a) 3D render of the conventional reconstruction stack. (b) Reconstruction provided by the proposed method. Multimedia views: https://doi.org/10.1063/1.5049755.5 ; https://doi.org/10.1063/1.5049755.6
Finally, we performed a second experiment with a high-resolution negative USAF test target. In Fig. 8(a), we show the target reconstructed with the conventional algorithm; the Y-Z projection of the reconstruction is shown in the Multimedia view. Similarly, Fig. 8(b) shows the orthogonal views obtained with our 3D deconvolution method (see Multimedia view).
Orthogonal views of a high-resolution test target reconstructed with (a) the conventional method and (b) the 3D deconvolution algorithm. Multimedia views: https://doi.org/10.1063/1.5049755.7 ; https://doi.org/10.1063/1.5049755.8
Summarizing, non-scanning, single-shot 3D integral microscopy with optical sectioning has been presented. The method is based on a 3D deconvolution between the impulse response function of the 3D imaging system and the image volume reconstructed by any regular back-projection method. To produce the impulse response function, the only needed input is the relative positions of the elemental images over the sensor area. With this 3D position-dependent impulse response, a conventional Wiener-like filter performs the 3D deconvolution of the low-contrast image volume produced by a regular back-projection method. The result is a high-contrast 3D volume of true-color stacked images in which the light from out-of-focus planes is removed. Because of its non-scanning, single-shot nature, this approach has potential applications in video-rate 3D reconstruction of microscopic samples.
This research was funded by the Spanish Ministry of Economy and Competitiveness (DPI2015-66458-C2-1R), by the European Regional Development Fund, and by the GVA, Spain (PROMETEOII/2014/072). J. Garcia-Sucerquia acknowledges the University of Valencia for a Visiting Professor fellowship.