Single-objective scanning light sheet (SOLS) imaging has fueled major advances in volumetric bioimaging because it supports low-phototoxicity, high-resolution imaging over extended periods. The remote imaging unit of a SOLS system departs from the conventional epifluorescence detection scheme (a single tube lens). In this paper, we propose a technique, computational SOLS (cSOLS), that achieves light sheet imaging without the remote imaging unit. Using a single microlens array after the tube lens (lightfield imaging), cSOLS is immediately compatible with conventional epifluorescence detection. The core of cSOLS is a Fast Optical Ray (FOR) model. FOR generates a 3D imaging volume (40 × 40 × 14 µm³) from 2D lightfield images taken under SOLS illumination within 0.5 s on a standard central processing unit (CPU) without multicore parallel processing. Compared with traditional lightfield retrieval approaches, FOR reassigns fluorescence photons and removes out-of-focus light to improve optical sectioning by a factor of 2, thereby achieving a spatial resolution of 1.59 × 1.92 × 1.39 µm³. cSOLS with FOR can be tuned over a range of oblique illumination angles and directions and, therefore, paves the way for next-generation SOLS imaging. cSOLS marks an important and exciting development of SOLS imaging with computational imaging capabilities.

Single-objective scanning light sheet (SOLS) microscopy using oblique plane (OP) illumination has pushed the limits of image-based biological studies, from quantifying single-molecule dynamics in living cells1 to recording calcium signaling in neuronal circuits of living mice.2 All existing SOLS systems require costly and complex remote imaging units comprising one or two complementary objective lenses (secondary and tertiary objective lenses)3 to achieve diffraction-limited imaging and optical sectioning. Additional elements, such as a tailor-made objective lens, a tilted mirror, or a diffraction grating, are required to image an oblique 3D sample slice effectively on a 2D imaging sensor.4–6 These remote focusing units are incompatible with conventional epifluorescence detection paths, which use a single tube lens and a 2D camera sensor.7 As such, SOLS systems [e.g., eSPIM (epi-illumination selective-plane illumination microscopy) and SCAPE (swept confocally-aligned planar excitation microscopy)] are limited to specialized microscopy setups.1,2

Lightfield8 is a special class of single-shot volumetric fluorescence imaging that is directly compatible with standard epifluorescence imaging. Lightfield leverages computational imaging to perform 3D depth retrieval from a single 2D lightfield image. Because of this, lightfield imaging has been successfully integrated into conventional epifluorescence detection.8,9 Lightfield imaging systems have recently reached unprecedented recording speeds under ultra-low-light10 and highly scattering conditions,9,11 expanding the imaging field of view into the millimeter range at single-cell resolution.12 A wide variety of computational lightfield tools exists, ranging from ray optics interpolation8 and deconvolution iterations13 to deep learning reconstruction.14 All lightfield computational tools are designed to identify the 3D information (x, y, z) of an object based on the angular disparity (r, θ) encoded within the lightfield images generated by the microlens array (MLA).8

In conventional epifluorescence detection, 3D sample imaging requires sequential axial scanning. In a SOLS system, the functional role of the remote imaging unit is to (1) image an oblique 3D sample slice without sequential axial scanning and (2) fulfill the sine and Herschel conditions for spherical aberration-free imaging over depth.1 Fluorescence emitted from an oblique 3D sample slice arrives at varying angles along the inclined axial plane.3 This means that OP lightfield images alone already carry the angular disparities of the fluorescence points, so axial fluorescence points can be retrieved to identify oblique 3D sample slices without a remote imaging unit or sequential axial scanning. In SOLS, the remote imaging unit fulfills the sine and Herschel conditions, whereas a computational SOLS that uses a microlens element may no longer satisfy them. As a result, a cSOLS may operate with a reduced axial imaging range. On the other hand, cSOLS opens up new computational capabilities, such as computational adaptive optics to achieve isotropic resolution,10 that are not present in any SOLS system. In principle, a SOLS system could require only a lightfield imaging unit (a single microlens array and a 2D imaging camera) that is fully compatible with conventional epifluorescence detection and replaces the bulky remote imaging unit in SOLS.

However, OP lightfield images present a computational challenge for current lightfield depth retrieval tools. Figure 1 illustrates this challenge. Figure 1(a) shows single-objective-lens OP illumination (blue line) over a single 1 µm fluorescence microsphere (red circle) at different angles (α1 = 0°, α2 = 30°, and α3 = 60°). Figure 1(b) shows the corresponding lightfield images. Using lightfield depth retrieval tools,8 we retrieved the point spread function (PSF) from each of the lightfield images. Figure 1(c) shows that the retrieved PSFs from all three distinctly different angles are almost identical. This result shows that the effective imaging PSF (PSFimaging) is incorrectly represented, because PSFimaging with an oblique beam is the product of the OP illumination PSF (ObliquePSFillumination) and the detection PSF (PSFdetection), as shown in the following equation:15

PSFimaging = ObliquePSFillumination × PSFdetection.     (1)
FIG. 1. (a) An illustration showing lightfield imaging of a sub-resolution fluorescence sample under OP illumination at different angles (α1 = 0°, α2 = 30°, and α3 = 60°). (b) Lightfield images captured for the three illumination conditions. (c) Transverse (XY) and axial (XZ) PSFimaging retrieved from lightfield images using a lightfield depth retrieval tool. Gray ellipses show the ideal PSFimaging corresponding to OP illumination at different angles. Scale bar: 2 µm.

Given the three different illumination angles, the depth-retrieved PSFimaging in Fig. 1(c) should present a skewed intensity profile2 (gray ellipse). Because lightfield detection yields a lower spatial resolution than the diffraction-limited resolution of the objective lens,13 we anticipate that PSFdetection has a wider spatial extent than the thin, confined ObliquePSFillumination. This means that ObliquePSFillumination imposes significant spatial modulation on PSFimaging in the axial direction through Eq. (1). An incorrect PSFimaging would, therefore, lead to inaccurate depth retrieval and poor Richardson–Lucy deconvolution.16 It is, therefore, necessary to restore PSFimaging. Similar to photon reassignment,17 we propose that this can be achieved computationally by reassigning the fluorescence signal onto the OP illumination plane.
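To make the role of Eq. (1) concrete, the following minimal NumPy sketch multiplies a broad detection PSF by a thin illumination sheet tilted at 60°; the grid, widths, and Gaussian forms are illustrative assumptions, not the calibrated PSFs of our system.

```python
import numpy as np

# Illustrative grid in the x-z plane (micrometres); placeholder values only.
x = np.linspace(-3, 3, 121)
z = np.linspace(-3, 3, 121)
X, Z = np.meshgrid(x, z, indexing="ij")

# Broad detection PSF (lightfield detection blurs more axially than laterally).
sigma_x, sigma_z = 0.8, 1.5
psf_det = np.exp(-(X**2) / (2 * sigma_x**2) - (Z**2) / (2 * sigma_z**2))

# Thin illumination sheet tilted by alpha from the focal plane.
alpha = np.deg2rad(60.0)
sheet_thickness = 0.5                                  # assumed 1/e half-thickness (um)
dist_to_sheet = Z * np.cos(alpha) - X * np.sin(alpha)  # signed distance to the tilted plane
psf_illum = np.exp(-(dist_to_sheet**2) / (2 * sheet_thickness**2))

# Eq. (1): the effective imaging PSF is the product of illumination and detection PSFs,
# so the product is skewed along the oblique sheet rather than the optical axis.
psf_imaging = psf_illum * psf_det
print(psf_imaging.shape, psf_imaging.max())
```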

In this Letter, we propose a computational image reassignment model for OP illuminated lightfield systems called the Fast Optical Ray (FOR) model. To distinguish the FOR model from lightfield depth retrieval methods,8,13,14 we term our computational approach lightfield depth mapping. FOR model lightfield depth mapping is a ray optics algorithm that rapidly identifies a 3D spatial map of any OP illumination plane. The fluorescence signal from each OP lightfield image is then computationally mapped onto its respective 3D spatial positions (x, y, and z) at two planes per second. The OP illumination is generated using a scanning oblique plane illumination (SOPi) setup that offers flexibility in adjusting the OP illumination plane.18 For lightfield detection, we constructed an unfocused lightfield detection scheme in which a microlens array (MLA) is placed at the image plane of a conventional epifluorescence microscope. To ensure lightfield imaging without overlapping lenslet images, we underfilled each microlens by keeping the image-side NA smaller than the NA of the MLA (Thorlabs WFS150M-5C). With the 1.3 NA objective lens (Olympus UPLFLN100XOI) used in our system, the maximum oblique angle achieved is ∼60°. By limiting the beam diameter with an iris, the OP illumination has an experimentally determined thickness of ∼1 µm at the beam waist. Details of the system can be found in the supplementary material, Sec. S1.
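As a simple illustration of the underfill condition described above, the sketch below compares the image-side NA with a small-angle estimate of the lenslet NA; the magnification, MLA pitch, and MLA focal length used here are placeholder values, not the calibrated parameters of our setup.

```python
# Illustrative check of the microlens underfill condition: the image-side NA of the
# microscope must be smaller than the NA of each lenslet so that neighbouring
# lenslet images do not overlap on the sensor. Values below are placeholders.
na_objective = 1.3            # objective NA (Olympus UPLFLN100XOI)
magnification = 100.0         # assumed total magnification at the MLA plane
mla_pitch = 150e-6            # assumed lenslet pitch (m)
mla_focal_length = 5.0e-3     # assumed lenslet focal length (m)

na_image_side = na_objective / magnification       # image-side NA after magnification
na_mla = (mla_pitch / 2) / mla_focal_length        # small-angle estimate of lenslet NA

print(f"image-side NA = {na_image_side:.4f}, lenslet NA = {na_mla:.4f}")
print("underfilled:", na_image_side < na_mla)
```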

Before we introduce the experimental results, we shall first explain the theoretical outline of the FOR model illustrated in Fig. 2(a). The FOR model uses ray transfer matrix analysis, where individual rays of a voxel V are mapped from the object space to the sensor plane S as shown in the following equation:

SN = [P][LMLA][P][Ltube][P][Lobj][P] VN,     (2)

where the voxel’s spatial coordinates are x, y, z; rays emitted from the voxel are indexed by N; a ray’s coordinates on the aperture of the objective lens are x′, y′; a ray’s coordinates on the sensor plane are given by S; the free space transfer term is [P]; and the thin lens transfer terms for the objective lens, tube lens, and MLA are [Lobj], [Ltube], and [LMLA], respectively. The FOR model performs 4 × 4 ray transfer analysis to model the XZ and YZ propagation of rays from a voxel V to the sensor plane S. We model the XZ and YZ propagation separately using two sets of 2 × 2 ray transfer matrices and then combine the results to express a ray’s 2D divergence angles from the optical axis. For simplicity, Fig. 2(a) illustrates the XZ propagation paths and ray transfer terms. V is formed by discrete sampling of the object space with lateral and axial sampling factors δxy and δz, which together define the smallest voxel in the object space that the FOR model can analyze. Using the focal point of the objective lens as the origin, a dataset of voxels V with spatial coordinates x, y, z (multiples of δxy and δz) relative to the focal point is generated. By evenly distributing the N rays emitted from a voxel V over the aperture of the objective lens at coordinates x′, y′, where the rays enter the imaging system at different angles, the FOR model samples the angular information of the signal from the voxel. By indexing each unique ray with N, SN represents the lateral coordinates of the ray VN on the sensor plane.
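To illustrate the ray transfer cascade behind Eq. (2), the following sketch propagates a single XZ ray through paraxial free-space and thin-lens matrices arranged as objective, tube lens, and MLA; the focal lengths and spacings are illustrative assumptions and do not correspond to the calibrated layout in Sec. S1.

```python
import numpy as np

def free_space(d):
    """Paraxial free-space propagation over distance d (2 x 2 ray transfer matrix)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Paraxial thin lens of focal length f (2 x 2 ray transfer matrix)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Placeholder focal lengths (m): objective, tube lens, and MLA. The voxel sits at the
# objective focal plane, the MLA at the tube lens image plane, and the sensor at the
# MLA focal plane; these spacings are illustrative, not the calibrated layout.
f_obj, f_tube, f_mla = 1.8e-3, 180e-3, 5.0e-3

# Cascade applied right to left, matching the order of propagation in Eq. (2):
# [P][LMLA][P][Ltube][P][Lobj][P]
system = (free_space(f_mla) @ thin_lens(f_mla)
          @ free_space(f_tube) @ thin_lens(f_tube)
          @ free_space(f_obj + f_tube) @ thin_lens(f_obj)
          @ free_space(f_obj))

# One ray leaving a voxel 0.5 um off axis with a 5 mrad divergence angle.
ray_in = np.array([0.5e-6, 5e-3])   # [height (m), angle (rad)]
ray_out = system @ ray_in
print("lateral position on the sensor (m):", ray_out[0])
```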

FIG. 2. (a) A schematic of the FOR model showing a voxel Vx,y,z with four light rays V1–V4 traced from the object space to coordinates S1–S4 on the sensor plane and then mapped to pixels LF1–LF4 of a lightfield image. K1 and K2 denote OP illumination slices. (b) (i) A schematic showing how FOR model depth mapping extracts excited voxels from OP illumination slices K1–K3 to form (ii) OP images P1–P3. (iii) Reassignment is applied to convert the OP images to a Z-stack containing slices Z1–Z4.

In Fig. 2(a), we show an OP (blue line, denoted by K2) that is detected on the imaging sensor after the MLA. Next, we determine the pixels of the lightfield image onto which the detected rays fall. The following equation describes how the sensor plane S is mapped to the lightfield image LF,

(l, m) = (Sx/U, Sy/U),     (3)

where the sensor’s pixel size is U and the lightfield image pixel coordinates are l, m. For illustration purposes, Fig. 2(a) shows four light rays V1 to V4 that are focused through the MLA onto the camera sensor at S1 to S4 and recorded as LF1 to LF4. We then use Eq. (4) to calculate the expected intensity of the respective voxel,

Ix,y,z = (1/N) ∑N LF(SN).     (4)

The detected intensity I is obtained by summing and averaging the contributions of all rays V1 to V4. Since a camera sensor samples an image discretely, bilinear interpolation is applied at this step to construct each voxel’s intensity. Figure 2(b) illustrates how the FOR model maps OP illumination in object space to form a volumetric image. Figure 2(b)(i) shows diagonally tilted voxels across a single volume that are excited by three OP illumination slices (K1, K2, K3). The FOR model identifies the illumination slice, which is then used to determine the corresponding fluorescence intensities (for details, see the supplementary material, Sec. S2). Figure 2(b)(ii) shows that the chosen fluorescence intensities are then rearranged into separate columns of pixels. For a given illumination slice, we plot the respective OP image P, shown as P1, P2, and P3 in Fig. 2(b)(ii). Oblique fluorescence intensities are then mathematically reassigned back to their 3D positions in four different Z-slices, as shown in Fig. 2(b)(iii). The reassignment process is given in the following equation:

Zz(i, j) = Pkz(i, j),     (5)

where Pkz is the set of OP images and Zz is the slice at depth z. Both OP images and Z-slices have pixel coordinates i, j. The index kz, described in the supplementary material, Sec. S2, locates the OP image k from which pixel i, j is extracted.
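The pixel mapping and reassignment steps of Eqs. (3)–(5) can be summarized in a short sketch; the functions below use nearest-pixel lookup in place of the bilinear interpolation described above, and the array shapes and kz lookup table are placeholders rather than the procedure of Sec. S2.

```python
import numpy as np

def voxel_intensity(lightfield, sensor_xy, pixel_size_u):
    """Eqs. (3) and (4): average the N ray samples of one voxel from a lightfield image.

    lightfield   : 2D lightfield image (rows, cols)
    sensor_xy    : (N, 2) array of the voxel's ray coordinates S on the sensor (m)
    pixel_size_u : camera pixel size U (m)
    """
    # Eq. (3): sensor coordinates -> lightfield pixel indices (nearest pixel here;
    # the full FOR model interpolates bilinearly between neighbouring pixels).
    lm = np.round(sensor_xy / pixel_size_u).astype(int)
    l = np.clip(lm[:, 0], 0, lightfield.shape[0] - 1)
    m = np.clip(lm[:, 1], 0, lightfield.shape[1] - 1)
    # Eq. (4): sum and average the N ray samples to estimate the voxel intensity.
    return lightfield[l, m].mean()

def reassign_op_images(op_images, kz_table):
    """Eq. (5): pick, for each depth z and pixel (i, j), the OP image indexed by kz.

    op_images : (K, rows, cols) stack of OP images P
    kz_table  : (Z, rows, cols) integer lookup giving kz for every output voxel
    """
    _, rows, cols = op_images.shape
    i, j = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    z_stack = np.empty(kz_table.shape, dtype=op_images.dtype)
    for z in range(kz_table.shape[0]):
        z_stack[z] = op_images[kz_table[z], i, j]
    return z_stack

# Toy usage with random placeholder data.
rng = np.random.default_rng(0)
lf = rng.random((64, 64))
rays = rng.uniform(0, 64 * 6.5e-6, size=(4, 2))   # 4 rays; 6.5 um pixels assumed
print(voxel_intensity(lf, rays, 6.5e-6))
op = rng.random((5, 32, 32))                      # 5 OP images
kz = rng.integers(0, 5, size=(4, 32, 32))         # hypothetical kz lookup (4 Z-slices)
print(reassign_op_images(op, kz).shape)           # (4, 32, 32)
```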

To fulfill the Nyquist sampling criterion in our discrete FOR model, we set δxy and δz to 0.25 and 0.5 µm, respectively, which are more than four times smaller than the theoretical lateral and axial resolutions of 1.35 and 2.16 µm of our lightfield detection (see details in the supplementary material, Sec. S1). The FOR model with fine lateral and axial sampling allows accurate determination of Sx,y and, therefore, Ix,y,z. Details on the impact of sampling density on FOR model depth mapping can be found in the supplementary material, Sec. S3. The current implementation of the FOR model uses paraxial ray transfer matrix analysis and is, therefore, subject to the propagation inaccuracies associated with ray optics. However, this makes the FOR model highly efficient and adaptable to different optics, including objective lenses and MLAs, because paraxial thin lens modeling does not rely on detailed optical parameters (e.g., refractive index, surface curvature, thickness). To optimize for speed, we precompute the FOR model to generate a dataset of Sx,y for the lightfield detection. This dataset applies to all lightfield images captured for different samples, which allows real-time depth mapping and image reassignment to be performed in MATLAB at 2 OP lightfield images per second. For a 40 × 40 × 14 µm³ volume under 60° OP illumination captured with 50 OP illuminated lightfields, the current CPU-based implementation (AMD Ryzen 7 5800X, 32 GB RAM, non-parallel processing) completes FOR model depth mapping in 25 s. This could be shortened by a factor of 25 with multicore parallel processing, achieving full 3D depth mapping at 1 volume per second. In comparison, for the same imaging volume and number of lightfields, lightfield depth retrieval based on deconvolution iteration13 takes ∼40 min to compute (see the supplementary material, Sec. S6). We anticipate that deep learning-based lightfield methods can achieve multiple volumes per second; however, they require a sophisticated model training and validation process.14 Computational considerations of FOR model generation and real-time depth mapping are summarized in the supplementary material, Sec. S4.
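A minimal sketch of this precomputation strategy is shown below (in Python/NumPy for illustration, whereas our implementation is in MATLAB); trace_rays_to_sensor is a hypothetical stand-in for the ray transfer cascade of Eq. (2), and only the sampling factors and volume size follow the values quoted above.

```python
import numpy as np

def precompute_sensor_map(trace_rays_to_sensor, n_rays=16,
                          delta_xy=0.25e-6, delta_z=0.5e-6,
                          fov_xy=40e-6, depth=14e-6):
    """One-off precomputation of the FOR model sensor coordinates S (illustrative sketch).

    trace_rays_to_sensor(x, y, z, n_rays) is a hypothetical stand-in for the ray
    transfer cascade of Eq. (2); it must return an (n_rays, 2) array of sensor
    coordinates for the voxel at (x, y, z).
    """
    xs = np.arange(-fov_xy / 2, fov_xy / 2, delta_xy)
    ys = np.arange(-fov_xy / 2, fov_xy / 2, delta_xy)
    zs = np.arange(-depth / 2, depth / 2, delta_z)
    # Cache S for every voxel once; every subsequent lightfield image, regardless of
    # sample, reuses this table, so per-image processing reduces to lookups.
    sensor_map = np.empty((xs.size, ys.size, zs.size, n_rays, 2))
    for ix, x in enumerate(xs):
        for iy, y in enumerate(ys):
            for iz, z in enumerate(zs):
                sensor_map[ix, iy, iz] = trace_rays_to_sensor(x, y, z, n_rays)
    return sensor_map
```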

Next, we experimentally validate FOR model depth mapping for an OP illumination angle of 60° and compare it against lightfield depth retrieval based on ray optics.8 We used 1 µm diameter fluorescence microspheres (Polysciences Fluoresbrite YG Microspheres), which are below the resolution limit of the optical system. The supplementary material, Sec. S5, outlines the FOR model parameters used for processing. Figure 3(a) shows that the 1 µm microsphere retrieved without FOR model depth mapping exhibits a significant amount of out-of-focus blur in both the XZ and YZ planes. The same microsphere processed by FOR model depth mapping shows a twofold reduction of the out-of-focus signal in the vertical direction, as shown in Figs. 3(b) and 4. Using FOR model depth mapping, we obtained an effective PSFimaging with vertical FWHMs of 1.39 and 1.37 µm in XZ and YZ, close to the theoretical limit of the system (1.35 µm), with a profile almost identical to the ideal PSFimaging, thereby achieving accurate depth sectioning of the microsphere. In Fig. 3(c), while lightfield depth retrieval and FOR model depth mapping demonstrate comparable XY FWHMs of 1.94 and 1.92 µm in the vertical direction, the effective PSFimaging acquired by FOR model depth mapping shows an asymmetric profile of 1.59 µm in the lateral direction. Noting that the asymmetry is aligned with the OP illumination sample scan direction along the lateral axis, it is possibly an artifact associated with the sample scanning or data interpolation. We anticipate that this artifact can be resolved by scanning the OP illumination at finer steps and by an improved interpolation strategy (e.g., cubic interpolation) during FOR model depth mapping and image reassignment.
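The FWHM values reported here are measured from line profiles through the retrieved microsphere images; the sketch below shows one way such a measurement can be made, using linear interpolation of the half-maximum crossings on a synthetic profile that stands in for the measured data.

```python
import numpy as np

def fwhm(coords, profile):
    """Full width at half maximum of a 1D intensity profile that peaks inside the window."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # Linearly interpolate the half-maximum crossing on each side of the peak.
    x_left = np.interp(half, [profile[left - 1], profile[left]],
                       [coords[left - 1], coords[left]])
    x_right = np.interp(half, [profile[right + 1], profile[right]],
                        [coords[right + 1], coords[right]])
    return x_right - x_left

# Synthetic axial profile (Gaussian with 1.39 um FWHM) standing in for measured data.
z = np.linspace(-5.0, 5.0, 501)
sigma = 1.39 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
print(fwhm(z, np.exp(-z**2 / (2.0 * sigma**2))))   # ~1.39
```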

FIG. 3. (a) XY, XZ, and YZ slices of a 1 µm fluorescent microsphere excited by scanned OP illumination at 60° and retrieved by (i) lightfield depth retrieval and (ii) FOR model depth mapping. Gray ellipses show the ideal PSFimaging. White lines mark the axial FWHM measurements. Scale bar: 1 µm. (b) Bar plots of XZ and YZ vertical FWHM profiles of the 1 µm fluorescent microsphere. (c) Bar plots of XY FWHM profiles (both lateral and vertical) of the 1 µm fluorescent microsphere. Error bars are shown in blue. Data are mean values and standard deviations of five 1 µm fluorescent microspheres.
FIG. 4. 1 µm fluorescent microspheres excited by scanned OP illumination at 60° and retrieved by (left) lightfield depth retrieval and (right) FOR model depth mapping (Multimedia view: https://doi.org/10.1063/5.0091615.1).

To further validate our model on densely packed samples, we imaged a customized laser-written rigid fluorescence microstructure, illustrated in Fig. 5(a) (not drawn to scale). Using multiphoton lithography, we fabricated a microstructure consisting of the letters “A” (red) and “U” (blue) stacked vertically on a glass coverslip and held by supporting structures (green). The 3D fabrication process is described in the supplementary material, Sec. S8. For comparison, we performed deconvolution on images acquired by FOR model depth mapping using the captured PSFimaging and compared the results with a lightfield depth retrieval method based on wave optics deconvolution iteration.13 Parameters for lightfield deconvolution can be found in the supplementary material, Sec. S6. Figure 5(b) shows XY slices containing the letters “A” and “U” at z = −2 µm and z = 2 µm, respectively. This gives a small separation of 4 µm along the z axis between “A” and “U” to test axial sectioning. From Fig. 5(b), both lightfield depth retrieval and FOR model depth mapping appear to show the letters accurately at their designated axial positions, excluding any out-of-focus intensity arising from the other letter located 4 µm away in z. However, upon closer examination of the YZ slice [Fig. 5(c)] across the center of the microstructure [blue dashed plane in Fig. 5(a)], we observed that lightfield depth retrieval [Fig. 5(c)(i)] eliminated only half of the out-of-focus cones, while FOR model depth mapping [Fig. 5(c)(ii)] shows no out-of-focus cones. Figure 5(d) plots the normalized axial intensity across the letter “A” [white lines in Fig. 5(c)] and quantifies the out-of-focus region. We found that lightfield depth retrieval results in an out-of-focus region extending over 8 µm (FWHM) in depth. Conversely, FOR model depth mapping maps the letter “A” accurately to z = −2 µm with an axial profile (FWHM) of 1.9 µm, resulting in a sharp axial fluorescence signal. The results of FOR model depth mapping on densely packed fluorescence microstructures show a significant improvement in depth sectioning over lightfield depth retrieval, as shown in Fig. 6. However, we observed qualitatively in Fig. 5(b) that FOR model depth mapping has inferior lateral resolvability of the microstructure compared with the lightfield depth retrieval method.

FIG. 5. (a) A schematic demonstrating imaging of the lithographic microstructure. Solid blue lines represent scanned OP illumination. Dashed blue lines represent the YZ slice across the center of the microstructure. (b) XY slices containing letters “A” and “U” at z = −2 µm and z = 2 µm were retrieved by (i) lightfield depth retrieval and (ii) FOR model depth mapping. Scale bar: 10 µm. (c) YZ slices across the center of the microstructure. Scale bar: 5 µm. (d) Normalized axial intensity profiles across letter “A.”
FIG. 6. YZ slices across the center of the microstructure retrieved by (left) lightfield depth retrieval and (right) FOR model depth mapping (Multimedia view: https://doi.org/10.1063/5.0091615.2).

In conclusion, we demonstrated a novel computational single-objective scanning light sheet (cSOLS) imaging technique enabled by our FOR model depth mapping lightfield algorithm. With the remote imaging unit removed, FOR model depth mapping allows SOLS imaging to be performed with a conventional epifluorescence detection scheme. cSOLS can immediately be deployed in handheld SOLS19 because of its simplified detection scheme. cSOLS with FOR can also be integrated into a wide range of existing high-throughput imaging systems for high-content screening with the addition of a new OP illumination module. We also confirmed that conventional lightfield depth retrieval methods fail to retrieve OP illuminated images. FOR model depth mapping achieves an effective PSFimaging with axial FWHMs (1.39 and 1.37 µm) that are close to the lateral resolution limit (1.35 µm) of our lightfield detection.8 As shown in Table I, cSOLS with FOR achieves a spatial resolution comparable to existing single-objective light sheet systems of comparable effective NA.20,21 We anticipate that the enhancement of optical sectioning achieved by FOR model depth mapping can immediately benefit high-contrast imaging of live developing zebrafish.16 FOR model depth mapping achieves optical sectioning on densely packed structures that existing lightfield algorithms cannot.22 By adopting MLA-based lightfield detection, SOLS imaging can benefit from technological advances in lightfield computational imaging. cSOLS with FOR can be further tailored to structured OP beams, such as an Airy beam light sheet, for potential large field-of-view (FOV), high-contrast imaging with improved resistance to scattering in biological samples.23 Implementing a two-photon light sheet can also improve depth penetration when imaging thick biological samples. cSOLS with FOR has the potential to adopt computational adaptive optics for aberration correction without a “guide star” or additional wavefront sensing and, in turn, promises imaging deep inside samples.10 The FOR model can also accommodate multidimensional OP illuminated lightfield spectral imaging24 and can computationally resolve multi-directional OP illumination for multi-view light sheet imaging of highly scattering samples.25 A key area of improvement in the FOR model is the remaining 7.5% mismatch between ObliquePSFillumination and PSFdetection due to experimental uncertainty in determining oblique angles, which can be reduced with additional calibration steps.18 Furthermore, current FOR model depth mapping requires prior manual calibration of the illumination profile for effective optical sectioning. This calibration process can be automated by performing wavefront sensing of the illumination at the back focal plane of the objective lens. The current implementation of FOR model depth mapping assumes OP illumination of uniform thickness for simplicity, which is not realistic because a Gaussian beam diverges outside the beam waist. The effect of beam divergence could be addressed in the FOR model by incorporating a variable OP illumination thickness as a next step when adopting a large FOV. Here, we validated cSOLS with FOR to have a spatial resolution of 1.59 × 1.92 × 1.39 µm³ over an imaging depth range of 14 µm, which is smaller than that of traditional SOLS systems that use a remote focusing unit to satisfy the sine condition (supplementary material, Sec. S7). The limited imaging depth results from the focusing properties of the MLA.
Recent advances in liquid crystal MLAs with tunable focusing and numerical aperture can improve the imaging depth of cSOLS.26 cSOLS with FOR also has anisotropic spatial resolution across depth, a factor inherent to lightfield imaging.9,13 This can be overcome by complete-space multiscale lightfield modeling9 and computational adaptive optics10 to achieve FOR cSOLS imaging with aberration correction and isotropic resolution. We anticipate that FOR model depth mapping will adopt these computational methods to extend the imaging depth to 180 µm. Although the spatial resolution of cSOLS with FOR is around half that of high-NA SOLS systems,4 this can be overcome by adopting scanning lightfield techniques10 or Fourier lightfield detection27 to reach ∼300 nm laterally and ∼700 nm axially, comparable to both high-NA SOLS and orthogonal light sheet systems. An important future step for the FOR model is to adopt wave optics-based propagation and deconvolution to minimize propagation inaccuracy, increase the spatial resolution, and increase the signal-to-noise ratio. MLA-based lightfield detection is subject to a trade-off between spatial resolution and depth resolvability, which ultimately limits biological imaging capabilities. This can be alleviated by adopting an MLA and a camera with finer pitch and pixel size. We envision that cSOLS using MLAs with tunable optical properties28 and computational adaptive optics10 will reach subcellular imaging resolution with improved depth resolvability. Compute Unified Device Architecture (CUDA) processing will also accelerate the FOR model depth mapping volume rate to up to 30 volumes per second.8,29 Finally, we expect the FOR model to complement existing lightfield deep-learning methods to reach subcellular imaging resolution with simplified signal processing pipelines at hundreds of volumes per second.14

TABLE I.

Spatial resolution comparison of SOLS approaches.

System | Detection | Objective lenses | Effective detection NA | X (µm) | Y (µm) | Z (µm)
SCAPE 2.021 (configuration 1) | Remote imaging unit | Olympus XLUMPLFLN 20x/1.0NA W; Nikon CFI Plan Apo Lambda 20x/0.75NA; EO HR 20x/0.60NA | 0.23 | 1.47 | 0.86 | 1.96
SCAPE 2.021 (configuration 2) | Remote imaging unit | Olympus XLUMPLFLN 20x/1.0NA W; Olympus UPLSAPO 20x/0.75NA; Nikon Plan Apo HR 50x/0.75NA | 0.35 | 1.21 | 0.60 | 1.55
1P SOPi20 | Remote imaging unit | Olympus LUCPLFLN 20x/0.45NA | 0.34 | 1.30 (axis not specified) | - | -
cSOLS with FOR | Lightfield | Olympus UPLFLN 100x/1.3NA Oil | 0.235 | 1.59 | 1.92 | 1.39

See the supplementary material for detailed methods. Section S1 describes details of the optical system. Section S2 describes the modeling of scanned oblique plane illumination in the FOR model. Section S3 shows the impact of sampling on FOR model depth mapping performance. Section S4 outlines computational consideration of FOR model depth mapping. Section S5 summarizes FOR model depth mapping parameters for experimental data processing. Section S6 describes parameters for lightfield deconvolution. Section S7 shows resolution and imaging depth comparison with SOLS approaches. Section S8 describes the multiphoton lithography fabrication process.

The authors acknowledge the Australian Research Council (Grant Nos. DE160100843, DP190100039, DP200100364, and CE140100011) for financial support.

Woei Ming Lee and Tienan Xu have filed a Provisional Application (AU2022902123).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. B. Yang, X. Chen, Y. Wang, S. Feng, V. Pessino, N. Stuurman, N. H. Cho, K. W. Cheng, S. J. Lord, L. Xu, D. Xie, R. D. Mullins, M. D. Leonetti, and B. Huang, Nat. Methods 16, 501–504 (2019).
2. M. B. Bouchard, V. Voleti, C. S. Mendes, C. Lacefield, W. B. Grueber, R. S. Mann, R. M. Bruno, and E. M. C. Hillman, Nat. Photonics 9, 113–119 (2015).
3. C. Dunsby, Opt. Express 16, 20306–20316 (2008).
4. E. Sapoznik, B.-J. Chang, J. Huh, R. J. Ju, E. V. Azarova, T. Pohlkamp, E. S. Welf, D. Broadbent, A. F. Carisey, S. J. Stehbens, K.-M. Lee, A. Marín, A. B. Hanker, J. C. Schmidt, C. L. Arteaga, B. Yang, Y. Kobayashi, P. R. Tata, R. Kruithoff, K. Doubrovinski, D. P. Shepherd, A. Millett-Sikking, A. G. York, K. M. Dean, and R. P. Fiolka, eLife 9, e57681 (2020).
5. J. Kim, M. Wojcik, Y. Wang, S. Moon, E. A. Zin, N. Marnani, Z. L. Newman, J. G. Flannery, K. Xu, and X. Zhang, Nat. Methods 16, 853–857 (2019).
6. M. Hoffmann and B. Judkewitz, Optica 6, 1166–1170 (2019).
7.
8. M. Levoy, Z. Zhang, and I. McDowall, J. Microsc. 235, 144–162 (2009).
9. Y. Zhang, Z. Lu, J. Wu, X. Lin, D. Jiang, Y. Cai, J. Xie, Y. Wang, T. Zhu, X. Ji, and Q. Dai, Nat. Commun. 12, 6391 (2021).
10. J. Wu, Z. Lu, D. Jiang, Y. Guo, H. Qiao, Y. Zhang, T. Zhu, Y. Cai, X. Zhang, K. Zhanghao, H. Xie, T. Yan, G. Zhang, X. Li, Z. Jiang, X. Lin, L. Fang, B. Zhou, P. Xi, J. Fan, L. Yu, and Q. Dai, Cell 184, 3318–3332.e17 (2021).
11. Y. Zhang, B. Xiong, Y. Zhang, Z. Lu, J. Wu, and Q. Dai, Light: Sci. Appl. 10, 152 (2021).
12. Y. Xue, I. G. Davison, D. A. Boas, and L. Tian, Sci. Adv. 6, eabb7508 (2020).
13. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, Opt. Express 21, 25418–25439 (2013).
14. Z. Wang, L. Zhu, H. Zhang, G. Li, C. Yi, Y. Li, Y. Yang, Y. Ding, M. Zhen, S. Gao, T. K. Hsiai, and P. Fei, Nat. Methods 18, 551–556 (2021).
15. K. Becker, S. Saghafi, M. Pende, I. Sabdyusheva-Litschauer, C. M. Hahn, M. Foroughipour, N. Jährling, and H.-U. Dodt, Sci. Rep. 9, 17625 (2019).
16. S. Madaan, K. Keomanee-Dizon, M. Jones, C. Zhong, A. Nadtochiy, P. Luu, S. E. Fraser, and T. V. Truong, Opt. Lett. 46, 2860–2863 (2021).
17. C. J. R. Sheppard, M. Castello, G. Tortarolo, T. Deguchi, S. V. Koho, G. Vicidomini, and A. Diaspro, J. Opt. Soc. Am. A 37, 154–162 (2020).
18. M. Kumar and Y. Kozorovitskiy, Biomed. Opt. Express 11, 3346–3359 (2020).
19. K. B. Patel, W. Liang, M. J. Casper, V. Voleti, W. Li, A. J. Yagielski, H. T. Zhao, C. Perez Campos, G. S. Lee, J. M. Liu, E. Philipone, A. J. Yoon, K. P. Olive, S. M. Coley, and E. M. C. Hillman, Nat. Biomed. Eng. 6, 569–583 (2022).
20. M. Kumar, S. Kishore, J. Nasenbeny, D. L. McLean, and Y. Kozorovitskiy, Opt. Express 26, 13027–13041 (2018).
21. V. Voleti, K. B. Patel, W. Li, C. Perez Campos, S. Bharadwaj, H. Yu, C. Ford, M. J. Casper, R. W. Yan, W. Liang, C. Wen, K. D. Kimura, K. L. Targoff, and E. M. C. Hillman, Nat. Methods 16, 1054–1062 (2019).
22. E. Sánchez-Ortiga, G. Scrofani, G. Saavedra, and M. Martinez-Corral, IEEE Access 8, 14944–14952 (2020).
23. T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Lladó, D. E. K. Ferrier, T. Čižmár, F. J. Gunn-Moore, and K. Dholakia, Nat. Methods 11, 541–544 (2014).
24. J. Park, X. Feng, R. Liang, and L. Gao, Nat. Commun. 11, 5602 (2020).
25. B. Yang, M. Lange, A. Millett-Sikking, X. Zhao, J. Bragantini, S. VijayKumar, M. Kamb, R. Gómez-Sjöberg, A. C. Solak, W. Wang, H. Kobayashi, M. N. McCarroll, L. W. Whitehead, R. P. Fiolka, T. B. Kornberg, A. G. York, and L. A. Royer, Nat. Methods 19, 461–469 (2022).
26. J. F. Algorri, N. Bennis, V. Urruchi, P. Morawiak, J. M. Sánchez-Pena, and L. R. Jaroszewicz, Sci. Rep. 7, 17318 (2017).
27. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, Biomed. Opt. Express 10, 29–49 (2019).
28. N. Chronis, G. L. Liu, K.-H. Jeong, and L. P. Lee, Opt. Express 11, 2370–2378 (2003).
29. M. Guo, Y. Li, Y. Su, T. Lambert, D. D. Nogare, M. W. Moyle, L. H. Duncan, R. Ikegami, A. Santella, I. Rey-Suarez, D. Green, A. Beiriger, J. Chen, H. Vishwasrao, S. Ganesan, V. Prince, J. C. Waters, C. M. Annunziata, M. Hafner, W. A. Mohler, A. B. Chitnis, A. Upadhyaya, T. B. Usdin, Z. Bao, D. Colón-Ramos, P. La Riviere, H. Liu, Y. Wu, and H. Shroff, Nat. Biotechnol. 38, 1337–1346 (2020).
