Traditional lens-based three-dimensional imaging methods struggle with speed, spatial resolution, field of view, and depth of field (DOF). Here, we propose a volumetric imaging method that combines rainbow-sheet illumination, chromatic-aberration-induced DOF extension, and compressive hyperspectral imaging to optically section transparent objects over 200 depth slices in a single snapshot. A proof-of-concept mesoscopic system with a lateral resolution of 12.7 line pairs per millimeter and a depth resolution of roughly 140 μm in a volume of 10 × 10 × 10 mm³ is constructed. The practicality of the proposed method is demonstrated by dynamic volumetric imaging of a transparent jellyfish at a rate of 15 volumes per second.

Rapid volumetric imaging has long been a major challenge in modern optical microscopy. Camera-based methods offer the fastest data acquisition speed and the widest field of view (FOV),1–6 with light-sheet illumination standing out for its excellent out-of-focus rejection and minimal photo-damage.7–10 However, acquiring a volumetric dataset generally necessitates axial translation of the sample or objective, and slow mechanical scanning becomes a major impediment to the speed of these methods.11,12 Alternatively, axial modulation of the illumination intensity, in conjunction with depth-of-field (DOF) extension techniques, allows signals from multiple depth planes to be captured simultaneously, and the volumetric information is reconstructed from a sequence of images recorded under modulated illumination.13–16 With or without mechanical scanning, the need to acquire an image sequence limits the application of these methods to fast dynamic processes.

The instantaneous capture of a three-dimensional (3D) dataset using a two-dimensional (2D) array detector has recently gained popularity. Multi-focus imaging (MFI) entails projecting multiple depth planes onto different regions of a camera sensor.17,18 Light-field imaging (LFI), alternatively, distributes aperture-encoded photons to different camera pixels and reconstructs the 3D image using a digital refocusing technique, but its axial resolvability comes at the cost of sacrificing lateral resolution.5,6,19 Moreover, the out-of-focus blurring present either in the physically focused images in MFI or in the digitally refocused images in LFI reduces the axial resolution and the contrast.5,18,20 Single-frame volumetric imaging can also be implemented in a lensless form by randomizing the 3D point spread function (PSF); however, the image reconstruction quality degrades rapidly with increasing depth due to the loss of high-frequency spatial information, making 3D deconvolution unstable and intractable.21,22 Another bottleneck in volumetric imaging is the space–bandwidth product (SBP), which is ultimately limited by the number of pixels in a sensor.23 Spreading a 3D dataset or a 4D light field across a 2D sensor decreases either the FOV or the lateral resolution. Compressive sensing is a potential way to relax this limit and has been widely used for single-snapshot acquisition of 3D data cubes.22,24–31 Depth information in these applications is encoded in photon arrival time28,29 or in a depth-varying PSF,32 with the former suffering from low axial resolution and the latter requiring focal scanning. Compressive sensing has also recently been introduced into LFI for imaging single-neuron activity, where spatiotemporal sparsity is a necessary prerequisite.31

Here, we present a method capable of performing snapshot volumetric imaging without mechanical or optical scanning. The method employs a stack of thin light sheets to illuminate the entire volume of an object from the side. Named rainbow sheets, these light sheets increase continuously in wavelength with depth, so light scattering encodes the depth information in wavelength. The scattered photons are collected from an orthogonal direction using a triple-view compressive hyperspectral imager, which converts the wavelength-encoded 3D scene into a hyperspectral image using chromatic relay optics and performs compressive measurements to reconstruct the object’s volumetric image. The triple-view compressive hyperspectral imager captures a 3D object, under rainbow-sheet illumination, at a single view angle while recording three distinct projections. The proposed method inherits the high SBP of compressive sensing and the outstanding out-of-focus background rejection of light-sheet illumination, making it a promising tool for capturing fast dynamics in transparent biological samples.

Figure 1 depicts a proof-of-concept mesoscopic system employing the proposed method, which comprised a rainbow-sheet illumination component and a triple-view compressive hyperspectral imager. The illumination component was essentially a prism-based monochromator with a xenon lamp (Bobei Lighting, Inc., China; see Fig. S1 for its power spectrum), a 50-μm-wide slit, a collimating lens (AC254-200-A, Thorlabs), AL1, an equilateral prism made of N-BK7 glass (ELP0140, Hengyang Optics, Inc.), and a concave mirror with a focal length of 500 mm, CM. The concave mirror focused the dispersed light into a stack of light sheets near its focal plane, continuously distributed along the z-axis with increasing wavelength [see Fig. S2(a)].

FIG. 1.

Schematic diagram of the rainbow-sheet imaging method. (a) Optical setup of the mesoscopic system. (b) Specifications of the rainbow sheets. (c) Chromatic relay optics in the triple-view hyperspectral imager. The red lines represent the long-wavelength rays, whereas the blue lines represent the short-wavelength rays. AL, achromatic lens; BL, biconvex lens; BS, beam splitter; CM, concave mirror; FB, fiber bundle; G, grating; PM, photomask; and PS, periscope.

The slit width and magnification factor of the monochromator determine the thickness of a monochromatic light sheet, which varies little with wavelength and was measured to be 79 µm [full width at half maximum (FWHM)] at the waist (see Fig. S3). The thickness was also measured at various axial positions, and a fit to Gaussian-beam theory yielded a Rayleigh range of xR = 7.4 mm and a confocal parameter of 14.8 mm.33 The depth span of the sheets was limited to 10 mm by placing a 10-mm-wide slit near the focal plane of CM (not shown in Fig. 1), corresponding to a wavelength range of 450–740 nm in the illumination. With a sheet width of 40 mm, the illumination component provided an imaging volume of 14.8 × 40 × 10 mm³ [see Fig. 1(b)]. It should be noted that, due to the prism’s nonlinear dispersion, the generated light sheets were not equally spaced according to their wavelengths (see Fig. S2). The locations of the light sheets were measured using a calibration procedure (detailed in Sec. II D), and the results agreed well with the ray tracing analysis (Fig. S4).
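
The fit itself is straightforward. The following minimal sketch (ours, not the authors' code; the thickness values below are illustrative placeholders, not measured data) fits sheet thickness vs. axial position to the Gaussian-beam width law, written directly in FWHM since FWHM is proportional to the beam radius:

```python
# Minimal sketch of the Gaussian-beam fit used to extract the Rayleigh range.
# The data points below are illustrative placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

def sheet_fwhm(x, fwhm0, xr):
    """Gaussian-beam width vs. axial position, w(x) = w0*sqrt(1 + (x/xr)^2)."""
    return fwhm0 * np.sqrt(1.0 + (x / xr) ** 2)

x_mm = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])        # axial positions (mm)
fwhm_um = np.array([133.0, 95.0, 79.0, 96.0, 131.0])  # hypothetical thicknesses (um)
(fwhm0, xr), _ = curve_fit(sheet_fwhm, x_mm, fwhm_um, p0=(80.0, 7.0))
print(f"waist FWHM = {fwhm0:.1f} um, xR = {xr:.1f} mm, "
      f"confocal parameter = {2 * xr:.1f} mm")
```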

When a transparent object is placed in the rainbow sheets, it scatters photons of various wavelengths, depending on the depths of its interior structures. The scattered photons, which encoded depth information in wavelength, were collected from an orthogonal direction using a triple-view hyperspectral imager, which first transformed the spatial 3D scene into a hyperspectral image using chromatic relay optics. The light path in the relay optics was bifurcated, with photons delivered into two measurement channels via a beam splitter (10R/90T), BS1. In each channel, a biconvex lens (GLA12-025-100-A, Hengyang Optics, Inc.), BL2 or BL3, established a 4f configuration with the achromatic lens (GLH31-025A-200-VIS, Hengyang Optics, Inc.), AL2, as shown in Fig. 1(c). The achromatic and biconvex lenses had focal lengths of 200 and 100 mm, respectively, producing a magnification factor of 0.5. As the rainbow sheets were stacked along the z-axis, short-wavelength light sheets illuminated shallow depths of a sample, while longer-wavelength light sheets illuminated greater depths. For this reason, chromatic aberration was purposefully introduced into the relay optics by using a biconvex lens for image formation, resulting in a deeper focal depth with increasing illumination wavelength [see Fig. 1(c)]. Although both the dispersion of the monochromator and the chromatic aberration of the relay optics were nonlinear in wavelength (Fig. S2), the ray tracing analysis showed that the focal plane of the relay optics and the light-sheet plane were precisely aligned at all wavelengths (Fig. S4). As a result, structures within an extended DOF were sharply focused at the same intermediate image plane—that is, the chromatic relay optics effectively converted a spatial 3D scene, under illumination of the rainbow sheets, into two duplicated hyperspectral scenes [Fig. 1(c)]. The chromatic-aberration-induced DOF extension was validated by photographing a name card placed obliquely in the rainbow sheets. In contrast to white-light illumination, which blurred parts of the card, rainbow-sheet illumination produced a sharp image across the entire FOV (Fig. S5).
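
The physics behind this DOF extension can be illustrated with a thin-lens estimate: the focal length of a singlet scales as 1/(n(λ) − 1), so longer wavelengths focus farther from the lens. The sketch below is an illustration under assumed conditions—the biconvex lens is modeled as a thin N-BK7 singlet with a 100-mm design focal length at the d-line, although the actual lens prescription is not specified here—using the standard N-BK7 Sellmeier coefficients:

```python
# Thin-lens estimate of chromatic focal shift across the 450-740 nm band.
# Assumes an N-BK7 singlet; B and C are the standard Sellmeier coefficients.
import numpy as np

B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)  # um^2

def n_bk7(lam_um):
    """N-BK7 refractive index from the Sellmeier equation."""
    l2 = lam_um ** 2
    return np.sqrt(1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C)))

f_d, lam_d = 100.0, 0.5876            # assumed design focal length (mm) at the d-line
for lam in (0.45, 0.55, 0.65, 0.74):  # wavelengths spanning the illumination band
    f = f_d * (n_bk7(lam_d) - 1.0) / (n_bk7(lam) - 1.0)
    print(f"{1000 * lam:.0f} nm: f = {f:.2f} mm")
```

Under these assumptions the focal length shifts by a few millimeters across the band, which is the kind of wavelength-dependent focal displacement the relay optics exploit to keep each light sheet in focus.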

One of the duplicated hyperspectral scenes was recorded directly using a complementary metal–oxide–semiconductor (CMOS) camera (12-bit, MV-CH120-10UM, HIKROBOT), CMOS 1, and the resulting measurements were equivalent to acquiring a front view of the 3D object. The other duplicated hyperspectral scene was captured by a modified coded aperture snapshot spectral imaging (CASSI) system in which two gratings were used to disperse the scene along two orthogonal directions, allowing for the acquisition of two distinct projections. The modified version, like the original CASSI approach, used a binary photomask to spatially modulate the hyperspectral scene at the entrance.24,34 The photomask was made up of 250 × 250 binary elements lithographically fabricated on a chrome coating on a quartz substrate, with each square element measuring 20 µm on a side. The transmittances of the elements were set at random according to a Bernoulli distribution with a probability of p = 0.5, with zeros representing opaque elements and ones representing transparent elements. The modified CASSI system included two dispersion channels separated by a beam splitter, BS2. Each channel adopted a 4f configuration consisting of a pair of achromatic lenses (AL3 and AL4, or AL3 and AL5, f = 100 mm) and a ruled reflective diffraction grating (G1 or G2, 150 grooves/mm, Edmund Optics, Inc.). One channel dispersed the spatially modulated hyperspectral scene along the x-axis with G1 and projected it onto the second camera, CMOS 2. The compressed measurements made in this channel were equivalent to acquiring an azimuth view of the data cube (see Fig. S6). The other channel of the CASSI adopted a nearly identical design, except that a periscope, PS, was inserted into the path to rotate the coded scene by 90° before dispersing it, which was equivalent to shearing the original scene along the y-axis prior to projecting it onto the third identical camera, CMOS 3. The compressed measurements made in this channel amounted to acquiring an elevation view of the data cube (see Fig. S6). It should be emphasized that, while the triple-view compressive hyperspectral imager allowed for the acquisition of three different projections of an object, it captured the scene from only one orthogonal direction, which distinguishes it from optical projection tomography and becomes very useful when the system is scaled down for microscopic volumetric imaging. Another point to note is that, while the rainbow-sheet illumination allowed for a 2D FOV of 14.8 × 40 mm², the practical FOV was limited by the dimensions of the photomask’s aperture, which in this work was 10 × 10 mm².
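
For reference, generating such a coding pattern is a one-liner; a minimal sketch (the seed and array layout are arbitrary choices of ours, not a description of the fabricated mask):

```python
# Sketch: a 250 x 250 Bernoulli(p = 0.5) coding pattern like the photomask
# (1 = transparent element, 0 = opaque element).
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed so the pattern is reproducible
mask = rng.binomial(n=1, p=0.5, size=(250, 250)).astype(float)
print(f"mean transmittance ~ {mask.mean():.3f}")  # ~0.5, i.e., ~50% throughput
```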

A forward model, as detailed in the supplementary material, characterizes the compressed measurement process. Mathematically, the input 3D scene, r(x, y, z), is replicated into three copies, with one copy integrated along the z-axis to create a front view, E1. The remaining two copies are spatially encoded, sheared along the x- and y-axes, respectively, and depth integrated, providing the azimuth and elevation views, E2 and E3. Following proper discretization and vectorization of the input 3D scene, the three projection measurements can be formulated in a simple matrix–vector form as follows:
$$\mathbf{E}=\begin{bmatrix}\mathbf{E}_1\\\mathbf{E}_2\\\mathbf{E}_3\end{bmatrix}=\begin{bmatrix}\mathbf{I}\\\mathbf{I}\,\mathbf{S}_x\,\mathbf{C}\\\mathbf{I}\,\mathbf{S}_y\,\mathbf{C}\end{bmatrix}\mathbf{r}=\mathbf{O}\,\mathbf{r}.\tag{1}$$
Here, r represents the spatial distribution of the input 3D scene (in terms of scattering strength/diffuse reflectivity) and E concatenates the different types of compressive measurements. The operators C, S, and I denote the spatial encoding, lateral shearing, and depth integration, respectively, and O is a concatenated operator matrix. The subscripts indicate the shearing direction. A schematic diagram depicting the image formation and reconstruction can be found in Fig. S6. According to compressive sensing theory, the introduction of randomness into the sensing matrix, O, permits it to satisfy the restricted isometry property (RIP) with high probability, ensuring reliable reconstruction at low compression ratios.35
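
For concreteness, a schematic NumPy rendition of this forward model is sketched below (our simplification, not the authors' code: it assumes integer per-slice shears taken from calibration, an ideal mask, and ignores magnification and blur):

```python
# Schematic forward model of Eq. (1): a front view (depth integration only)
# plus two coded, sheared, depth-integrated views (azimuth and elevation).
import numpy as np

def forward(r, mask, shear_px):
    """r: (ny, nx, nz) volume; mask: (ny, nx) binary code;
    shear_px: per-depth-slice shear in pixels. Returns E1, E2, E3."""
    ny, nx, nz = r.shape
    smax = int(max(shear_px))
    e1 = r.sum(axis=2)                     # I r: front view
    coded = r * mask[:, :, None]           # C r: spatial encoding
    e2 = np.zeros((ny, nx + smax))         # I Sx C r: azimuth view
    e3 = np.zeros((ny + smax, nx))         # I Sy C r: elevation view
    for k in range(nz):
        s = int(shear_px[k])
        e2[:, s:s + nx] += coded[:, :, k]  # shear slice k along x, accumulate
        e3[s:s + ny, :] += coded[:, :, k]  # shear slice k along y, accumulate
    return e1, e2, e3

rng = np.random.default_rng(1)
vol = rng.random((250, 250, 230))                      # toy 3D scene
mask = rng.binomial(1, 0.5, (250, 250)).astype(float)  # Bernoulli code
shear = np.arange(230)                                 # linear shear as a stand-in
E1, E2, E3 = forward(vol, mask, shear)
print(E1.shape, E2.shape, E3.shape)                    # (250,250) (250,479) (479,250)
```

In the real system the per-slice shear is nonlinear in depth and is obtained from the calibration described below.
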
To faithfully recover the volumetric image, r̂, we employed ℓ1 minimization of the discrete cosine transform (DCT) coefficients at a 2D patch level—that is, every local 2D patch of the volume was assumed to be sparse in the DCT basis,
$$\hat{\mathbf{r}}=\underset{\mathbf{r}}{\arg\min}\ \sum_{k}\left\lVert \mathbf{D}\,\mathbf{R}_k\,\mathbf{r}\right\rVert_1\quad \text{s.t.}\ \mathbf{E}=\mathbf{O}\,\mathbf{r},\tag{2}$$
where Rk is an operator extracting the kth patch of the volume and D is the 2D DCT transformation. In this work, a patch size of 8 × 8 pixels with a 2-pixel overlap was used. By introducing a regularization parameter γ, this constrained optimization problem can be converted into an unconstrained one,
$$\hat{\mathbf{r}}=\underset{\mathbf{r}}{\arg\min}\ \frac{1}{2}\left\lVert \mathbf{E}-\mathbf{O}\,\mathbf{r}\right\rVert_2^2+\gamma\sum_{k}\left\lVert \mathbf{D}\,\mathbf{R}_k\,\mathbf{r}\right\rVert_1,\tag{3}$$
which was solved by a two-step iterative shrinkage/thresholding (TwIST) algorithm.36 The reconstructed data cube had 250 rows and columns, determined by the number of photomask elements, and 230 slices, determined by the hyperspectral imager’s number of spectral channels. Reconstructing a data cube of 250 × 250 × 230 voxels on a standard laptop with a 1.8-GHz dual-core processor (Intel Xeon) took about 1 h; this speed can be improved in the future using parallel algorithms. The reconstruction code is available online in the supplementary material.
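
To make the gradient/proximal structure of Eq. (3) concrete, here is a simplified ISTA-style sketch. It is plain ISTA, not the two-step TwIST update actually used; `forward` and `adjoint` are assumed to implement O and its transpose on flattened, concatenated measurements (e.g., built from the forward sketch above), and the step size is assumed small enough for convergence:

```python
# Simplified ISTA-style solver for Eq. (3) with the patch-wise DCT l1 prior.
# The paper uses TwIST (Ref. 36); plain ISTA shares the same gradient and
# proximal (soft-thresholding) steps but converges more slowly.
import numpy as np
from scipy.fft import dctn, idctn

def _starts(n, patch, step):
    """Patch start indices covering [0, n) completely."""
    s = list(range(0, n - patch + 1, step))
    if s[-1] != n - patch:
        s.append(n - patch)
    return s

def prox_patch_dct(r, thresh, patch=8, step=6):
    """Soft-threshold the 2D DCT coefficients of overlapping 8x8 patches
    (2-pixel overlap -> stride 6) of every xy-slice, then average overlaps."""
    out, weight = np.zeros_like(r), np.zeros_like(r)
    ny, nx, nz = r.shape
    for k in range(nz):
        for i in _starts(ny, patch, step):
            for j in _starts(nx, patch, step):
                c = dctn(r[i:i + patch, j:j + patch, k], norm="ortho")
                c = np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
                out[i:i + patch, j:j + patch, k] += idctn(c, norm="ortho")
                weight[i:i + patch, j:j + patch, k] += 1.0
    return out / weight

def ista(E, forward, adjoint, shape, gamma=1e-3, step=0.5, iters=50):
    """E: concatenated measurements; forward/adjoint: callables for O, O^T."""
    r = np.zeros(shape)
    for _ in range(iters):
        r = r - step * adjoint(forward(r) - E)  # gradient step on the data term
        r = prox_patch_dct(r, gamma * step)     # proximal step on the l1 prior
    return np.clip(r, 0.0, None)                # scattering strength is nonnegative
```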

Accurate 3D reconstruction necessitates precise calibration of the operator matrix O. Assuming that the binary photomask was ideal and that both the chromatic relay optics and the compressive hyperspectral imager were spatially shift-invariant, the calibration procedure was streamlined to determining the shearing operator, S. Furthermore, as the azimuth- and elevation-view channels were identical except for the dispersion direction, calibration was performed only in the azimuth-view channel. To facilitate the calibration, a vertical line mark (20 µm wide) was also fabricated on the photomask, aligned with the first column of the coding elements (Fig. S7). A vertically oriented stainless-steel string (50 µm in diameter) was then placed in the rainbow sheets and translated along the z-axis without lateral shift, which was guaranteed as long as its intermediate image on the photomask remained aligned with the vertical line mark. The string was translated by a step motor (PK545NBW, Vexta) with a step size of 100 µm, and its images were acquired in the azimuth-view channel, as shown in Fig. S7(b). After locating the centerlines of these images, an interpolation provided the shearing distance of each depth slice. The line mark also served as a slit for spectrum measurement, from which each depth slice’s illumination wavelength was determined, as shown in Figs. S7(c) and S7(d).
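
A sketch of this centerline-plus-interpolation step (ours; the centroid-based line finder and the synthetic stack standing in for the acquired calibration images are illustrative):

```python
# Sketch of the shear calibration: locate the string's centerline at each
# motor position, then interpolate the shear for all reconstructed slices.
import numpy as np

def centerline_x(img):
    """Intensity-weighted column centroid of a bright vertical line."""
    profile = img.sum(axis=0)
    return (profile * np.arange(profile.size)).sum() / profile.sum()

z_steps = np.arange(0.0, 10.0 + 1e-9, 0.1)  # 100-um motor steps over 10 mm
stack = []                                   # synthetic stand-in for the acquired stack
for k in range(z_steps.size):
    im = np.zeros((64, 256))
    im[:, 10 + k] = 1.0                      # line shifts one pixel per step here
    stack.append(im)

shear_meas = np.array([centerline_x(im) for im in stack])
z_slices = np.linspace(0.0, 10.0, 230)       # depths of the 230 reconstructed slices
shear_px = np.interp(z_slices, z_steps, shear_meas)  # per-slice shear (pixels)
print(shear_px[:3], shear_px[-3:])
```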

To show the proposed method’s depth-resolved imaging capacity, we first photographed the surface of a sample made up of three glass slides stacked in the shape of a staircase. As shown in Figs. 2(a) and 2(b), three pieces of paper with white fonts on a black background were pasted to its treads, and the heights of its first and second steps were measured to be 2.00 and 1.10 mm, respectively. The sample was placed in the rainbow sheets in such a way that the slide surfaces formed a 5° angle to the x-axis [Fig. 2(b)]. As a result, the three words were located at various depths and diffusely reflected light of different wavelengths [Fig. 2(c)]. The compressed images of the sample were acquired in the front-, azimuth-, and elevation-view channels at exposure times of 400, 800, and 800 ms, respectively. The 3D reconstruction of the sample is shown in Fig. 2(d) (Multimedia view), with the colors representing the illumination wavelengths.37 The reconstructed 3D image accurately retrieved both the spatial organization and the illumination colors of these words. Slices at selected depths are also shown in Fig. 2(e), in which the words at different z-locations were all sharply focused, taking advantage of the chromatic aberration compensation in the relay optics. Furthermore, no out-of-focus blurring from other words was detected in any of the slices, demonstrating that the developed system possessed superior optical sectioning capabilities. Based on the reconstruction, the depth differences between neighboring words were determined to be 2.35 and 1.30 mm, respectively, in good agreement with the step heights after accounting for the sample’s minor tilt angle.

FIG. 2.

Depth-resolved surface imaging of a staircase-shaped sample with the rainbow-sheet imaging system. (a) Photographic image of the sample taken with a single-lens reflex (SLR) camera (EOS Rebel T3i, Canon) under white light illumination. (b) Sample mounting. (c) Photographic image of the staircase-shaped sample taken with the SLR camera under rainbow-sheet illumination. (d) 3D reconstruction of the sample. Multimedia view available online. (e) Reconstructed slices of the sample at z = 1.00, 4.14, 6.49, and 7.79 mm, respectively. The scale bar is 2 mm, and the location of the light sheet with a wavelength of 450 nm is taken as z = 0 mm.

A number of factors contributed to the depth resolution of the rainbow-sheet imaging method. First, the thickness of a monochromatic light sheet, ∼79 µm (FWHM) at the waist, established a lower limit. Second, as light sheets of different wavelengths were continuously stacked together, the wavelength span per unit depth combined with the spectral resolvability of the hyperspectral imager constituted the primary factor. The depth resolution was experimentally measured by imaging two stretched stainless-steel strings (50 µm in diameter) separated by 250 µm (center-to-center) along the z-axis [Fig. 3(a)]. The strings were placed at z = 3.2 mm and were illuminated by light sheets of about 504 nm. The reconstructed volumetric image was projected onto the xy- and yz-planes, respectively, as shown in Figs. 3(b) and 3(c). Although indistinguishable in the xy-plane, the two strings, illuminated by light sheets of close but different wavelengths, were resolved fairly well in the yz-plane, owing to the high spectral resolvability of the hyperspectral imager. The intensity profile in the yz-projected image along the dashed line indicated that the depth resolution was about 140 µm, as given by the FWHM of the single-string image [see Fig. 3(d)] defined with respect to the background level. Furthermore, the 3D reconstruction predicted a center-to-center distance of 240 µm between the two strings, in good agreement with the microscopic observation: the stretched strings were also placed on a lab-made microscope such that both were on the focal plane, and the microscopic image determined a center-to-center separation of 250 µm, as shown in the inset of Fig. 3(a).

FIG. 3.

Characterization of depth and lateral resolutions of the rainbow-sheet mesoscopic imaging system. (a)–(d) Characterization of the depth resolution by imaging two stretched stainless-steel strings separated by 250 µm along the z-axis. (a) Photographic and microscopic (inset) images of the strings. (b) and (c) Reconstructed image of the strings after projection onto the xy- and yz-planes, respectively. The scale bars are 2 mm. (d) Intensity profile along the dashed line in (c). (e)–(g) Characterization of lateral resolution by imaging a 1951 USAF resolution test chart placed in the rainbow sheets at a 45° tilt angle. (e) Reconstructed image of the test chart after projection onto the xy-plane. (f) and (g) Intensity profiles along the horizontal and vertical dashed lines in (e), respectively.

We also characterized the system’s lateral resolution by imaging a 1951 USAF resolution test chart lithographically fabricated on a chrome coating on a glass substrate. The test chart was positioned at a 45° angle to the x-axis, and the light reflected from its patterns was collected. The reconstructed image was projected onto the xy-plane and is shown in Fig. 3(e), in which the horizontal scale has been enlarged by a factor of 2 to account for the chart orientation. As shown, the horizontal bars in the fifth element (G3, E5) and the vertical bars in the third element (G3, E3) of the third group could still be resolved. The intensity profiles along the horizontal and vertical dashed lines, shown in Figs. 3(f) and 3(g), respectively, further support this conclusion. This does not imply that the resolution along the x-axis was worse than that along the y-axis: because the chart was placed obliquely, the gaps between the vertical bars appeared smaller in the xy-plane than those of their horizontal counterparts in the same group and element. In fact, the lateral resolution was isotropic because the azimuth- and elevation-view channels used identical gratings that sheared the 3D scene equally along the x- and y-axes. The fifth element of the third group corresponds to a lateral resolution of 12.7 line pairs per millimeter (lp/mm), implying that the lateral resolution of the present system was limited by the photomask’s element size (20 µm), as the bar width and gap of this element were 39.4 µm and the relay optics had a magnification factor of 0.5. The lateral resolution could be improved by reducing the photomask’s element size, but at the expense of an increased reconstruction burden, and it is ultimately limited by diffraction.
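
As a consistency check, the standard 1951 USAF relation for group G, element E gives

$$R = 2^{\,G + (E-1)/6}\ \text{lp/mm} \;\Rightarrow\; R_{G3,E5} = 2^{\,3 + 4/6} \approx 12.7\ \text{lp/mm}, \qquad \text{bar width} = \frac{1}{2R_{G3,E5}} \approx 39.4\ \mu\text{m}.$$

At the relay’s magnification of 0.5, the 39.4-µm bars image to about 19.7 µm, essentially the size of one 20-µm mask element, consistent with the mask-limited resolution stated above.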

Since the inverse problem associated with Eq. (1) is substantially underdetermined, any reconstruction artifacts will also degrade the lateral and axial resolutions; these artifacts can, however, be mitigated by exploring a sparser representation of the 3D scene and employing an optimal reconstruction algorithm. Moreover, strong uniform scattering in a sample causes illumination photons to be scattered to unintended planes where they are re-scattered, resulting not only in a defocused background but also in ambiguity in the reconstruction algorithm’s prediction of the detected photons’ depths. As a result, outstanding lateral and axial resolutions are only achievable in transparent or translucent samples with mild uniform scattering.

The proposed method, like classic light-sheet imaging, is highly useful for volumetric imaging of transparent specimens, even though its image formation is based on elastic scattering rather than fluorescence emission. To demonstrate this capability, we imaged a live jellyfish of Clytia hemisphaerica, whose tentacular system is an excellent experimental model for studying the developmental mechanisms that regulate cell lineage progression.38 This species has a translucent, saucer-shaped bell and four radial canals that connect the ring canal with the gastro-vascular cavity. One gonad is attached to each radial canal, located close to the ring canal. Tens of tentacle bulbs are located at the bell margin, forming the bases of the tentacles. Jellyfish of this species are almost transparent except for the gonads, the gastro-vascular cavity, and the tentacle bulbs, as shown in Fig. 4(a).

FIG. 4.

Volumetric imaging of a jellyfish of Clytia hemisphaerica using the rainbow-sheet mesoscopic imaging system. (a) Photographic image of the jellyfish. (b) A typical volumetric reconstruction of the jellyfish. Multimedia view available online. The rendering colors represent the illumination wavelengths and encode depth information. (c) Dynamic volumetric imaging of the jellyfish while it was swimming. MIP images from various perspectives are shown at four separate time points. The scale bars are 2 mm, and the arrows in (c) show the bending and stretching of one gonad.

The mature jellyfish, which had a bell diameter of around 1 cm, was purchased from a local aquarium store. The jellyfish was placed in an ultraviolet fused-quartz cuvette filled with seawater prepared with a dedicated sea salt (Beijing Yueguang Jellyfish, Inc.). The cuvette had a path length of 15 mm, allowing the jellyfish to swim freely inside it. The rainbow-sheet mesoscopic imaging system was first used to take a snapshot of the jellyfish, and the reconstructed volumetric image is shown in Fig. 4(b) (Multimedia view). The exposure time used for the compressed measurements was 50 ms, short enough to capture a snapshot of the jellyfish while it was swimming. As shown, the volumetric imaging technique clearly revealed the slender gonads, the dot-like tentacle bulbs, and the star-shaped gastro-vascular cavity. The wide illumination-wavelength range, indicated by the rendering colors, revealed that these organs were positioned at various z-locations spanning a scale of around 10 mm.

The rainbow-sheet imaging method, endowed with a snapshot feature, enables the capture of fast dynamics in transparent biological objects. Next, we captured a sequence of images of the jellyfish while it was swimming. The compressed images were acquired with an exposure time of 50 ms and a time interval of 67 ms, yielding an imaging rate of 15 volumes per second. The volumetric images were then reconstructed, and the maximum intensity projections (MIPs) at four representative time points are depicted in Fig. 4(c). An animation including the MIPs at all time points, showing the entire swimming process of the jellyfish, is provided in the supplementary material. The jellyfish’s gonads and radial canals bent and stretched repeatedly during swimming. In particular, the gonad marked by the arrows in the side-view MIPs was bent at t = 1.73 and 2.67 s, and the ring canal (invisible) was contracted, as evidenced by the compact organization of the tentacle bulbs. When the gonad was stretched at t = 2.20 and 3.13 s, both the gaps between the bulbs and the diameter of the bulb ring increased dramatically. Moreover, the radial canals dragged the ring canal into a pillow-like shape when they were stretched. It should be noted that the bending and stretching of the radial canals were inadequately captured by the front and top views, underscoring the necessity of volumetric imaging.

The volumetric imaging speed was limited by the frame rate of the CMOS cameras (15 frames/s) used in this work and can be increased in the future by replacing them with high-speed cameras. For example, cutting-edge scientific CMOS cameras would enable imaging at speeds of more than 100 volumes per second. Furthermore, while this work demonstrated snapshot volumetric imaging with a mesoscopic system, scaling the system down for microscopic imaging of transparent biological samples is straightforward. To illuminate a microscopic object, it simply requires replacing the monochromator’s long-focal-length concave mirror with a low-numerical-aperture objective, producing thinner light sheets with a shorter Rayleigh range and a shallower depth range. A 4× objective with a numerical aperture of 0.1, for example, will produce light sheets that are ∼5.5 µm thick at the beam waist (diffraction-limited FWHM) with a confocal parameter of 260 µm.39 To convert a 3D microscopic scene into a hyperspectral one, the relay optics must be replaced with a microscope in which chromatic aberration may be introduced into the objective or the tube lens.
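
These figures are consistent with Gaussian-beam relations; assuming a mid-band wavelength of λ ≈ 550 nm (our assumption),

$$w_0 = \frac{\text{FWHM}}{\sqrt{2\ln 2}} \approx \frac{5.5\ \mu\text{m}}{1.18} \approx 4.7\ \mu\text{m}, \qquad b = 2x_R = \frac{2\pi w_0^2}{\lambda} \approx 250\ \mu\text{m},$$

close to the 260-µm confocal parameter quoted above (the exact value depends on the wavelength assumed).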

The reconstruction fidelity of the proposed method depends on the sample’s sparsity, though not necessarily in the spatial domain, as with 3D deconvolution methods.20,40 While the DCT adopted in this work is frequently used for image compression, it is not always the optimal choice. Learning a dictionary through data training usually results in better sparse representations.41,42 Deep learning algorithms, in particular, have been found to improve reconstruction accuracy43,44 and are worth additional exploration in the future. The main causes of light loss in the present system were the photomask’s transmittance and the diffraction efficiency of the gratings. Because half of the coding elements were statistically opaque, a 50% light loss was expected from the photomask. Furthermore, the diffraction efficiency of the gratings utilized in this study was 50%–65% within the wavelength range of 450–740 nm. After accounting for these two effects, the system’s total light loss was roughly 75%.
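
The total follows directly from multiplying the two throughputs:

$$T \approx T_{\text{mask}} \times \eta_{\text{grating}} = 0.5 \times (0.50\text{--}0.65) \approx 25\%\text{--}33\%,$$

i.e., a total light loss between roughly 68% and 75%, matching the ~75% estimate in the worst case.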

In summary, we report a method for single-snapshot volumetric imaging that does not need mechanical or optical scanning. This method resembles classic light-sheet imaging in that it provides out-of-focus background rejection while maintaining camera-based acquisition speed, but instead of fluorescence emission, it collects scattered or diffusely reflected photons. It also differs from traditional light-sheet imaging in that it illuminates the entire volume of an object with a stack of light sheets of different wavelengths rather than exciting a single thin layer at a time. The proposed approach provides parallelized optical sectioning at over 200 depths by using a strategy of depth encoding in wavelength, lateral encoding with a random pattern, and a computational decoding procedure. A proof-of-concept mesoscopic system was developed for dynamic imaging of a live jellyfish, and its performance was validated and characterized. The proposed method is expected to find wide use in studying the fast dynamics of transparent biological samples.

See the supplementary material for additional information. An animation including the MIPs at all time points, showing the entire swimming process of the jellyfish, is provided, and the MATLAB reconstruction code is also attached.

The authors acknowledge the funding support from the National Key Research and Development Program of China (Grant No. 2019YFC1605500) and the National Natural Science Foundation of China (Grant Nos. U22A20353, 81871393, and 62075156).

P.Z. and X.Z. have a patent pending (Application No. 2023111154981).

The protocol for jellyfish imaging strictly adhered to the guidelines of, and was approved by, the Council for the Purpose of Control and Supervision of Experiments on Animals, Ministry of Public Health, Government of China.

P.Z. conceived of and designed the study. P.Z. and X.Z. performed the experiments. P.Z., X.Z., and H.Y. processed the data. P.Z. and F.G. supervised the study. All authors contributed to the writing of the manuscript.

Xuan Zhao: Data curation (equal); Software (equal); Validation (equal). Hang Yuan: Visualization (equal). Pengfei Zhang: Conceptualization (lead); Data curation (lead); Supervision (lead); Visualization (lead); Writing – original draft (lead). Feng Gao: Project administration (equal); Supervision (equal).

The data that support the findings of this study are available within this article and its supplementary material.

1. X. Chen, Z. Zeng, H. Wang, and P. Xi, “Three-dimensional multimodal sub-diffraction imaging with spinning-disk confocal microscopy using blinking/fluctuating probes,” Nano Res. 8, 2251–2260 (2015).
2. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305, 1007–1009 (2004).
3. R. M. Power and J. Huisken, “A guide to light-sheet fluorescence microscopy for multiscale imaging,” Nat. Methods 14, 360–373 (2017).
4. O. E. Olarte, J. Andilla, E. J. Gualda, and P. Loza-Alvarez, “Light-sheet microscopy: A tutorial,” Adv. Opt. Photonics 10, 111–179 (2018).
5. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graphics 25, 924–934 (2006).
6. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graphics 26, 69 (2007).
7. P. J. Keller, A. D. Schmidt, J. Wittbrodt, and E. H. K. Stelzer, “Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy,” Science 322, 1065–1069 (2008).
8. F. O. Fahrbach and A. Rohrbach, “Propagation stability of self-reconstructing Bessel beams enables contrast-enhanced imaging in thick media,” Nat. Commun. 3, 632 (2012).
9. T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Lladó, D. E. K. Ferrier, T. Čižmár, F. J. Gunn-Moore, and K. Dholakia, “Light-sheet microscopy using an Airy beam,” Nat. Methods 11, 541–544 (2014).
10. P. Zhang, M. E. Phipps, P. M. Goodwin, and J. H. Werner, “Confocal line scanning of a Bessel beam for fast 3D imaging,” Opt. Lett. 39, 3682–3685 (2014).
11. E. J. Botcherby, R. Juskaitis, M. J. Booth, and T. Wilson, “Aberration-free optical refocusing in high numerical aperture microscopy,” Opt. Lett. 32, 2007–2009 (2007).
12. F. Anselmi, C. Ventalon, A. Bègue, D. Ogden, and V. Emiliani, “Three-dimensional imaging and photostimulation by remote-focusing and holographic light patterning,” Proc. Natl. Acad. Sci. U. S. A. 108, 19504–19509 (2011).
13. M. Woringer, X. Darzacq, C. Zimmer, and M. Mir, “Faster and less phototoxic 3D fluorescence microscopy using a versatile compressed sensing scheme,” Opt. Express 25, 13668–13683 (2017).
14. G. Calisesi, M. Castriotta, A. Candeo, A. Pistocchi, C. D’Andrea, G. Valentini, A. Farina, and A. Bassi, “Spatially modulated illumination allows for light sheet fluorescence microscopy with an incoherent source and compressive sensing,” Biomed. Opt. Express 10, 5776–5788 (2019).
15. A. Zunino, F. Garzella, A. Trianni, P. Saggau, P. Bianchini, A. Diaspro, and M. Duocastella, “Multiplane encoded light-sheet microscopy for enhanced 3D imaging,” ACS Photonics 8, 3385–3393 (2021).
16. G. Calisesi, A. Ghezzi, D. Ancora, C. D’Andrea, G. Valentini, A. Farina, and A. Bassi, “Compressed sensing in fluorescence microscopy,” Prog. Biophys. Mol. Biol. 168, 66–80 (2022).
17. S. Abrahamsson, J. Chen, B. Hajj, S. Stallinga, A. Y. Katsov, J. Wisniewski, G. Mizuguchi, P. Soule, F. Mueller, C. D. Darzacq, X. Darzacq, C. Wu, C. I. Bargmann, D. A. Agard, M. Dahan, and M. G. L. Gustafsson, “Fast multicolor 3D imaging using aberration-corrected multifocus microscopy,” Nat. Methods 10, 60–63 (2013).
18. S. Xiao, H. Gritton, H.-A. Tseng, D. Zemel, X. Han, and J. Mertz, “High-contrast multifocus microscopy with a single camera and z-splitter prism,” Optica 7, 1477–1486 (2020).
19. A. Orth, M. Ploschner, E. R. Wilson, I. S. Maksymov, and B. C. Gibson, “Optical fiber bundles: Ultra-slim light field imaging probes,” Sci. Adv. 5, eaav1555 (2019).
20. R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11, 727–730 (2014).
21. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: Lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).
22. J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope,” Sci. Adv. 3, e1701548 (2017).
23. J. Park, D. Brady, G. Zheng, L. Tian, and L. Gao, “Review of bio-optical imaging systems with a high space-bandwidth product,” Adv. Photonics 3, 044001 (2021).
24. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express 17, 6368–6388 (2009).
25. J. V. Thompson, J. N. Bixler, B. H. Hokr, G. D. Noojin, M. O. Scully, and V. V. Yakovlev, “Single-shot chemical detection and identification with compressed hyperspectral Raman imaging,” Opt. Lett. 42, 2169–2172 (2017).
26. H. Yuan, P. Zhang, and F. Gao, “Compressive hyperspectral Raman imaging via randomly interleaved scattering projection,” Optica 8, 1462–1470 (2021).
27. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
28. J. Liang, L. Gao, P. Hai, C. Li, and L. V. Wang, “Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography,” Sci. Rep. 5, 15504 (2015).
29. P. Ding, Y. Yao, D. Qi, C. Yang, F. Cao, Y. He, J. Yao, C. Jin, Z. Huang, L. Deng, L. Deng, T. Jia, J. Liang, Z. Sun, and S. Zhang, “Single-shot spectral-volumetric compressed ultrafast photography,” Adv. Photonics 3, 045001 (2021).
30. Y. Xue, I. G. Davison, D. A. Boas, and L. Tian, “Single-shot 3D wide-field fluorescence imaging with a computational miniature mesoscope,” Sci. Adv. 6, eabb7508 (2020).
31. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3, 517–524 (2016).
32. P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).
33. A. E. Siegman, An Introduction to Lasers and Masers (McGraw Hill, New York, 1971).
34. L. Wang, Z. Xiong, D. Gao, G. Shi, and F. Wu, “Dual-camera design for coded aperture snapshot spectral imaging,” Appl. Opt. 54, 848–858 (2015).
35. M. A. Davenport, M. F. Duarte, Y. C. Eldar, and G. Kutyniok, “Introduction to compressed sensing,” in Compressed Sensing: Theory and Applications, edited by Y. C. Eldar and G. Kutyniok (Cambridge University Press, Cambridge, 2012), pp. 1–64.
36. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).
37. RGB to Visible Spectrum, https://punkish.org/RGB-to-Visible-Spectrum/.
38. T. Condamine, M. Jager, L. Leclère, C. Blugeon, S. Lemoine, R. R. Copley, and M. Manuel, “Molecular characterisation of a cellular conveyor belt in Clytia medusae,” Dev. Biol. 456, 212–225 (2019).
39. A. K. Glaser, N. P. Reder, Y. Chen, E. F. McCarty, C. Yin, L. Wei, Y. Wang, L. D. True, and J. T. C. Liu, “Light-sheet microscopy for slide-free non-destructive pathology of large clinical specimens,” Nat. Biomed. Eng. 1, 0084 (2017).
40. N. Wagner, N. Norlin, J. Gierten, G. de Medeiros, B. Balázs, J. Wittbrodt, L. Hufnagel, and R. Prevedel, “Instantaneous isotropic volumetric imaging of fast biological processes,” Nat. Methods 16, 497–500 (2019).
41. E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall, “Compressed sensing with coherent and redundant dictionaries,” Appl. Comput. Harmonic Anal. 31, 59–73 (2011).
42. X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 1–11 (2014).
43. A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in Proceedings of the 34th International Conference on Machine Learning, edited by D. Precup and Y. W. Teh (PMLR, Proceedings of Machine Learning Research, 2017), pp. 537–546.
44. K. Zhang, J. Hu, and W. Yang, “Deep compressed imaging via optimized pattern scanning,” Photonics Res. 9, B57–B70 (2021).
