Recent advances in biochemistry and optics have enabled observation of the faintest signals, down to those from single molecules. However, although biological samples can express fluorescence over a wide range, from a single fluorescent molecule to thousands within an observation volume, the detection range is fundamentally limited by the dynamic range (DR) of current detectors. In other words, for many biological systems where faint and strong signal sources coexist, traditional imaging methods compromise by choosing a limited target signal range to be quantitatively measured, while other signal levels are either lost beneath the background noise or saturated. The DR can be extended by taking multiple images with varying exposures, which, however, severely restricts data throughput. To overcome this limitation, we introduce structured illumination high dynamic range (SI-HDR) imaging, which enables real-time HDR imaging with a single measurement. We demonstrate the wide and easy applicability of the method by realizing various applications, such as high-throughput gigapixel imaging of mouse brain slices, quantitative analysis of neuronal mitochondria structures, and fast 3D volumetric HDR imaging.

The information content of an image is limited by the image size (pixel number) and pixel bit depth. Modern analog-to-digital converters (ADCs) in current scientific-grade cameras provide up to 16-bit depth resolution, but the true DR is typically around ∼20 000:1 at best, limited by the read noise and full well capacity. To overcome this limitation, both hardware- and software-based methods have been developed to capture a wider range of luminance in a single image than the intrinsic DR of a camera allows, i.e., to achieve HDR imaging. One of the most widely used HDR imaging methods in photography is to reconstruct a single HDR image by fusing multiple images acquired with different amounts of exposure.1–5 This approach requires no additional hardware and simply uses multiple image acquisitions with a single camera. However, the multiple-acquisition requirement fundamentally increases the total acquisition time needed to reconstruct a single HDR image. Furthermore, an optimization step is required to calibrate the range of exposures needed to capture the entire luminance range, which can vary widely between different target samples. To overcome such limitations in temporal resolution, approaches that apply different amounts of exposure to adjacent pixels in a camera have also been developed by placing a mask with different transmittance filters over the sensor.6,7 Although this approach enables single-shot HDR imaging, custom masks must be designed for each application environment, and image quality can easily be degraded by noise during the reconstruction process. Grouping multiple adjacent pixels also inherently reduces the resolution of the system.

HDR imaging holds valuable information not only in photography but also in microscopy. Especially in bioimaging, subcellular organelles of interest are often labeled with organelle-targeting-peptide-tagged fluorescent proteins.8–12 Although the fluorescence intensity holds quantitative information about the molecular content, in many imaging experiments, numerous regions of interest (ROIs) of an image are unavoidably saturated to visualize weakly fluorescing structures, which precludes further quantitative analysis. To overcome this limitation, controlling the illumination has been shown to be a viable option for extending the DR in fluorescence imaging. For example, controlled light exposure microscopy (CLEM) realized laser scanning microscopy with real-time negative feedback to modulate the illumination laser power using an electro-optical modulator.13–17 Contrary to conventional laser scanning microscopes, which apply the same focused laser beam dosage over the entire field of view (FOV), CLEM actively controls the exposure time of the illumination light to avoid saturation in strongly fluorescent structures. Other real-time HDR imaging methods based on adjusting the detection levels have also been developed by introducing multiple beam splitters in the detection arm of laser scanning microscopes to acquire different ranges of signals simultaneously.18–20 Although this method enables real-time HDR imaging by adding a few simple detection optics, the DR is extended by blocking signals with absorption filters, which unavoidably wastes precious fluorescence photons.

In another approach, directly modulating the illumination or detection light is also attractive, as spatial light modulators with independently controllable pixels can be used. For example, HDR images can be obtained by applying binary masks in the detection path against saturated areas.21–23 A real-time HDR method developed by Nayar et al. used a liquid crystal display as an attenuator in front of the imaging lens to adaptively respond to camera refresh rates up to 30 Hz.24,25 However, the relatively slow refresh rates of liquid crystal spatial light modulators limit their use where fast imaging is required. As liquid crystal spatial light modulators work for only a single polarization axis, they are also undesirable in the detection path of fluorescence imaging, where half of the precious fluorescence signal would be lost. Furthermore, elaborate control of the light intensity with single-pixel accuracy, which is required for high-resolution microscopy, could not be implemented in this method due to the non-optimal location of the attenuator. Recently, digital micro-mirror devices (DMDs) have shown great potential for adaptive light modulation26 in HDR imaging with their fast refresh rates of up to ∼20 kHz.27 Recent works have successfully demonstrated wide-luminance imaging using DMDs.28–31 However, these works focused only on the suppression of saturation and still required multiple image acquisitions to reconstruct a single HDR image, which limits applications in dynamic imaging. Furthermore, although saturation suppression was successfully realized, quantitative HDR reconstruction over the entire extended DR was not demonstrated.

Here, we demonstrate structured illumination HDR (SI-HDR), a fast, customizable HDR imaging method that uses a DMD to modulate the illumination intensity on a pixel-by-pixel basis for spatially selective excitation that adapts in real-time to the fluorescence distribution of the object. By using the DMD for illumination modulation, the entire fluorescence photon budget is utilized in detection, as in conventional widefield fluorescence imaging. The DR of the reconstructed HDR image is increased by a factor given by the DMD's analog illumination intensity resolution (8 bits). In contrast to conventional HDR imaging, where multiple images are acquired with varying exposures, the analog illumination pattern is dynamically modulated on a pixel-by-pixel basis for each image acquisition, increasing the signal-to-noise ratio (SNR) of all image pixels and enabling quantitative analysis of both weak and intense signals simultaneously. The large dynamic range and high SNR, combined with fast imaging, enable high-throughput acquisition of high-quality large-area datasets, e.g., acquiring large-FOV (8.285 × 5.623 mm2) subcellular resolution macroscopic HDR images in exactly the same total acquisition time as conventional widefield imaging. We also demonstrate the importance of HDR imaging in neuronal networks by observing fine mitochondrial structures in neuronal cells, where the simultaneous observation of dense/large mitochondria populations in the soma and sparse/small mitochondria in the axon enables automatic segmentation, which is not possible with conventional imaging. Furthermore, the method is general and can be applied to all types of widefield illumination imaging. For example, we demonstrate depth-sectioned HDR volume imaging using just two image frames for each acquired plane.32–34

In fluorescence imaging, the emitted fluorescence intensity I(r) is given by

$$I(r) = \left[ O(r)\, P \right] \otimes \mathrm{PSF}(r), \tag{1}$$

where O(r), P, and PSF are the object's fluorescence distribution, the illumination intensity, and the point spread function, respectively. For the usual case where P is spatially homogeneous, the emission is proportional to the object's fluorescence distribution. Unfortunately, in many biological microscopic imaging scenarios, O(r) can easily extend over a broad range. Thus, we can express O(r) as a sum over areas with relatively low, intermediate, and high fluorescence densities, r1, r2, and r3, respectively,

$$O(r) = O(r_1) + O(r_2) + O(r_3). \tag{2}$$

Incident photons on the detector generate photoelectrons in proportion to the quantum efficiency of the detector. Here, the maximum number of photoelectrons containable in a single pixel is given by the full well capacity. The number of photoelectrons is then digitized by an ADC, which makes up the two-dimensional image that we acquire. Modern cameras have high bit-depth ADCs, but the limited full well capacity and the baseline noise floor set the limit on the dynamic range of the acquired images. When the detected light intensity is too high, the measured signal saturates. On the other hand, when the detected light intensity is too low, the signal is overwhelmed by the read and dark noise. With cooled scientific-grade cameras, the dark noise becomes negligible in most experiments, and the read noise is the dominant source setting the noise floor. Substituting Eq. (2) into Eq. (1), I(r) can be expressed as

$$I(r) = \left[ \left( O(r_1) + O(r_2) + O(r_3) \right) P \right] \otimes \mathrm{PSF}(r) = I(r_1) + I(r_2) + I(r_3). \tag{3}$$

We can see that I(r1) represents ROIs dominated by read noise, I(r2) represents regions within the dynamic range, and I(r3) represents ROIs that have been saturated. In areas r1 and r3, the linearity between the emitted fluorescence and the detected signal no longer holds, precluding quantitative analysis. However, if P can be arbitrarily modulated as a function of space, the measured image is expressed as

$$I_{\mathrm{SI}}(r) = \left[ O(r)\, P(r) \right] \otimes \mathrm{PSF}(r). \tag{4}$$

In contrast with I(r) acquired under homogeneous illumination P, morphological features at r1 and r3 can now be retrieved simultaneously, with higher or lower illumination intensities, respectively, in ISI(r), which now falls safely within the limited DR of the camera. To quantify the object's true fluorescence distribution O(r)HDR, the effect of the spatially varying illumination pattern can be normalized out to obtain

$$O(r)_{\mathrm{HDR}} = \frac{I_{\mathrm{SI}}(r)}{P(r)}. \tag{5}$$
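Concretely, the reconstruction in Eq. (5) amounts to a pixel-wise division of the raw structured-illumination image by the known pattern. A minimal sketch in Python; the function name and the handling of zero-illumination pixels are our own choices, not from the paper:

```python
import numpy as np

def reconstruct_hdr(i_si, pattern, eps=1e-6):
    """Normalize out the known illumination pattern, per Eq. (5):
    O_HDR(r) = I_SI(r) / P(r). Pixels the DMD left dark are zeroed."""
    i_si = np.asarray(i_si, dtype=float)
    p = np.asarray(pattern, dtype=float)
    out = np.zeros_like(p)
    valid = p > eps                      # avoid dividing by zero
    out[valid] = i_si[valid] / p[valid]
    return out

# A bright and a dim structure, both measured mid-range on the camera:
i_si = np.array([200.0, 180.0])    # raw structured-illumination counts
p = np.array([0.25, 1.0])          # bright pixel received 25% illumination
o_hdr = reconstruct_hdr(i_si, p)   # -> [800., 180.]
```

Because both pixels were measured well within the camera's DR, the division recovers their true 4.4:1 brightness ratio that a uniform exposure would have lost to saturation or noise.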

Figure 1 illustrates the principle of our SI-HDR method. Here, neuronal structures with varying fluorescence densities were simulated to test the validity of SI-HDR. First, an image of neuronal structures was taken within the standard dynamic range of the camera. We next altered the intensity distribution in a nonlinear fashion by generating a simulated ground truth image defined as GT(r) = exp(0.00008 · I(r)), where I(r) is the experimentally measured image and GT(r) is the generated simulated image with an intentionally exaggerated nonlinear distribution of fluorescence. GT(r) was used as the relative emission yield for uniform excitation, and a camera quantum efficiency of 82%, a read noise median (rms) of 1.10 electrons, a full well capacity of 30 000 electrons, a gain of 5.88 ADU (analog-to-digital units), and a baseline of 100 ADU were used for the simulations. Dark current was assumed to be negligible for cooled cameras.
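The camera model described above can be sketched as follows. This is an illustrative toy model: the interpretation of the quoted gain as electrons per ADU and the fixed random seed are our assumptions, not stated in the text:

```python
import numpy as np

RNG = np.random.default_rng(0)  # fixed seed for reproducibility (our choice)

def simulate_camera(photons, qe=0.82, read_noise=1.10, full_well=30_000,
                    gain=5.88, baseline=100, bit_depth=12):
    """Toy sCMOS model using the parameter values quoted in the text.
    Assumption: 'gain' is interpreted as electrons per ADU."""
    electrons = RNG.poisson(qe * np.asarray(photons, float)).astype(float)
    electrons += RNG.normal(0.0, read_noise, electrons.shape)  # read noise
    electrons = np.clip(electrons, 0, full_well)               # full well
    adu = baseline + electrons / gain                          # digitize
    return np.clip(np.round(adu), 0, 2 ** bit_depth - 1)
```

Very bright inputs clip at the 12-bit ceiling (4095 ADU), while dark frames sit near the 100 ADU baseline, reproducing the saturation and noise-floor limits discussed above.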

FIG. 1.

Numerical simulation results of SI-HDR. (a) Short exposure and (b) long exposure images of neurons in a mouse brain slice. Here, the long exposure image corresponds to an exposure time 5 times longer than the short exposure. Saturated pixels are shown in red. (c) and (d) Log-scale images of Figs. 1(a) and 1(b), respectively. (e) SI-HDR illumination pattern optimizing the fluorescence emission distribution. (f) SI-HDR acquired raw data. Reconstructed HDR image in (g) linear and (h) log scale, using the data in Figs. 1(e) and 1(f). We can see that the fully recovered HDR range goes well beyond the 8-bit depth resolution of common monitors or printers. Scalebar: 20 µm.


In conventional HDR imaging, different ranges of luminance are captured by sequentially taking short and long exposure images under homogeneous illumination. When the object of interest emits a broad range of light intensities, such as brain tissue containing both somas and tiny spines, the short exposure image fails to detect faint structures, while the long exposure image is saturated in strongly emitting regions, as shown in Figs. 1(a)–1(d). To optimize the HDR reconstruction, the exposure for each acquisition must be tuned, and in practice, 3 to 4 different images are usually taken for a solid reconstruction. In contrast, the object of interest in SI-HDR is illuminated by a customized light intensity distribution that has the shape of the object but is inverse to the object's original intensity distribution. By taking an image with this structured illumination, a single image contains detailed morphological information beyond the limited dynamic range of the detector. Examples of such a structured illumination pattern and the captured image are shown in Figs. 1(e) and 1(f), respectively. Since the SI pattern illuminating the sample is known, the true fluorescence distribution of the object can be easily recovered using Eq. (5), as shown in Figs. 1(g) and 1(h). The recovered image clearly shows morphological structures that were previously concealed either by read noise or by saturation in Figs. 1(a)–1(d) (see Fig. S1 for quantitative analysis using fluorescent beads of largely differing intensities).
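An inverse-intensity illumination pattern of the kind described above can be sketched as follows. This is a hypothetical implementation: the normalization, the clipping floor, and the 8-bit quantization details are our assumptions, not the authors' exact procedure:

```python
import numpy as np

def si_pattern(prior, levels=256, floor=1):
    """Compute an inverse-intensity illumination pattern from a prior
    image: bright regions receive dim illumination and vice versa,
    quantized to the DMD's 8-bit analog resolution (256 levels)."""
    prior = np.asarray(prior, dtype=float)
    prior = prior / prior.max()                     # normalize to [0, 1]
    inv = 1.0 / np.clip(prior, 1.0 / levels, 1.0)   # inverse weighting
    pattern = inv / inv.max()                       # rescale to [0, 1]
    return np.clip(np.round(pattern * (levels - 1)),
                   floor, levels - 1).astype(int)   # DMD index values
```

For a prior with bright, mid, and very dim pixels, the pattern assigns roughly inversely proportional DMD levels, so the product of object brightness and illumination lands within the camera's DR everywhere.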

To illustrate the strengths and general applicability of SI-HDR in multimodal imaging applications, we built a DMD-based widefield illumination system, as shown in Fig. 2. We employed high-power blue and green LEDs (SOLIS-470C, LED4D254, Thorlabs) with appropriate filter sets [ET525/50m, ZT488rdc-UF3, and ET500lp for green fluorescent protein (GFP) and yellow fluorescent protein (YFP), and ET539/21x, ZT568rdc-UF1, and FF01-607/70-25 for mCherry] for fluorescence imaging. A DMD (1920 × 1080 mirrors with 7.56 µm pitch, DLP6500EVM, Texas Instruments) was positioned at a conjugate image plane to directly modulate the illumination pattern intensity. The resulting fluorescence emission was detected by an sCMOS camera (2048 × 2048 pixels with 6.5 µm pitch, Zyla 4.2, Andor) with the PreAmpGainControl option set to low noise mode (12-bit). For diffraction-limited accuracy in modulating the SI-HDR illumination pattern, image registration was carried out to obtain pixel-level alignment between the DMD illumination pattern and the acquired images on the camera (Fig. S2).

FIG. 2.

Optical schematic of the SI-HDR imaging system. Illumination intensity patterns on a DMD were updated in real-time (60 Hz) to follow sample scanning. M, mirror; DMD, digital micro-mirror device; L, lens; TL, tube lens; DM, dichroic mirror; OBJ, objective lens; EF, emission filter. Focal lengths of L1, L2, TL1, and TL2 are 400, 750, 300, and 180 mm, respectively.


To highlight the quantitative nature of SI-HDR imaging, capturing both weak signal levels (fluorescence originally below the read noise floor) and strong intrinsic signal levels (fluorescence originally saturating the camera) in a single image, we first imaged mitochondria in fixed cultured cortical neurons. Neuronal mitochondria are distributed throughout the entire cell with varying morphological features in each domain, such as the soma, dendrites, and axons.35–37 In general, high densities of mitochondria with elongated shapes and large volumes are found in somas and proximal dendrites, generally <150 µm from the soma, which results in bright signal levels in fluorescence images. On the other hand, axonal and distal dendritic mitochondria, located >250 µm from the soma, are comparably shorter, have smaller volumes, and are sparsely located, thus emitting much lower levels of fluorescence.35–39 Recent studies have shown that the morphological structure of mitochondria along the neuron can help us understand the progression of neurodegenerative diseases, such as Alzheimer's and Parkinson's diseases.40–42 However, to simultaneously visualize the mitochondria over the entire cell, which emit different amounts of fluorescence, previous research resorted to higher illumination intensities or longer exposure times, which saturates the mitochondrial signal in the soma and dendrites.43,44

Here, we demonstrate that SI-HDR can address these difficulties and enable quantification of morphological subcellular features and the density of mitochondria in entire neurons. We performed SI-HDR on cultured cortical pyramidal neurons in which mitochondria were labeled with YFP. To distinguish axons from dendrites and verify the image enhancement, we also loaded the pyramidal neurons with mCherry as a cell filler so that axons could be identified by their thin processes and lack of spines. Figure 3 shows the imaging results for conventional imaging and SI-HDR. In conventional imaging [Fig. 3(a)], weak homogeneous illumination enables the detection of mitochondria in the soma and proximal dendrites, but fluorescence signals from axonal and distal dendritic mitochondria could not be sufficiently captured. On the other hand, strong homogeneous illumination successfully visualizes weakly fluorescing mitochondria in the axon and distal dendrites, as shown in Fig. 3(b), but results in saturation for mitochondria in the soma and proximal dendrite regions. In stark contrast, with SI-HDR, fluorescence signals can be quantitatively measured from all regions of the neuron regardless of the size and density of the mitochondria [Fig. 3(c)]. All results in Fig. 3 are shown after applying gamma correction by a factor of 0.3 for visibility on a standard monitor or printer. Large, densely populated mitochondria in the soma and smaller mitochondria sparsely populated in the axon and distal dendrites could be quantitatively observed simultaneously only in the SI-HDR images [Figs. 3(c) and 3(d)]. We used previously reported mitochondrial segmentation methods for automatic structure segmentation.18,45 Briefly, the images were normalized to values from 0 to 1 in double precision. The normalized images were then convolved with a Laplacian kernel to enhance the edges of fluorescent structures (mitochondria).
The boundaries of the mitochondria were then binarized using the Matlab function “imbinarize.” The segmentation results were finally checked by two neurobiologists to classify the respective locations in the soma, dendrites, and axon. During this step, additional manual segmentation of somatic mitochondria was performed because their irregular shapes were not successfully segmented by automatic image processing alone. Based on the segmentation, we found that mitochondria in all neuronal regions, including the soma, dendrites, and axons, were clearly identified only in SI-HDR images [Fig. 3(e)]. We next classified the segmented mitochondria with respect to the location of each individual mitochondrion and visualized them with different colors, as shown in Fig. 3(f). We could verify dense mitochondria with complex structures in the soma (cyan), sparsely distributed large mitochondria in dendrites (green), and thin, short mitochondria in axons (orange), corresponding to high, intermediate, and low signal intensities in SI-HDR images, respectively (signal levels defined as follows: high: 1–0.85, intermediate: 0.85–0.03, and low: <0.03, where 1 corresponds to the maximum measured intensity). Long, large mitochondria generally emit stronger fluorescence than short, tiny ones. Thus, conventional imaging using either weak or strong illumination resulted in lower counts of segmented mitochondria, undercounting the comparably smaller or larger ones, respectively [Fig. 3(g)]. In comparison, SI-HDR image analysis resulted in the highest total mitochondrial count, as all mitochondria were measured with sufficient signal-to-noise ratio (SNR) irrespective of location. We can also see that the total number of smaller mitochondria agrees well between the strongly illuminated and SI-HDR images, whereas the total number of larger mitochondria agrees well between the weakly illuminated and SI-HDR images, as expected. The number of intermediate-size mitochondria agreed among all images except the weak illumination image, which identified a slightly smaller number of mitochondria in the dendrites.
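The normalization, Laplacian edge enhancement, and binarization steps described above can be sketched in Python as follows (the paper uses Matlab's imbinarize; a fixed threshold stands in here as an assumption, as does the specific 4-neighbour Laplacian kernel):

```python
import numpy as np

def enhance_and_binarize(img, threshold=0.1):
    """Normalize an image to [0, 1], enhance edges with a 4-neighbour
    Laplacian, and binarize. A Python analog of the described Matlab
    pipeline; threshold choice is illustrative."""
    img = np.asarray(img, dtype=float)
    img = (img - img.min()) / (img.max() - img.min())   # normalize 0-1
    # Laplacian via padded shifts (edge replication at the borders)
    p = np.pad(img, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img)
    return np.abs(lap) > threshold                      # binary edge mask
```

On a uniform bright blob against a dark background, only the boundary pixels respond to the Laplacian, so the mask traces the outline of each fluorescent structure, which can then be labeled and counted.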

FIG. 3.

Mitochondrial segmentation in cultured neurons. Mitochondria labeled with YFP were imaged with varying illumination methods, while the neuronal cytoplasm labeled with mCherry was obtained using a single constant illumination and overlaid. Images obtained with homogeneous illumination at intensities (a) just below saturation and (b) above saturation, and (c) with SI-HDR. (d) Magnified mitochondria images and (e) segmentation results of the square regions marked in Fig. 3(c) with cyan and green dotted squares. Whereas low and high intensity homogeneous illuminations result in successful segmentation only for the somatic and distal dendrite regions, respectively, SI-HDR permits successful segmentation in all regions. (f) Mitochondria segmentation results from Fig. 3(c), classified by the location of each mitochondrion. (g) Segmented mitochondria counts per location. Respective signal intensity histograms in (h) linear and (i) log scales. Scalebar: 20 and 10 µm in (a)–(c) and (d), respectively.


We next quantified the signal intensity distributions of Figs. 3(a) and 3(c) in linear and log-scaled histograms, as shown in Figs. 3(h) and 3(i), respectively. In the linear-scaled histogram [Fig. 3(h)], seemingly little difference is observed between the two images. However, in the log-scaled histogram [Fig. 3(i)], we can see that the SI-HDR image has no saturated pixels and also contains a drastically larger number of pixels in the 0 to 5-bit signal range that were successfully measured above the noise floor (electronic read noise of 25) compared to conventional homogeneous illumination. Defining the dynamic range as 20 log10(Imax/Imin), where Imax and Imin are the maximum and minimum measurable values in an image, the dynamic range improved from 36.12 dB for conventional imaging to 78.27 dB for SI-HDR imaging of neuronal mitochondria. In addition, the higher SNR over the entire FOV results in successful segmentation of mitochondria across different neuronal regions, enabling quantification of the number, size, and signal intensity according to each mitochondrial location [Fig. 3(f)].
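The dynamic range definition above is straightforward to evaluate. A short sketch, where the illustrative noise-floor value of 64 ADU is our assumption chosen for the example, not a number quoted in the text:

```python
import math

def dynamic_range_db(i_max, i_min):
    """Dynamic range in decibels: 20 * log10(Imax / Imin),
    as defined in the text."""
    return 20.0 * math.log10(i_max / i_min)

# Illustrative: a 12-bit camera (max 4095 ADU) with an assumed effective
# noise floor of 64 ADU yields ~36.1 dB; a tenfold ratio adds 20 dB.
conv_db = dynamic_range_db(4095, 64)
```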

Investigating the morphofunctional connectivity of neuronal networks can provide important information for understanding the neuropathology of various neurodegenerative diseases and psychiatric disorders.46–48 Image analysis of brain slices from animal models is therefore a gold standard for understanding the structural and cellular/subcellular changes of the brain in neurological disorders.46–50 To improve the accuracy of connectivity analysis, correct morphological information about both neuronal processes and somas must be acquired simultaneously. The difficulty again arises from the fact that the limited DR of conventional detectors compromises accurate measurement of signals from such varying regions. The problem becomes even worse when imaging large volumes of brain tissue, as the cell type, shape, density, and resulting fluorescence yield vary widely across the brain. As it is not trivial to set an optimal illumination intensity or exposure time for the entire brain, common practice is to apply a high illumination flux to visualize weakly fluorescing structures, which saturates many brain areas. Furthermore, a long acquisition time is necessary for high-resolution gigapixel imaging of entire brain slices, which restricts the practical application of conventional HDR techniques due to their multi-acquisition nature.51–53

SI-HDR offers a great advantage for this type of task as the imaging speed is identical to conventional widefield imaging. Although SI-HDR requires a single prior image to identify the approximate fluorescence distribution of the object, subsequent imaging uses the preceding SI-HDR result to track the object of interest. In other words, an initial latency of one image is required, but the actual imaging speed thereafter is not compromised, which enables dynamic SI-HDR imaging. A similar principle was used to enable high-throughput SI-HDR mosaicking of multiple image tiles to image a whole brain slice. A sample stage (MS-2000, ASI) was used to stitch multiple FOVs. We found that while the sample stage moves to the next position of interest, it decelerates slightly before reaching and stopping at the final position. We exploited this time window during the “dead time” of sample stage scanning to realize the same imaging throughput as conventional widefield imaging. SI-HDR patterns were calculated from an image obtained in the short time window immediately before the sample stage stopped at each tile location (see the supplementary material, Fig. S6, for detailed information on the synchronization of the light source, sample stage, DMD, and camera). By taking advantage of the dead time during sample stage movement, SI-HDR mosaicking of an entire brain slice took exactly the same amount of time as stitching conventional widefield images, which is the fastest and easiest stitching approach to date.

Using SI-HDR, we imaged a 20 µm thick coronal section of a Thy1-YFP mouse brain that was additionally labeled with a GFP antibody to match our illumination source spectrum. The entire brain slice was imaged while SI-HDR automatically adapted to the neuromorphological features across the whole slice. We acquired the morphological features of the whole brain slice by combining 62 × 42 tiles in total. Each tile was composed of 1024 × 1024 pixels with an effective camera pixel size of 162.5 nm, which is below half of the diffraction limit (427 nm) of our system. The image tiles were stitched using the “Stitching” plugin in ImageJ.54 Whole brain SI-HDR imaging was realized without any human intervention, as shown in Fig. 4. The total pixel count of the stitched image was ∼1.8 gigapixels (50 983 × 34 603). As previously mentioned, the total measurement time was the same as that for mosaic stitching using conventional widefield imaging (500 ms for each image, resulting in a total acquisition time of 500 ms × 62 × 42 = 1302 s). However, the information content and quality of SI-HDR far exceed those achievable in conventional widefield imaging: fine structures, such as dendritic spines, that were obscured by noise can now be clearly seen. We can easily see the large variation in signal levels across the brain slice when illuminating the entire brain with constant homogeneous illumination; for example, somas in the primary somatosensory cortex and basolateral amygdala (BLA) regions show much stronger fluorescence than other regions [Fig. 4(a)] due to the difference in Thy1-YFP expression.55,56 Here, the incident light intensity was adjusted so that the maximum fluorescence at Thy1-expressing neurons in the BLA was measured just below saturation to fully utilize the camera dynamic range. Thus, degradation of the SNR is inevitable in other regions where weakly fluorescing structures are dominant.
In comparison, the image acquired using SI-HDR shows a homogeneous signal level across all regions of the entire mouse brain slice, as shown in Fig. 4(b). Therefore, all structures can be simultaneously measured up to the shot noise limit. The measured SI-HDR raw data, in which all measured data points are now boosted above the noise floor, are then quantitatively normalized by the SI illumination pattern [Fig. 4(c)] and Eq. (5), as shown in Fig. 4(d). Due to the limited dynamic range of modern displays, the SI-HDR image looks similar to the conventional uniform-illumination image on a linear scale. However, we could verify significant differences in the weakly fluorescing regions, such as dendritic spines in the neocortex [Fig. 4(e)], by selectively displaying pixels according to signal levels that were not measured in conventional imaging [pixels highlighted with the blue dotted square in Fig. 4(f)]. Magnified images of the regions marked by the red dotted squares in Fig. 4(e) were further compared by obtaining 9 µm thick z-stack volumes [maximum intensity projections (MIPs) shown in Figs. 4(g) and 4(h), respectively]. Both thin dendrites and tiny spines emitting weak fluorescence were clearly observed in SI-HDR images but were buried in noise under uniform illumination (supplementary material, Movie 1). We next performed 3D deconvolution using commercial deconvolution software (AutoQuantX2, Media Cybernetics, Inc.) to demonstrate the advantage in background signal suppression. Due to the low SNR, no significant improvement was obtained through deconvolution of the data acquired with uniform illumination [Fig. 4(i)]. In contrast, fine neuronal structures, originally hidden behind the out-of-focus blur, were clearly observed in the deconvolved SI-HDR image data [Fig. 4(j)].
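The throughput figures quoted above can be checked with a few lines of arithmetic (all input values taken from the text):

```python
# Mosaic bookkeeping for the whole-brain-slice acquisition
tiles = 62 * 42                    # number of stitched FOV tiles
t_total_s = tiles * 0.5            # 500 ms per tile -> total seconds
stitched_px = 50_983 * 34_603      # stitched image, ~1.76 gigapixels
nyquist_ok = 162.5 <= 427 / 2      # pixel size below half the diffraction limit
```

This reproduces the quoted 1302 s total acquisition time and confirms that the 162.5 nm effective pixel size satisfies Nyquist sampling for the 427 nm diffraction limit.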

FIG. 4.

SI-HDR whole mouse brain slice imaging. Raw data of a 20-µm-thick mouse brain slice acquired with (a) uniform illumination intensity and (b) SI-HDR illumination. (c) SI-HDR illumination pattern and (d) reconstructed HDR image using Figs. 4(b) and 4(c). (e) Signal-intensity-enhanced region (yellow) corresponding to the signal intensity range marked by the dotted blue square in the (f) log-scaled pixel histogram. (g) and (h) Magnified gamma (0.5) rescaled images of 9 µm thick volume MIPs of the red boxed regions in the primary somatosensory cortex in Fig. 4(e). (i) and (j) Deconvolution results of Figs. 4(g) and 4(h). Arrows are guides to the eye pointing to dendritic spines that are clearly observed with enhanced SNR. Scalebar: 1000 and 10 µm in (a), (b), and (d), and in (g)–(j), respectively.


Although we demonstrated that SI-HDR can effectively increase the DR for thin biological samples such as cultured neurons or thin brain slices, SI-HDR itself does not have depth-sectioning capabilities, which are required for volumetric imaging where the out-of-focus background degrades the information content. This can also be seen in Fig. 4, where the out-of-focus background reduces the image contrast even for the relatively thin 20-µm brain slice. Fortunately, SI-HDR can be applied to SI or widefield imaging methods in general. Exploiting this general applicability, we realized depth-selective HDR imaging by adapting the concepts of HiLo microscopy into SI-HDR HiLo microscopy.32–34 In brief, the sinusoidal illumination pattern of HiLo was additionally modulated to adapt to the sample fluorescence intensity distribution (see Fig. S4). Ideally, the acquired image is a perfect grid pattern, which is then processed using the SI-HDR HiLo illumination pattern to obtain the true data used for HiLo reconstruction. Experimentally, we used light intensities corresponding to DMD index values between 8 and 128 for volumetric imaging using HiLo structured illumination. This intensity range was chosen empirically to reduce photobleaching of out-of-focus planes while maintaining sufficient modulation contrast even in the most weakly illuminated regions.
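A sketch of how the HiLo grid could be combined with the adaptive SI-HDR level while respecting the empirically chosen DMD bounds (the function and parameter names are hypothetical; only the 8–128 index range comes from the text):

```python
import numpy as np

def sihdr_hilo_pattern(adaptive, period_px=8, lo=8, hi=128):
    """Sinusoidal HiLo grid scaled by a per-pixel adaptive SI-HDR level.

    adaptive : desired relative illumination per pixel (0..1)
    lo, hi   : DMD gray-level bounds (the 8..128 range quoted in the text)
    """
    h, w = adaptive.shape
    x = np.arange(w)
    grid = 0.5 * (1 + np.sin(2 * np.pi * x / period_px))  # 0..1 sinusoid
    pattern = adaptive * grid[None, :] * hi
    # Clamp to the floor so even minimally illuminated regions retain the
    # modulation contrast needed for HiLo reconstruction.
    return np.clip(np.round(pattern), lo, hi).astype(np.uint8)
```

The floor at index 8 trades a little modulation depth in dim regions for a usable signal everywhere, matching the compromise described in the text.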

To verify the enhancement in depth-sectioning capability, we prepared a 100-µm-thick slice of GFP-stained mouse brain and imaged a 990 × 990 µm² area in the hippocampus. For comparison, we obtained images with conventional widefield imaging [Fig. 5(a)], HiLo [Fig. 5(b)], and SI-HDR HiLo [Fig. 5(c)]. As expected, conventional widefield imaging suffers from both limited DR and out-of-focus background [Figs. 5(a) and 5(d)]. HiLo suppresses the out-of-focus background but still suffers from limited DR, with clear failures wherever the signal is saturated or weak: both saturation and low signal levels corrupt the sinusoidal intensity modulation required for HiLo reconstruction, so structures at different depths cannot be discriminated from noisy or saturated data. In contrast, SI-HDR HiLo [Figs. 5(c) and 5(f)] correctly depth-sections structures that conventional HiLo failed to detect (supplementary material, Movie 2).
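For readers unfamiliar with HiLo, the fusion that saturated or noisy modulation breaks can be sketched as follows (a simplified stand-in using box blurs instead of the band-pass filters of Refs. 32–34; all names are illustrative):

```python
import numpy as np

def box_blur(img, k):
    """Simple separable box blur, standing in for the Gaussian/band-pass
    filters of a full HiLo implementation."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)

def hilo(uniform_img, structured_img, k=7, eta=1.0):
    """Minimal HiLo-style fusion: high frequencies come from the uniform
    image; low frequencies come from the uniform image weighted by the
    local modulation contrast of the structured image, which marks
    in-focus regions (out-of-focus light washes the grid out)."""
    diff = structured_img - uniform_img
    # Local standard deviation of the difference ~ modulation contrast.
    local_var = box_blur(diff**2, k) - box_blur(diff, k) ** 2
    contrast = np.sqrt(np.clip(local_var, 0, None)) / (uniform_img + 1e-6)
    lo_part = box_blur(contrast * uniform_img, k)     # sectioned low frequencies
    hi_part = uniform_img - box_blur(uniform_img, k)  # unsectioned high frequencies
    return hi_part + eta * lo_part
```

Because `contrast` is estimated from the grid modulation, any pixel that is saturated or below the noise floor yields a wrong weight, which is exactly the failure mode SI-HDR HiLo avoids.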

FIG. 5.

SI-HDR HiLo for high-resolution depth-sectioned imaging. 100-µm-thick opaque mouse brain slice images captured by (a) WF, (b) HiLo, and (c) SI-HDR HiLo. (d)–(f) Magnified images of the corresponding highlighted ROIs. Scale bars: 100 µm in (a)–(c) and 20 µm in (d)–(f).


Encouraged by the enhanced depth-sectioning results, we next performed 3D HDR imaging. Here, the SI pattern was automatically adapted at each depth based on a single prior widefield image. This procedure was again performed while the sample stage was moving, so the total 3D volume acquisition time was identical to that of conventional HiLo imaging. 3D imaging was performed on a 120-µm-thick coronal section of the hippocampal area of an optically cleared Thy1-YFP mouse brain (see the section on sample preparation in the supplementary material). To compare the 3D reconstructions, we obtained conventional HiLo images using homogeneous illumination at a relatively low intensity just below saturation [Fig. 6(a)] and SI-HDR HiLo images [Fig. 6(b)] in 500 nm z-steps using a piezo sample stage (minimum step size: 7.6 nm). The 3D reconstructions were rendered using the commercial software Imaris.
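One way the per-depth pattern update could work is sketched below; this is our hypothetical illustration (the function name, `target`, and `full_well` are assumptions), not the authors' published algorithm:

```python
import numpy as np

def adaptive_pattern(widefield, target=0.8, full_well=65535.0):
    """Derive a per-pixel attenuation from one prior widefield frame:
    bright pixels are pulled down toward a target fraction of the
    detector's full-well capacity, while already-dim pixels keep full
    illumination (attenuation factor 1.0)."""
    desired = target * full_well
    atten = desired / np.maximum(widefield.astype(float), 1.0)
    return np.clip(atten, 0.0, 1.0)
```

Because the pattern is derived from a frame captured while the stage is still moving to the next plane, this update adds no time to the acquisition loop.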

FIG. 6.

Depth-sectioned imaging with HiLo microscopy. 3D reconstructions of hippocampal pyramidal neurons in 120-µm-thick coronal sections of optically cleared mouse brain obtained by (a) conventional HiLo and (b) SI-HDR HiLo. Maximum intensity projection (MIP) images of 40-µm-thick volumes descending from the surface measured by (c)–(e) conventional HiLo and (f)–(h) SI-HDR HiLo. Scale bar: 20 µm.


Optically sectioned image blocks of 40 µm thickness centered at the corresponding depths (20, 60, and 100 µm) are shown in Figs. 6(c)–6(h) and supplementary material, Movie 3. In each corresponding tissue block, a large proportion of the highly varying complex structures across the FOV were identifiable only in the SI-HDR HiLo results [Figs. 6(f)–6(h)]. In general, HiLo microscopy is ∼10 times faster than conventional confocal microscopy, which is currently the most widely used method for optically sectioned 3D imaging.32 In confocal microscopy, DR limitations are actually more severe owing to the narrower DR of photomultiplier tubes compared with sCMOS sensors. Commercial confocal microscopes therefore often offer HDR imaging options that take multiple image acquisitions with different illumination intensities or different pixel dwell times; this, however, further slows the already slow frame rate and increases photobleaching. In our current demonstration, the 3D image stack acquired with SI-HDR HiLo takes the same amount of time as conventional HiLo imaging (500 ms to obtain two images per depth plane, for a total of 120 s to acquire 241 slices). We expect SI-HDR to be especially useful for fast high-resolution imaging of large optically cleared tissues, which are rapidly becoming available but are difficult to image with laser scanning optical sectioning methods due to their slow speed.
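The quoted throughput can be checked with simple arithmetic (values taken from the text; 241 × 0.5 s gives 120.5 s, i.e., ≈120 s as stated):

```python
# Back-of-the-envelope check of the quoted 3D acquisition time:
# two images (uniform + structured) per depth plane, 500 ms per plane.
seconds_per_plane = 0.5
n_slices = 241
total_seconds = n_slices * seconds_per_plane  # ~120 s for the full stack
```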

We demonstrated SI-HDR microscopy, which expands the DR by modulating the incident light into arbitrary high-resolution illumination patterns customized to the object of interest, requiring only the addition of a DMD to a conventional widefield microscope. Extended-DR imaging was demonstrated on cultured neurons and on thin and thick mouse brain slices with highly variable fluorescence distributions. We found the method especially useful for high-throughput, high-dynamic-range microscopy, such as large-FOV stitching or large-volume 3D imaging: the SI-HDR illumination pattern could be computed during the sample stage movement cycle, keeping the total acquisition time the same as in conventional widefield imaging.

Although our current implementation focuses on high-throughput HDR imaging of static samples, the method can also be applied to dynamic functional imaging. For example, in calcium imaging, detecting a single action potential is difficult because of the small increase in fluorescence signal. Since SI-HDR can adapt the illumination pattern on millisecond time scales, including all processing steps (camera acquisition, pattern calculation, and pattern illumination), it can potentially follow the transient calcium dynamics of a single action potential. Successful realization of such a scheme could help identify neuronal signals that were previously undetectable. In our current demonstration, the SI-HDR acquisition speed was limited by the DMD refresh rate: for the DMD used in this work, only a fixed set of refresh rates was selectable in continuous streaming mode. The streaming frequency could be set to 60 or 120 Hz for the external display, while the maximum frame rate of the camera was 100 Hz (for full-frame imaging). The camera's temporal resolution therefore had to be reduced to 60 Hz to match the DMD refresh rate. As this limitation stems from the control software rather than the DMD hardware, more sophisticated DMD control methods should in principle allow various refresh rates to match future dynamic application requirements.
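The frame-rate compromise described above reduces to picking the fastest DMD streaming rate the camera can follow; a trivial sketch with the stated numbers:

```python
# The camera exposure must stay in lockstep with the DMD refresh. With the
# DMD streaming modes fixed at 60 or 120 Hz and the camera capped at 100 Hz
# for full-frame readout, the fastest rate both devices support is 60 Hz.
dmd_rates_hz = (60, 120)
camera_max_hz = 100
effective_hz = max(r for r in dmd_rates_hz if r <= camera_max_hz)
```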

Our current implementation of SI-HDR realizes pixel-by-pixel alignment of the DMD and the sCMOS camera. We anticipate that further adaptations to other SI imaging modalities may be possible, for instance, to increase resolution by employing super-resolution SIM57–60 or to enable hyperspectral imaging61 capabilities. As we demonstrated with HiLo, super-resolution SIM is also known to be prone to artifacts related to low signal levels and could likewise benefit from SI-HDR. Considering the multiple demonstrated applications, SI-HDR is general and simple to implement, which should enable wide dissemination of the method across a broad range of applications. We envision that SI-HDR can open a new window for the discovery of biological phenomena that were previously unobservable due to hardware DR limitations.

See the supplementary material for further system details, validation figures, and videos.

This work was supported by the National Research Foundation of Korea (2017M3C7A1044966, 2019M3E5D2A01063812, 2019M3E5D2A01063794, 2020R1A6A3A01097999, 2021R1A2C3012903, and 2021R1A4A1031644) and the TJ Park Foundation.

The authors declare no conflicts of interest.

T.W. and J.P. designed the project; T.W. built the instrument and performed experiments; H.K., S.Y.K., S.K.K., and J.I.K. provided mouse brain samples and guidance on imaging and analysis; B.H. and C.A. provided analytical tools; T.W. and J.P. analyzed the data and wrote the manuscript with input from all authors; J.P. conceived and supervised the project.

Taeseong Woo: Data curation (lead); Formal analysis (lead); Funding acquisition (supporting); Investigation (lead); Methodology (equal); Visualization (lead); Writing – original draft (equal). Hye Yun Kim: Methodology (equal); Writing – original draft (supporting). Su Yeon Kim: Methodology (supporting). Byungjae Hwang: Investigation (supporting); Methodology (supporting). Cheolwoo Ahn: Investigation (supporting); Methodology (supporting). Seok-Kyu Kwon: Funding acquisition (supporting); Methodology (supporting); Writing – original draft (supporting). Jae-Ick Kim: Funding acquisition (supporting); Methodology (supporting); Writing – original draft (supporting). Jung-Hoon Park: Conceptualization (lead); Data curation (equal); Funding acquisition (lead); Investigation (lead); Methodology (lead); Project administration (lead); Resources (lead); Supervision (lead); Writing – original draft (lead); Writing – review & editing (lead).

All animal experimental procedures were conducted in accordance with protocols approved by the Institutional Animal Care and Utilization Committee of UNIST.

The data that support the findings of this research are available from the corresponding author upon reasonable request.

1. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of ACM SIGGRAPH 1997 (ACM Press/Addison-Wesley Publishing Co., 1997), pp. 369–378.
2. S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, "High dynamic range video," ACM Trans. Graphics 22, 319–325 (2003).
3. T. Jinno and M. Okuda, "Multiple exposure fusion for high dynamic range image acquisition," IEEE Trans. Image Process. 21, 358–365 (2012).
4. S. Li, "Fast multi-exposure image fusion with median filter and recursive filter," IEEE Trans. Consumer Electron. 58, 626–632 (2012).
5. I. Merianos and N. Mitianoudis, "Multiple-exposure image fusion for HDR image synthesis using learned analysis transformations," J. Imaging 5, 32 (2019).
6. S. K. Nayar and T. Mitsunaga, "High dynamic range imaging: Spatially varying pixel exposures," in Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000) (IEEE, 2000).
7. S. K. Nayar and V. Branzoi, "Adaptive dynamic range imaging: Optical control of pixel exposures over space and time," in Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV '03), 2003.
8. J. W. Lichtman and J.-A. Conchello, "Fluorescence microscopy," Nat. Methods 2, 910–919 (2005).
9. T.-W. Chen et al., "Ultrasensitive fluorescent proteins for imaging neuronal activity," Nature 499, 295–300 (2013).
10. H. Dana et al., "High-performance calcium sensors for imaging activity in neuronal populations and microcompartments," Nat. Methods 16, 649–657 (2019).
11. N. C. Shaner et al., "Improving the photostability of bright monomeric orange and red fluorescent proteins," Nat. Methods 5, 545–551 (2008).
12. R. Rizzuto et al., "Double labelling of subcellular structures with organelle-targeted GFP mutants in vivo," Curr. Biol. 6, 183–188 (1996).
13. R. A. Hoebe et al., "Controlled light-exposure microscopy reduces photobleaching and phototoxicity in fluorescence live-cell imaging," Nat. Biotechnol. 25, 249–253 (2007).
14. R. A. Hoebe, H. T. M. Van der Voort, C. J. F. Van Noorden, and E. M. M. Manders, "Quantitative determination of the reduction of phototoxicity and photobleaching by controlled light exposure microscopy," J. Microsc. 231, 9–20 (2008).
15. R. Yang, T. D. Weber, E. D. Witkowski, I. G. Davison, and J. Mertz, "Neuronal imaging with ultrahigh dynamic range multiphoton microscopy," Sci. Rep. 7, 5817 (2017).
16. K. K. Chu, D. Lim, and J. Mertz, "Practical implementation of log-scale active illumination microscopy," Biomed. Opt. Express 1, 236–245 (2010).
17. K. K. Chu, D. Lim, and J. Mertz, "Enhanced weak-signal sensitivity in two-photon microscopy by adaptive illumination," Opt. Lett. 32, 2846–2848 (2007).
18. C. Vinegoni et al., "Real-time high dynamic range laser scanning microscopy," Nat. Commun. 7, 11077 (2016).
19. C. Vinegoni, P. F. Feruglio, and R. Weissleder, "High dynamic range fluorescence imaging," IEEE J. Sel. Top. Quantum Electron. 25, 1–7 (2018).
20. P. Feruglio, C. Vinegoni, and R. Weissleder, "Extended dynamic range imaging for noise mitigation in fluorescence anisotropy imaging," J. Biomed. Opt. 25, 086003 (2020).
21. P. T. Van, V. Bass, D. Shiwarski, F. Lanni, and J. Minden, "High dynamic range proteome imaging with the structured illumination gel imager," Electrophoresis 35, 2642–2655 (2014).
22. L. Schermelleh, R. Heintzmann, and H. Leonhardt, "A guide to super-resolution fluorescence microscopy," J. Cell Biol. 190, 165–175 (2010).
23. A. A. Adeyemi, N. Barakat, and T. E. Darcie, "Applications of digital micro-mirror devices to digital optical microscope dynamic range enhancement," Opt. Express 17, 1831–1843 (2009).
24. S. K. Nayar and V. Branzoi, "Adaptive dynamic range imaging: Optical control of pixel exposures over space and time," (IEEE, 2003), pp. 1168–1175.
25. S. K. Nayar, V. Branzoi, and T. E. Boult, in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004) (IEEE, 2004).
26. K. Nam and J.-H. Park, "Increasing the enhancement factor for DMD-based wavefront shaping," Opt. Lett. 45, 3381–3384 (2020).
27. D. B. Conkey, A. M. Caravaca-Aguirre, and R. Piestun, "High-speed scattering medium characterization with application to focusing light through turbid media," Opt. Express 20, 1733–1740 (2012).
28. M. Abolbashari, F. Farahi, F. Magalhaes, F. M. Araújo, and M. V. Correia, "High dynamic range compressive imaging: A programmable imaging system," Opt. Eng. 51, 071407 (2012).
29. S.-B. Zhao, L.-Y. Liu, and M.-Y. Ma, "Adaptive high-dynamic range three-dimensional shape measurement using DMD camera," IEEE Access 7, 67934–67943 (2019).
30. O. Bimber, D. Klöck, T. Amano, A. Grundhöfer, and D. Kurz, "Closed-loop feedback illumination for optical inverse tone-mapping in light microscopy," IEEE Trans. Visualization Comput. Graphics 17, 857–870 (2010).
31. Y. Qiao, X. Xu, T. Liu, and Y. Pan, "Design of a high-numerical-aperture digital micromirror device camera with high dynamic range," Appl. Opt. 54, 60–70 (2015).
32. D. Lim, T. N. Ford, K. K. Chu, and J. Mertz, "Optically sectioned in vivo imaging with speckle illumination HiLo microscopy," J. Biomed. Opt. 16, 016014 (2011).
33. J. Schniete et al., "Fast optical sectioning for widefield fluorescence mesoscopy with the mesolens based on HiLo microscopy," Sci. Rep. 8, 16259 (2018).
34. Q. Zhang, D. Pan, and N. Ji, "High-resolution in vivo optical-sectioning widefield microendoscopy," Optica 7, 1287–1290 (2020).
35. Z. Li, K.-I. Okamoto, Y. Hayashi, and M. Sheng, "The importance of dendritic mitochondria in the morphogenesis and plasticity of spines and synapses," Cell 119, 873–887 (2004).
36. M. E. Chicurel and K. M. Harris, "Three-dimensional analysis of the structure and composition of CA3 branched dendritic spines and their synaptic relationships with mossy fiber boutons in the rat hippocampus," J. Comp. Neurol. 325, 169–182 (1992).
37. T. L. Lewis, S. K. Kwon, A. Lee, R. Shaw, and F. Polleux, "MFF-dependent mitochondrial fission regulates presynaptic release and axon branching by limiting axonal mitochondria size," Nat. Commun. 9, 5008 (2018).
38. A. S. Dickey and S. Strack, "PKA/AKAP1 and PP2A/Bβ2 regulate neuronal morphogenesis via Drp1 phosphorylation and mitochondrial bioenergetics," J. Neurosci. 31, 15716–15726 (2011).
39. T. L. Lewis, Jr., J. Courchet, and F. Polleux, "Cellular and molecular mechanisms underlying axon formation, growth, and branching," J. Cell Biol. 202, 837–848 (2013).
40. M. H. Yan, X. Wang, and X. Zhu, "Mitochondrial defects and oxidative stress in Alzheimer disease and Parkinson disease," Free Radicals Biol. Med. 62, 90–101 (2013).
41. D. J. Bonda et al., "Neuronal failure in Alzheimer's disease: A view through the oxidative stress looking-glass," Neurosci. Bull. 30, 243–252 (2014).
42. S. Arun, L. Liu, and G. Donmez, "Mitochondrial biology and neurological diseases," Curr. Neuropharmacol. 14, 143–154 (2016).
43. B. H. Varkuti et al., "Neuron-based high-content assay and screen for CNS active mitotherapeutics," Sci. Adv. 6, eaaw8702 (2020).
44. A. Sorvina et al., "Mitochondrial imaging in live or fixed tissues using a luminescent iridium complex," Sci. Rep. 8, 8191 (2018).
45. R. J. Giedt, D. R. Pfeiffer, A. Matzavinos, C.-Y. Kao, and B. R. Alevriadou, "Mitochondrial dynamics and motility inside living vascular endothelial cells: Role of bioenergetics," Ann. Biomed. Eng. 40, 1903–1916 (2012).
46. E. Danielson and S. H. Lee, "SynPAnal: Software for rapid quantification of the density and intensity of protein puncta from fluorescence microscopy images of neurons," PLoS One 9, e115298 (2014).
47. P. Verstraelen et al., "Image-based profiling of synaptic connectivity in primary neuronal cell culture," Front. Neurosci. 12, 389 (2018).
48. J. C. Fiala, J. Spacek, and K. M. Harris, "Dendritic spine pathology: Cause or consequence of neurological disorders?," Brain Res. Rev. 39, 29–54 (2002).
49. D. Zhang et al., "Automated 3D soma segmentation with morphological surface evolution for neuron reconstruction," Neuroinformatics 16, 153–166 (2018).
50. W. C. Risher, T. Ustunkaya, J. Singh Alvarado, and C. Eroglu, "Rapid Golgi analysis method for efficient and unbiased classification of dendritic spines," PLoS One 9, e107591 (2014).
51. A. Orth and K. B. Crozier, "High throughput multichannel fluorescence microscopy with microlens arrays," Opt. Express 22, 18101–18112 (2014).
52. J. Chalfoun et al., "MIST: Accurate and scalable microscopy image stitching tool with stage modeling and error minimization," Sci. Rep. 7, 4988 (2017).
53. A. Orth, M. J. Tomaszewski, R. N. Ghosh, and E. Schonbrun, "Gigapixel multispectral microscopy," Optica 2, 654–662 (2015).
54. S. Preibisch, S. Saalfeld, and P. Tomancak, "Globally optimal stitching of tiled 3D microscopic image acquisitions," Bioinformatics 25, 1463–1465 (2009).
55. G. Feng et al., "Imaging neuronal subsets in transgenic mice expressing multiple spectral variants of GFP," Neuron 28, 41–51 (2000).
56. C. Porrero, P. Rubio-Garrido, C. Avendaño, and F. Clascá, "Mapping of fluorescent protein-expressing neurons and axon pathways in adult and developing Thy1-eYFP-H transgenic mice," Brain Res. 1345, 59–72 (2010).
57. T. Woo et al., "Tunable SIM: Observation at varying spatiotemporal resolutions across the FOV," Optica 7, 973 (2020).
58. M. Saxena, G. Eluru, and S. S. Gorthi, "Structured illumination microscopy," Adv. Opt. Photonics 7, 241 (2015).
59. R. Heintzmann and T. Huser, "Super-resolution structured illumination microscopy," Chem. Rev. 117, 13890–13908 (2017).
60. X. Huang et al., "Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy," Nat. Biotechnol. 36, 451–459 (2018).
61. A. D. Chandra, M. Karmakar, D. Nandy, and A. Banerjee, "Adaptive hyperspectral imaging using structured illumination in a spatial light modulator-based interferometer," Opt. Express 30, 19930 (2022).

Supplementary Material