In multicolor fluorescence microscopy, it is crucial to orient biological structures at single-cell resolution based on precise anatomical annotations of cytoarchitecture images. However, during synchronous multicolor imaging, spectral mixing causes crosstalk from the blue signals of 4′,6-diamidino-2-phenylindole (DAPI)-stained cytoarchitecture images into the green waveband, which hinders the visualization and identification of green signals. Here, we propose a deep learning-based framework named the crosstalk elimination and cytoarchitecture enhancement pipeline (CECEP) to simultaneously acquire crosstalk-free signals in the green channel and high-contrast DAPI-stained cytoarchitecture images during multicolor fluorescence imaging. Within CECEP, we propose an unsupervised learning algorithm named the cytoarchitecture enhancement network (CENet), which increases the signal-to-background ratio (SBR) of the cytoarchitecture images from 1.5 to 15.0 at a reconstruction speed of 25 Hz for 1800 × 1800 pixel images. The CECEP network is widely applicable to images of different quality, different tissue types, and different multicolor fluorescence microscopes. In addition, the CECEP network can facilitate various downstream analysis tasks, such as cell recognition, structure tensor calculation, and brain region segmentation. With the CECEP network, we simultaneously acquired two specific fluorescence-labeled neuronal distributions and their colocated high-SBR cytoarchitecture images without crosstalk throughout the brain. Experimental results demonstrate that our method could facilitate multicolor fluorescence imaging applications in biology, such as revealing and visualizing different types of biological structures with precise locations and orientations.

Imaging cells of interest together with cytoarchitectures labeled in different colors via multicolor fluorescence microscopy is crucial for studying the relationships and interactions among various cells and other components.1–7 Since green and red fluorescent proteins are generally bright and widely used, the corresponding cytoarchitectures are frequently acquired in the blue waveband with 4′,6-diamidino-2-phenylindole (DAPI) staining.8–11 However, because part of the DAPI emission spectrum lies in the green band, crosstalk from the DAPI signals interferes with the identification of green signals during simultaneous multicolor imaging. An alternating multicolor imaging strategy has been developed to prevent crosstalk, but it also increases the imaging time,12 which is not acceptable for large-volume sample imaging. Another alternative is to calculate the color spread matrix, which quantifies how much each color spreads into the other color bands; however, this method is susceptible to noise, and residual crosstalk may remain.13 In addition, separating the imaging focal planes of the blue and green channels by exploiting the limited penetration depth of DAPI signals and the axial chromatic aberrations of the optical system can achieve crosstalk-free imaging.14 However, the optical configuration must be modified, and the optical sectioning effect of the DAPI signals is reduced. Thus, a highly robust and universal method to eliminate the crosstalk from DAPI signals in simultaneous multicolor imaging is still lacking.

The intensity of the crosstalk among DAPI signals in the green channel is ∼30% of the signal intensity in the blue channel. Therefore, the crosstalk among DAPI signals can be eliminated by gradually reducing the concentration of DAPI staining until the crosstalk intensity is similar to that of the autofluorescence background in the green channel. Although this method is easier to implement than the aforementioned crosstalk elimination methods, the signal-to-background ratio (SBR) of the DAPI signals is reduced. To address this issue, the affected DAPI signals must be enhanced. There are many classic image enhancement methods, such as contrast-limited adaptive histogram equalization (CLAHE)15 and adaptive contrast enhancement (ACE).16 However, these algorithms are based on local greyscale adjustment and may enhance noise and cause stitching artifacts for images with low SBRs. As a leading data-driven approach, deep learning has been demonstrated to be more effective and applicable than traditional methods for contrast enhancement17–21 and other image improvements22–24 in fluorescence microscopy. One common training strategy for contrast enhancement methods is supervised learning based on a large set of well-registered images with low and high SBRs. However, this training method requires a considerable number of training samples and imaging time, as well as methods for achieving accurate registration. To simplify the acquisition of paired training data, a content-aware image restoration (CARE) technique used low-frequency Perlin noise similar to background fluorescence convolved with the point spread function of the microscope to simulate low SBR images.18 However, these synthetic paired training data were generated with flawed simulations and sometimes did not generalize well to real low SBR images. 
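To make the limitation of greyscale-adjustment methods concrete, the following minimal numpy sketch implements plain global histogram equalization. This is not the CLAHE or ACE implementation used in the comparison (CLAHE additionally works per tile with a clip limit), but even this simplest form illustrates why remapping grey levels lifts the background together with the signal in low-SBR images.

```python
import numpy as np

def equalize_hist_16bit(img):
    """Global histogram equalization for a 16-bit image.

    CLAHE applies the same idea per tile with a clip limit; even this
    simplest global form remaps background grey levels upward together
    with the signal, which is the failure mode discussed above.
    """
    n_levels = 65536
    hist, _ = np.histogram(img.ravel(), bins=n_levels, range=(0, n_levels))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize so the brightest occupied level maps to 1.0
    return (cdf[img.astype(np.int64)] * (n_levels - 1)).astype(np.uint16)
```

Applied to a dim, low-dynamic-range patch, the mapping stretches both background and cells to the full 16-bit range, so the SBR itself does not improve.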
Recently, the development of the cycle-consistent generative adversarial network (CycleGAN) has enabled the training of neural networks using unpaired data.25 However, the network, which uses only the weak constraint introduced by cycle consistency, is prone to generate noise artifacts and cause structural distortion in the output images when directly learning the image transformation from the low-SBR domain to the high-SBR domain, introducing risks in real biomedical applications.

Here, we present a two-stage deep learning approach known as the cytoarchitecture enhancement network (CENet) to improve the SBR of cytoarchitecture images with fast inference speed and high reconstruction fidelity. We employed unpaired low-SBR DAPI-stained cytoarchitecture images and high-SBR cytoarchitecture images for training. To address the issues of noise artifacts encountered with previous unsupervised networks, during the training of CENet, an unsupervised learning method is used to simulate low-SBR cytoarchitecture images based on existing high-SBR cytoarchitecture images; then, a supervised training strategy is employed to impose a strong constraint on the enhancement results to ensure high-fidelity reconstruction results. Our method enables an increase in the SBR of DAPI-stained cytoarchitecture images from 1.5 to 15.0 at a reconstruction speed of 25 Hz for 1800 × 1800 pixel images. Using this method, we proposed a crosstalk elimination and cytoarchitecture enhancement pipeline (CECEP) to eliminate DAPI crosstalk in synchronous multicolor fluorescence microscopy. With our pipeline, we directly reduced the concentration of DAPI staining while enhancing the affected DAPI-stained cytoarchitecture images with CENet. We validated the reliability and effectiveness of the CECEP method for different samples and imaging systems and demonstrated its benefits to downstream automatic image analysis tasks, such as cell segmentation, recognition, and structure tensor calculations. With the CECEP network, we simultaneously acquired green and red fluorescently labeled neuron signals with precise anatomical annotations throughout the brain, which enabled us to analyze the interrelationship between different types of neurons and their circuits in the whole brain.

The CECEP framework is shown in Fig. 1, which consists of three steps. The first step is to identify the appropriate concentration of DAPI staining. As the concentration is reduced, the crosstalk among the DAPI signals is gradually attenuated to the intensity of the autofluorescence background and becomes negligible in the green channel [Fig. 1(a)]. However, this approach also reduces the signal quality of the DAPI-stained cytoarchitecture images acquired in the blue channel. Therefore, the second step is to train a network to enhance the affected cytoarchitecture images. The training mode and workflow of CENet are illustrated in Fig. 1(b); low-SBR DAPI cytoarchitecture images are used as the input data and unpaired high-SBR cytoarchitecture images (DAPI- or PI-stained cytoarchitecture images with satisfactory contrast) as the target data to train the CENet model. In this study, we found that the forward network of CycleGAN for learning image degradation was better and more stable than the backward network for image enhancement (Fig. S1). Therefore, we employed CycleGAN to learn the transformation from the high-SBR cytoarchitecture images to the low-SBR cytoarchitecture images rather than directly reconstructing the high-SBR cytoarchitecture images. With the realistic low-SBR degraded images, we acquired pairs of simulated low-SBR and real high-SBR cytoarchitecture images as training data and employed a supervised training scheme to impose strong constraints on the enhancement results, thereby guaranteeing that the output results did not contain artifacts.

FIG. 1.

The scheme of the CECEP framework. (a) Eliminating the crosstalk among the DAPI signals by reducing the concentration of DAPI staining. (b) Training a cytoarchitecture enhancement network (CENet). (c) Online enhancement of cytoarchitecture images during the imaging process. Scale bar: 50 μm (a).

The training workflow of CENet consists of two stages. In the first stage, a low-SBR image degradation model is implemented, and in the second stage, a cytoarchitecture image enhancement model is implemented. For the degradation modeling stage, the CycleGAN network structure consists of two generators and corresponding discriminators (Fig. S2). Herein, the forward generator GA was trained to produce authentic images that matched the statistical properties of the low-SBR cytoarchitecture domain. In contrast, the backward generator GB was trained to produce images that matched the properties of the high-SBR cytoarchitecture domain. CycleGAN utilizes adversarial losses and cycle consistency to ensure a stable transformation from the low-SBR domain to the high-SBR domain. The adversarial loss is a perceptual loss function that causes the generators to produce more realistic results. The loss function of this adversarial training framework LGAN can be defined as
$$
\begin{aligned}
L_{\mathrm{GAN}} ={}& L_G + L_D,\\
L_G ={}& \frac{1}{N}\sum_{i=1}^{N}\bigl\|D_A\bigl(G_A(x_i)\bigr)-1\bigr\|^2 + \frac{1}{N}\sum_{i=1}^{N}\bigl\|D_B\bigl(G_B(y_i)\bigr)-1\bigr\|^2,\\
L_D ={}& \frac{1}{N}\sum_{i=1}^{N}\bigl\|D_A\bigl(G_A(x_i)\bigr)-0\bigr\|^2 + \frac{1}{N}\sum_{i=1}^{N}\bigl\|D_A(y_i)-1\bigr\|^2\\
&+ \frac{1}{N}\sum_{i=1}^{N}\bigl\|D_B\bigl(G_B(y_i)\bigr)-0\bigr\|^2 + \frac{1}{N}\sum_{i=1}^{N}\bigl\|D_B(x_i)-1\bigr\|^2,
\end{aligned}
$$
(1)
where LGAN consists of the generator loss LG and the discriminator loss LD. GA and GB represent the forward and backward generators, respectively. Per Eq. (1), DA is the discriminator for the low-SBR domain, distinguishing real low-SBR patches yi from synthetic ones GA(xi), and DB is the discriminator for the high-SBR domain, distinguishing real high-SBR patches xi from GB(yi). xi and yi are the image patches in the high-SBR domain (with reduced intensity) and the low-SBR domain, respectively. Note that the high-SBR cytoarchitecture domain sometimes has a higher intensity than the low-SBR domain; to help the network learn, we reduce the intensity of the high-SBR images by multiplying them by a coefficient so that their intensity is similar to that of the low-SBR domain. In addition to the adversarial training loss, we employed the cycle-consistency loss LCycle and the identity loss LIden, which prevent the generators from producing convincing but irrelevant images and are defined as
$$
\begin{aligned}
L_{\mathrm{Cycle}} &= \frac{1}{N}\sum_{i=1}^{N}\bigl\|G_B\bigl(G_A(x_i)\bigr)-x_i\bigr\|_1 + \frac{1}{N}\sum_{i=1}^{N}\bigl\|G_A\bigl(G_B(y_i)\bigr)-y_i\bigr\|_1,\\
L_{\mathrm{Iden}} &= \frac{1}{N}\sum_{i=1}^{N}\bigl\|G_A(y_i)-y_i\bigr\|_1 + \frac{1}{N}\sum_{i=1}^{N}\bigl\|G_B(x_i)-x_i\bigr\|_1.
\end{aligned}
$$
(2)
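The cycle-consistency and identity losses of Eq. (2) can be sketched as plain numpy operations; the generators here are placeholder callables standing in for the CNN generators, and the batch-mean L1 stands in for the averaged ||·||₁ terms.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, standing in for the averaged ||.||_1 terms."""
    return float(np.mean(np.abs(a - b)))

def cycle_identity_losses(G_A, G_B, x, y):
    """Cycle-consistency and identity losses of Eq. (2).

    G_A maps high-SBR -> low-SBR and G_B maps low-SBR -> high-SBR;
    x and y are batches of high-SBR and low-SBR patches. The generators
    are placeholders here -- in the pipeline they are CNNs.
    """
    l_cycle = l1(G_B(G_A(x)), x) + l1(G_A(G_B(y)), y)
    l_iden = l1(G_A(y), y) + l1(G_B(x), x)
    return l_cycle, l_iden
```

Identity generators incur zero penalty under both terms, which is exactly the fixed point these losses pull the training toward.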
In the cytoarchitecture enhancement stage, the weights of GA are fixed. The synthetic low-SBR image GA(xi) and the pixel-aligned high-SBR image zi (the high-SBR image before intensity reduction) are used to train CENet. CENet is denoted here as H. The training loss LCENet is the weighted sum of the mean absolute error and the structural similarity index measure (SSIM) losses,
$$
L_{\mathrm{CENet}} = \frac{1}{N}\sum_{i=1}^{N}\bigl\|H\bigl(G_A(x_i)\bigr)-z_i\bigr\|_1 + \sigma\Bigl(1-\mathrm{SSIM}\bigl(H\bigl(G_A(x_i)\bigr),\, z_i\bigr)\Bigr),
$$
(3)
where zi is the high-SBR image before intensity reduction and σ is a weight term set to 0.1. The SSIM26 is defined as
$$
\mathrm{SSIM}(M,N) = \frac{\bigl(2\mu_M\mu_N + C_1\bigr)\bigl(2\sigma_{MN} + C_2\bigr)}{\bigl(\mu_M^2+\mu_N^2+C_1\bigr)\bigl(\sigma_M^2+\sigma_N^2+C_2\bigr)},
$$
(4)
where μM and μN represent the mean values of the network output M and ground truth (GT) image N, σM and σN are the standard deviations of M and N, respectively, and σMN is the covariance between the output image and GT image. C1 and C2 are two constants to prevent denominator values close to zero. The output range of the SSIM metric is between 0 and 1, with a value closer to 1 suggesting less distortion.
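Eq. (4) can be computed directly in numpy. The sketch below uses a single global window for brevity (SSIM is usually computed over local windows and averaged), and the constants C1 and C2 follow the common (K·L)² convention for 16-bit images since the paper does not state its values; both choices are assumptions.

```python
import numpy as np

def ssim_global(M, N, C1=(0.01 * 65535) ** 2, C2=(0.03 * 65535) ** 2):
    """Global SSIM of Eq. (4) between network output M and ground truth N.

    A single global window is used for brevity; C1 and C2 follow the
    common (K * L)^2 convention for 16-bit images (assumed values).
    """
    M = M.astype(np.float64)
    N = N.astype(np.float64)
    mu_M, mu_N = M.mean(), N.mean()
    var_M, var_N = M.var(), N.var()                  # sigma_M^2, sigma_N^2
    cov_MN = ((M - mu_M) * (N - mu_N)).mean()        # sigma_MN
    return ((2 * mu_M * mu_N + C1) * (2 * cov_MN + C2)) / (
        (mu_M ** 2 + mu_N ** 2 + C1) * (var_M + var_N + C2)
    )
```

For identical images the metric evaluates to 1, and any distortion pushes it below 1, matching the interpretation given above.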
In addition, a feedback loss Lfeedback is introduced in CENet to guide GA to generate realistic degraded low-SBR images. The feedback loss Lfeedback has the same form as the enhancement loss LCENet, except that the weights of H are fixed instead of those of GA. The total training loss is the combination of LGAN, LCycle, LIden, and Lfeedback with different weights and is given by
$$
L = L_{\mathrm{GAN}} + \lambda L_{\mathrm{Cycle}} + \beta L_{\mathrm{Iden}} + \rho L_{\mathrm{feedback}},
$$
(5)
where λ, β, and ρ are the weights of LCycle, LIden, and Lfeedback and are set as 10, 5, and 0.1, respectively.

The parameters of the generators and discriminators in CycleGAN and CENet are all optimized via the Adam optimizer,27 with β1 = 0.5 and β2 = 0.999. The learning rates of all the networks are set to 2 × 10−4 and decayed by half in the eighth epoch. The number of training epochs is set to 15, and the mini-batch size is 1. It took ∼4 h for the networks to converge on a single Nvidia GeForce RTX 3090 card (24 GB of memory). The model size of CENet is only ∼2 MB, which enables a fast inference speed of 25 Hz for 1800 × 1800 pixel images.
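The learning-rate schedule described above amounts to a simple step function; the sketch below assumes 1-indexed epochs, which is an assumption about the convention rather than something the paper states.

```python
def learning_rate(epoch, base_lr=2e-4, decay_epoch=8):
    """Step schedule described above: 2e-4 for all networks, halved
    from the eighth epoch onward (15 epochs total). Epochs are taken
    as 1-indexed, which is an assumed convention."""
    return base_lr if epoch < decay_epoch else base_lr / 2
```

In a framework such as PyTorch, the same effect would typically be achieved with a built-in step scheduler attached to the Adam optimizer.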

After training, the third step is to use the well-trained CENet model to enhance the low-SBR DAPI-stained cytoarchitecture images online during synchronous multicolor imaging. We thus acquire the corresponding crosstalk-free green and red channel signals without any post-processing after image acquisition [Fig. 1(c)].

To validate the cytoarchitecture image enhancement performance of CENet, we compared our method with two typical contrast enhancement algorithms (CLAHE and ACE), an unsupervised algorithm (CycleGAN), and a self-supervised algorithm (CARE); the results are shown in Fig. 2. We imaged a C57BL/6J mouse brain with 0.1 μg/ml DAPI and 2 μg/ml PI staining using multicolor wide-field large-volume tomography (WVT)3 and acquired several coronal images. The staining time for each coronal plane was ∼50 s. The SBR of the DAPI signals was 1.5. We used the PI-stained cytoarchitecture images with an SBR of 15.0 as the target high-SBR images. To train the CycleGAN models, we shuffled the DAPI and PI images and chose five pairs of coronal images. To train the CARE model, we used existing high-SBR DAPI-stained cytoarchitecture images and semisynthetic low-SBR DAPI cytoarchitecture images as paired training data. In the simulation step, the intensity of the high-SBR DAPI images is first reduced by multiplying the images by a coefficient so that the simulated images have a signal intensity similar to that of the low-SBR DAPI images; then, low-frequency Perlin noise resembling background fluorescence is added.18 Each coronal image was cropped into 256 × 256 pixel patches. A total of 10 000 random image patches were used for training, and another 2000 patches were used for validation and testing. All animal experiments followed procedures approved by the Institutional Animal Ethics Committee of Huazhong University of Science and Technology.
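The semisynthetic degradation used to train CARE (intensity scaling plus low-frequency background) can be sketched as follows. The paper uses true Perlin noise; a coarse random grid upsampled by nearest neighbour is a crude stand-in here, and every parameter value is illustrative only.

```python
import numpy as np

def simulate_low_sbr(high_sbr, intensity_coeff=0.1, noise_amp=500.0,
                     grid=16, seed=None):
    """Semisynthetic low-SBR patch in the spirit of the CARE-style
    simulation: scale the high-SBR signal down, then add a smooth
    low-frequency background. Nearest-neighbour-upsampled random noise
    is a crude stand-in for the Perlin noise used in the paper, and
    all parameter values are assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    h, w = high_sbr.shape
    assert h % grid == 0 and w % grid == 0, "patch must tile evenly"
    # Low-frequency background: coarse random grid upsampled to image size
    coarse = rng.random((grid, grid)) * noise_amp
    background = np.kron(coarse, np.ones((h // grid, w // grid)))
    return high_sbr * intensity_coeff + background
```

The mismatch between such simulated backgrounds and real microscope backgrounds is exactly the generalization gap discussed above, which motivates learning the degradation with CycleGAN instead.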

FIG. 2.

Comparison of different contrast enhancement algorithms. (a) Raw DAPI image and corresponding results using various contrast enhancement algorithms. (b) and (c) Enlarged views of the two ROIs indicated by the yellow dashed boxes and red solid boxes in the corresponding images in (a). (d) Normalized intensity profiles along the yellow dashed lines in (b). (e) Quantifying the SBR of each algorithm. Scale bars: 1 mm (a) and 20 μm [(b) and (c)].


Figures 2(a)–2(c) show that the raw DAPI-stained image has an extremely low SBR, making it difficult to distinguish the cells from the background. In contrast, the high-SBR PI image includes obvious signals and a clean background, enabling the observation of cell outlines. CLAHE and ACE are both traditional adaptive greyscale adjustment algorithms. CLAHE is based on histogram equalization and enables contrast enhancement in regions of interest (ROIs) with dense and sparse cells. However, this algorithm also enhanced the background intensity and only slightly enhanced the contrast of the cytoarchitecture images [Figs. 2(b) and 2(c)]. The ACE method selects the target signals by low-pass filtering and then enhances the high-frequency signals. Compared with CLAHE, ACE enhanced the intensity of the target cells but also introduced noise artifacts, compromising the DAPI signals. Compared with the traditional algorithms, the deep learning-based methods achieve considerably larger SBR improvements. However, when semisynthetic low-SBR training data were used, CARE generated blurry cytoarchitecture images on real low-SBR DAPI-stained cytoarchitecture images [Figs. 2(b) and 2(c)]. Due to the weak constraint introduced by cycle consistency, CycleGAN learns style transfer instead of high-fidelity reconstruction and easily produces artifacts. For example, the distortion in the reconstructed cytoarchitecture images caused the cells to stick together [Fig. 2(b)], and some fake cells were identified, as indicated by the dark red arrows in Fig. 2(c). Compared with CARE and CycleGAN, our CENet enhances the cytoarchitecture images while suppressing the background. In Fig. 2(b), the outlines of the respective cells can be observed in the area with dense signals. Figure 2(c) demonstrates that the CENet output has a high SBR without artifacts and distortion.

Figure 2(d) shows the normalized intensity profiles along the yellow dotted lines in Fig. 2(c). The plot of the raw DAPI signals has no obvious peaks, and the cell signals are nearly indistinguishable from the background. Although the traditional algorithms could enhance the weak DAPI signals, these algorithms also enhanced the background noise. In contrast, the deep learning-based methods significantly enhanced the contrast compared to the traditional methods. Among them, the profile of CENet is the closest to the expected PI signals, with excellent SBR and high fidelity. To quantify the image quality of each enhanced image, we randomly chose ten ROIs in each result and calculated the SBR to reflect the background suppression capacity as follows:
$$
\mathrm{SBR} = \frac{p}{b},
$$
(6)
where p is the maximal value of the signals in the selected ROIs, and b is the mean value of the surrounding background.
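Eq. (6) is a one-line computation once the ROI and its surrounding background region have been selected:

```python
import numpy as np

def sbr(roi, background):
    """SBR of Eq. (6): maximal signal value p inside the selected ROI
    divided by the mean value b of the surrounding background region."""
    return float(roi.max()) / float(background.mean())
```

In the evaluation above, this is averaged over ten randomly chosen ROIs per result.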

Figure 2(e) illustrates the SBRs of the DAPI- and PI-stained results, as well as of each algorithm applied to the raw DAPI-stained image, indicating that CENet has the best SBR enhancement capability among the compared algorithms. Specifically, CENet increases the SBR of the raw compromised DAPI-stained cytoarchitecture image from 1.40 to 15.20.

To demonstrate the robustness of CENet, we evaluated the effect of different SBRs on the cytoarchitecture enhancement performance of the networks. We selected a DAPI-stained cytoarchitecture image dataset with an SBR of 15.0 [Fig. 3(a)] and averaged the data with low-frequency Perlin noise with different intensities to simulate different SBR cytoarchitecture images.18 Then, we trained different CENet models and compared their performance. The cell outlines gradually become blurred as the SBR of the simulated images decreases [Fig. 3(b)]. The CENet results always maintain obvious outlines and clear backgrounds despite the slight distortion when the SBR is 1.1. To quantify the reliability of the network, we tested the networks based on ten groups of 300 × 300-pixel images and calculated the difference between the network outputs and GT using the SSIM and peak signal-to-noise ratio (PSNR) [Figs. 3(c) and 3(d)]. The SSIM is defined in Eq. (4), and the PSNR is defined as follows:
$$
\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right),\qquad
\mathrm{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(M_{ij}-N_{ij}\bigr)^2,
$$
(7)
where MAX represents the maximum possible pixel value of the image, which is 65 535 for a 16-bit image, and m and n are the pixel dimensions of the images M and N. A higher PSNR suggests less difference between the two images. As the SBR decreases from 5.3 to 1.1, the mean PSNR decreases from 30.28 to 25.51, and the SSIM decreases from 0.94 to 0.85. Therefore, CENet shows good cytoarchitecture image enhancement performance, even when the SBR drops to 1.1.
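The PSNR of Eq. (7) for 16-bit images can be sketched directly from the MSE definition:

```python
import numpy as np

def psnr(M, N, max_val=65535.0):
    """PSNR of Eq. (7) for 16-bit images, with MSE the mean squared
    error over the m x n pixels of images M and N."""
    mse = np.mean((M.astype(np.float64) - N.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 1% of the full 16-bit range corresponds to a PSNR of 40 dB.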
FIG. 3.

Network performance evaluation under different SBRs. (a) The GT high-SBR cytoarchitecture image. (b) The input low-contrast DAPI images and their CENet outputs, as well as the per-pixel error map. (c), (d) The PSNR and SSIM between the CENet outputs and GT for different SBRs. Scale bar: 30 μm (a).


To test the adaptability of CECEP to different imaging systems and samples, we imaged a B6.129P2(Cg)-Cx3cr1tm1Litt/J (Cx3cr1-GFP) transgenic mouse (Jackson Laboratory, Bar Harbor, ME, USA) brain and kidney, as well as a C57BL/6 mouse (Jackson Laboratory, Bar Harbor, ME, USA) brain, with a homemade multicolor high-definition fluorescent micro-optical sectioning tomography (HD-fMOST) system.28 In the Cx3cr1-GFP mouse, microglial cells in the brain and immune cells in the kidney were labeled with green fluorescence. The brain and kidney samples were sectioned into 50 μm coronal slices with a vibrating slicer (VT 1200S, Leica, Wetzlar, Germany). The brain of the C57BL/6 mouse was sectioned into 10 μm slices. All the sections were separated into two groups stained at DAPI concentrations of 0.1 and 2 μg/ml with the same staining time of 50 s. In addition, the C57BL/6 brain sections were also Nissl-stained at 2 μg/ml for 50 s to label the nucleus and cytoplasm.

Figure 4(a) shows that although the 2 μg/ml DAPI-stained cytoarchitecture image has bright signals, the crosstalk also produces relatively bright signals whose shape is similar to that of the immune cells, hindering the observation and identification of the labeled immune cells in the green channel. Although CECEP reduced the raw DAPI signal intensity, it eliminated the crosstalk in the green channel [Fig. 4(b)]. Therefore, we can observe the labeled immune cells in the green channel without the risk of misrecognition. Furthermore, CECEP enhanced the SBR of the cytoarchitecture image, thereby enabling localization of the microglial cells throughout the organ.

FIG. 4.

Testing the effectiveness of CECEP applied to mouse kidney and brain images. (a) 2 μg/ml DAPI-stained cytoarchitecture image of a Cx3cr1-GFP transgenic mouse kidney section and its colocalized green signals. White arrows indicate the crosstalk signal in the green channel and the corresponding DAPI signals in the blue channel. (b) 0.1 μg/ml DAPI-stained cytoarchitecture image, the corresponding enhanced cytoarchitecture image, and colocated green signals without crosstalk from the brain section of the same Cx3cr1-GFP transgenic mouse. (c) and (d) Images of Cx3cr1-GFP transgenic mouse brain sections with and without CECEP. (e) and (f) Comparison of the performance with and without CECEP based on C57BL/6 mouse brain sections. Scale bars: 100 μm for the images and 15 μm for the enlarged views [(a)–(d)] and 50 μm for the images and 5 μm for the enlarged views [(e) and (f)].


The same crosstalk problem also occurs when observing neural fibers. Due to the crosstalk among the DAPI signals, we can observe only the bright soma of the microglial cells but not the nerve fibers [Fig. 4(c)]. In contrast, with the CECEP network, there is no crosstalk among the DAPI signals [Fig. 4(d)]. Therefore, we can observe clear signals of the bright soma and the fine nerve fibers in the green channel. Moreover, the reconstructed high-SBR cytoarchitecture image enables us to locate the microglial cells in specific brain ROIs.

Due to the wide emission spectrum of the DAPI dye, DAPI crosstalk can also be observed in the red channel. Figure 4(e) shows that the DAPI crosstalk almost covered the Nissl-staining signals; therefore, no unique red signals can be observed in the merged ROIs indicated by the white dotted boxes in Fig. 4(e). In contrast, in Fig. 4(f), CECEP eliminated the crosstalk in the red channel while maintaining the high SBR of the DAPI signals. Therefore, the DAPI and Nissl-staining signals can be observed simultaneously in the merged ROIs.

To demonstrate that the substantial improvement in image quality achieved by CENet can effectively facilitate downstream automatic analyses, we implemented an automatic cell segmentation and recognition pipeline on both the raw low-SBR cytoarchitecture images and the CENet contrast enhancement results [Fig. 5(a)]. To avoid optimizing the algorithm for particular data, we used the Otsu and auto fill-hole plugins for cell segmentation and the analyze-particles plugin for cell recognition in the open-source ImageJ software (1.53q, National Institutes of Health, USA) with default parameter settings. We also manually segmented the raw DAPI images as the GT for the cell segmentation task and annotated the cells in the raw DAPI images as the GT for the cell recognition task [Fig. 5(b)]. The raw DAPI-stained cytoarchitecture images have low contrast; therefore, the cells are difficult to distinguish from the background with accurate outlines [Fig. 5(a)]. In addition, the background noise causes many artifacts in the segmentation results. The low image quality of the DAPI images not only influences the segmentation results but also reduces the accuracy of the subsequent cell recognition. In contrast, the CENet results show better segmentation performance with more accurate cell outlines, as shown in the enlarged views in Fig. 5(a). The higher image quality also improves the cell recognition performance. To quantify the cell segmentation performance, we randomly selected ten groups of 500 × 500-pixel images and calculated the intersection over union (IOU) for each segmentation result as follows:
$$
\mathrm{IOU} = \frac{|S \cap G|}{|S \cup G|},
$$
(8)
where S is the segmentation result and G denotes the ground truth, which is acquired by manual segmentation of the raw DAPI images. The IOUs of the raw DAPI images and the contrast-enhanced CENet results are 0.75 and 0.92, respectively. To quantify the automatic cell recognition performance, we also randomly selected five groups of 500 × 500-pixel images and calculated their precision and recall rates against the manual recognition GT images. Figure 5(d) illustrates that the average precision and recall rates using CENet are 91% and 93%, respectively, which are higher than the 83% and 84% achieved with the raw weak DAPI-stained signals. These results demonstrate that CENet produces high-quality cytoarchitecture images that support good performance in subsequent automatic downstream analysis tasks.
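The thresholding step of the pipeline and the IOU metric of Eq. (8) can both be sketched in numpy. This is not the ImageJ implementation used above (which additionally fills holes and filters particles); it is a minimal illustration of Otsu's criterion and of how the segmentation masks are scored.

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Otsu's threshold: pick the bin centre that maximizes the
    between-class variance of the grey-level histogram."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    hist = hist.astype(np.float64)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = hist.cumsum()                      # weight of the lower class
    w1 = w0[-1] - w0                        # weight of the upper class
    m = (hist * centers).cumsum()           # cumulative first moment
    mu0 = np.divide(m, w0, out=np.zeros_like(m), where=w0 > 0)
    mu1 = np.divide(m[-1] - m, w1, out=np.zeros_like(m), where=w1 > 0)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def iou(S, G):
    """Intersection over union of Eq. (8) for binary masks
    (segmentation result S against manual ground truth G)."""
    S, G = S.astype(bool), G.astype(bool)
    union = np.logical_or(S, G).sum()
    return float(np.logical_and(S, G).sum()) / union if union else 1.0
```

On a low-SBR image, the two grey-level classes overlap heavily, so the Otsu split becomes unstable; the improved separation after CENet enhancement is what raises the IOU from 0.75 to 0.92.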
FIG. 5.

Comparison of the raw DAPI cytoarchitecture images and the contrast-enhanced CENet results in the cell segmentation and recognition tasks. (a) The automatic analysis process of cell segmentation and recognition consists of Otsu segmentation, hole filling, and cell recognition. (b) GT with manual segmentation using the raw DAPI image. (c) IOU values of the raw and contrast-enhanced data with the GT. (d) Comparison of the precision and recall values based on the cell recognition results using five groups of raw DAPI images and their corresponding contrast-enhanced results. Scale bar: 15 μm (a).


To further validate the contrast enhancement effect of CENet and its influence on downstream analysis tasks, we performed structure tensor analysis based on 0.1 μg/ml DAPI-stained cytoarchitecture images of a Thy1-GFP-M mouse brain tissue block imaged by a multicolor WVT system. Structure tensor analysis is widely used for quantifying local orientations in textured images29–31 and has been demonstrated to be valuable for Nissl-stained cytoarchitecture images to assess local axonal orientations and extract fine details of the axonal architecture of primate brains.32 

Figures 6(a) and 6(b) show a selected typical coronal image of the hippocampus. No DAPI crosstalk is observed in the green channel, enabling us to observe the labeled neurons in different brain ROIs. Due to the low concentration of DAPI staining, the raw DAPI signals have weak intensity and contrast [Fig. 6(b)]. Figure 6(c) shows that CENet enhances the contrast of the raw DAPI signals. We calculated the structure tensor of the raw DAPI-stained image and the corresponding CENet-enhanced image and used pseudocolor coding for better visualization. Three different ROIs were selected, as indicated by the yellow, red, and green squares in Figs. 6(a)–6(c), with the enlarged views shown in Figs. 6(d)–6(l). Figures 6(d)–6(f) show no DAPI crosstalk in the green channel. The crosstalk-free green channel allows us to observe the GFP-labeled neuronal signals, as marked by the white arrow in Fig. 6(e). The corresponding raw DAPI-stained images have weak contrast, making it difficult to distinguish the cells from the background [Figs. 6(g)–6(i)]. The structure tensor is based on gradient calculations for each pixel and is sensitive to background noise.31,32 Therefore, the low SBR negatively affects the orientation estimation, leading to a chaotic color distribution in the visualization of the structure tensor orientation. In contrast, the CENet-enhanced cytoarchitecture images had cleaner backgrounds and higher contrast than the raw images, enabling more accurate gradient calculations [Figs. 6(j)–6(l)]. Figures 6(j)–6(l) show different directions of the cell distribution in the three ROIs, represented by the red, green, and purple arrows, corresponding to structure tensor orientations of 0°, 45°, and 135°, respectively.
These results demonstrate that our method not only improves the SBR of extremely weak DAPI-stained cytoarchitecture images but also effectively maintains the cell distribution characteristics to support downstream data analysis tasks, such as tensor analysis.
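The orientation estimation discussed above can be sketched directly from image gradients. The following minimal example assumes the standard 2D structure tensor formulation of Refs. 29–31 (gradient outer products smoothed over a local neighborhood), not the authors' exact implementation; the Sobel gradients, the smoothing scale `sigma`, and the 0°–180° convention are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(image, sigma=2.0):
    """Per-pixel dominant gradient orientation (degrees, [0, 180)) from
    the 2D structure tensor J = G_sigma * [gx^2, gx*gy; gx*gy, gy^2]."""
    img = image.astype(np.float64)
    gy = sobel(img, axis=0)  # gradient along rows (y)
    gx = sobel(img, axis=1)  # gradient along columns (x)
    # Tensor components, smoothed over a local neighborhood of scale sigma.
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Orientation of the dominant eigenvector of each local tensor;
    # the structure (stripe) orientation is perpendicular to this.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return np.rad2deg(theta) % 180.0

# Horizontal stripes: intensity varies only with the row index, so the
# gradient points along y and the estimated orientation is ~90 degrees.
rows = np.arange(64, dtype=float)
stripes = np.tile(np.sin(rows / 3.0)[:, None], (1, 64))
orientation = structure_tensor_orientation(stripes)
```

This is also why the low-SBR raw images fail here: the gradients `gx` and `gy` are dominated by background noise, so `theta` becomes essentially random, matching the chaotic color maps in Figs. 6(g)–6(i).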

FIG. 6.

Structure tensor analysis of DAPI-stained cytoarchitecture images of the hippocampus coronal plane of a Thy1-GFP-M mouse brain. (a) Image of the green channel. For typographical convenience, only half of the coronal plane is shown. (b) Raw DAPI-stained image and its corresponding structure tensor using pseudocolor coding visualization. (c) CENet-enhanced DAPI-stained image and its corresponding structure tensor. (d)–(i) Enlarged views of the green and blue channel images in (a) and (b) and their corresponding color-coded structure tensors. (j)–(l) Enlarged views of the images shown in (c) and their corresponding color-coded structure tensors. Scale bars: 1 mm [(a)–(c)] and 100 μm [(d)–(l)].

Simultaneously acquiring two specific fluorescence-labeled neuronal structures and their colocated cytoarchitecture images without crosstalk is vital in neuroscience. To demonstrate the application potential of CECEP, we performed triple-color whole-brain imaging of a 12-week-old PV-Cre mouse (Jackson Laboratory, Bar Harbor, ME, USA) counterstained with 0.1 μg/ml DAPI using the multicolor WVT system. For input information acquisition, inputs to parvalbumin (PV)-positive neurons in the infralimbic area (ILA) and prelimbic area (PL) of the prefrontal cortex were labeled with GFP and DsRed, respectively, by virus injection. First, 150 nl of a Cre-dependent AAV helper virus mixture expressing histone-tagged BFP and TVA as well as RG (AAV-DIO-hisBFP-TVA and AAV-DIO-RG at a volumetric ratio of 1:2) was injected into the PL [anterior–posterior (AP) 2.3 mm, medial–lateral (ML) 0.3 mm, dorsal–ventral (DV) −2.3 mm] and the ILA (AP 1.9 mm, ML 0.3 mm, DV −2.6 mm). Three weeks after the helper virus injection, EnvA-coated rabies virus was injected into the PL, and DsRed-expressing virus was injected into the ILA. All viruses were obtained from BrainVTA, China. Ten days later, the mouse was perfused with phosphate-buffered saline (PBS) and paraformaldehyde (PFA). Finally, the brain was removed from the skull and embedded in resin for subsequent imaging.

The compromised raw cytoarchitecture images were enhanced online during synchronous multicolor imaging. Four groups of raw, ACE-enhanced, and CECEP-enhanced half-coronal DAPI-stained cytoarchitecture images and the corresponding manually segmented brain ROIs at 3-mm intervals are shown in Figs. 7(a)–7(d). In the raw DAPI images, it is difficult to distinguish brain ROIs due to the weak contrast. Although the ACE algorithm enhanced the contrast of the raw images, the result was not clear enough for accurate brain region segmentation. In contrast, in the CECEP image, we can easily observe typical brain ROIs, such as the hippocampus, as indicated by the white arrow. Enlarged views of the cortex, as indicated by the yellow dashed rectangle in Fig. 7(c), are shown in Fig. 7(e). The raw DAPI signals are too weak to distinguish the cell distribution features from the background; thus, the brain ROIs cannot be segmented. The cells in the ACE-enhanced image are more distinguishable than those in the raw image, but the amplified noise increases the difficulty of delineating boundaries based on differences in cell density among brain ROIs. The CECEP method improves the contrast of the raw images, and the cell stratification is clearer, enabling the identification of six cortical layers [Fig. 7(f)]. We also show the normalized intensity profiles along the direction of the cortex in the three images; the results are consistent with the features of the cell distributions in the corresponding images.
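The noise amplification seen in the ACE result is inherent to gray-level remapping. As a reference point, the sketch below shows the core of such methods, global histogram equalization; it is a deliberate simplification (CLAHE additionally tiles the image and clips the histogram, and ACE adapts the mapping locally), but it illustrates why background noise is stretched along with signal: every gray level passes through the same cumulative histogram.

```python
import numpy as np

def hist_equalize(image, nbins=256):
    """Global histogram equalization: a minimal stand-in for gray-level
    contrast methods such as CLAHE/ACE. Maps each intensity through the
    normalized cumulative histogram (CDF) of the image, stretching
    low-contrast data -- noise included -- across the full output range."""
    flat = image.ravel().astype(np.float64)
    hist, edges = np.histogram(flat, bins=nbins)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
    centers = 0.5 * (edges[:-1] + edges[1:])
    out = np.interp(flat, centers, cdf)             # intensity -> CDF value
    return out.reshape(image.shape)

# Low-contrast input confined to [0.4, 0.6]; equalization spreads it over
# [0, 1], increasing contrast but treating noise and signal identically.
low_contrast = 0.5 + 0.1 * np.sin(np.linspace(0.0, 20.0, 4096)).reshape(64, 64)
equalized = hist_equalize(low_contrast)
```

Because the mapping is monotone in intensity only, it cannot use spatial context to separate cells from background, which is exactly the gap the learned CECEP enhancement addresses.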

FIG. 7.

Simultaneous acquisition of crosstalk-free green and red neuron signals with colocated cytoarchitecture images with CECEP. (a)–(d) Serial raw DAPI-stained cytoarchitecture images and the corresponding ACE and CECEP results, as well as manual brain region segmentation results with 3-mm intervals. (e) Enlarged views of the yellow dashed rectangles in (c) in the cortex. (f) Manual segmentation of the six cortical layers in the CECEP result. (g) 150-μm-thick maximum intensity projection of the colocated blue, green, and red half-coronal images near bregma 0.26 mm. (h) Green and red signals of a data volume with a size of 750 × 750 × 250 µm3, as indicated by the solid orange squares in (g). (i) Enlarged views of the white solid square in (g). (j) Enlarged views of the white dashed squares in (i) and their merged image. Scale bars: 1 mm [(d) and (g)], 100 μm [(e) and (i)], and 20 μm (j).

With the CECEP network, we simultaneously acquired crosstalk-free green- and red-labeled neurons with colocated high-SBR cytoarchitecture images, as shown in Fig. 7(g). We assessed a half-coronal image of the raw DAPI signals near bregma 0.26 mm and its enhanced result, as well as the corresponding green and red signals. The green and red neuronal signals were sparse; therefore, we chose a 150-µm maximum intensity projection at the same position for better illustration. With the CECEP network, no crosstalk from the DAPI signals was observed in the green channel. Therefore, we observed an obvious difference in the brain-wide distributions of the green- and red-labeled neurons. Furthermore, for a more intuitive 3D inspection, we selected a 750 × 750 × 250 µm³ volume of the green and red channels [indicated by the solid orange square in Fig. 7(g)] and used FastSME33 for 2D maximum projection. Clear and continuous neural signals without crosstalk were observed in both channels. After the elimination of the crosstalk in the green channel, we found that some of the colocated green-labeled neurons were also labeled in the red channel [Figs. 7(i) and 7(j)]. We also show a colocated neuron and the corresponding cytoarchitecture in a 50 × 50 × 15 µm³ volume in Fig. S3. Hence, the CECEP network allows us to better understand the relationships among the multiple components of neural circuits by acquiring and quantifying the brain-wide distributions of neurons with anatomical annotations in the same brain.
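The 150-µm projection used above is a plain maximum intensity projection over the planes spanning that physical thickness. The helper below is a minimal sketch (the 2-µm z-step is an illustrative assumption; FastSME,33 used for the 3D volumes, instead extracts a smooth manifold through the stack rather than a flat slab):

```python
import numpy as np

def max_intensity_projection(stack, z_step_um, thickness_um, z_start=0):
    """Collapse a (z, y, x) volume over a given physical thickness.

    The number of projected planes is thickness / z-step, e.g. a 150-um
    MIP of a 2-um-step stack collapses 75 planes into one 2D image."""
    n_planes = int(round(thickness_um / z_step_um))
    sub = stack[z_start:z_start + n_planes]     # slab of interest
    return sub.max(axis=0)                      # brightest value per (y, x)

# Deterministic toy stack: intensity increases with plane index, so the
# MIP of the first n planes equals plane n-1.
stack = np.arange(10 * 4 * 4, dtype=float).reshape(10, 4, 4)
mip_6um = max_intensity_projection(stack, z_step_um=2.0, thickness_um=6.0)
```

An MIP is appropriate here precisely because the labeled neurons are sparse: the brightest voxel along z is almost always true signal rather than overlapping structures.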

Here, we presented a novel deep learning-based pipeline, CECEP, to simultaneously acquire crosstalk-free green and red signals with accurate anatomical annotations in the blue channel during multicolor fluorescence imaging. With the unsupervised network CENet, CECEP enables fast and high-fidelity enhancement of the SBR of cytoarchitecture images from 1.5 to 15.0. CECEP eliminates DAPI crosstalk in a simple and fast way: by directly reducing the concentration of DAPI staining. In contrast to other crosstalk elimination methods, CECEP requires no additional modification of the optical configuration. Although deep learning methods require additional hardware [a graphics processing unit (GPU)], CENet is an unsupervised network that uses unpaired cytoarchitecture images, such as DAPI-stained and PI-stained cytoarchitecture images, as training data, which are commonly available in biology laboratories. In addition, CENet is trained in only 4 h, and the network size is ∼2 MB. Moreover, the reconstruction speed for 1800 × 1800 pixel images is 25 Hz, which makes CENet easy to use in practice.
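The SBR figures quoted throughout (1.5 before enhancement, 15.0 after) are ratios of mean foreground to mean background intensity. The paper does not spell out its measurement protocol in this section, so the sketch below makes an explicit assumption: if no mask is supplied, pixels above a high percentile are treated as signal, which stands in for the manually selected cell and background regions a careful measurement would use.

```python
import numpy as np

def signal_to_background_ratio(image, signal_mask=None, pct=99.0):
    """SBR = mean foreground intensity / mean background intensity.

    `signal_mask` marks foreground pixels; without one, pixels above the
    `pct` percentile are treated as signal (an illustrative heuristic,
    not the authors' documented protocol)."""
    img = image.astype(np.float64)
    if signal_mask is None:
        signal_mask = img > np.percentile(img, pct)
    signal = img[signal_mask].mean()
    background = img[~signal_mask].mean()
    return signal / background

# Toy image: uniform background of 1.0 with a 10x10 patch of cells at 15.0,
# mimicking the reported post-enhancement contrast.
img = np.ones((100, 100))
img[:10, :10] = 15.0
sbr = signal_to_background_ratio(img)
```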

As the core network of CECEP, CENet not only surpasses traditional grayscale adjustment algorithms (CLAHE and ACE) in reconstruction speed and contrast enhancement but also addresses the limitations of existing unsupervised SBR enhancement networks and enables high-fidelity reconstruction. Among the existing unsupervised algorithms, CycleGAN-based networks are not applicable to images with an extremely low SBR of 1.5: CycleGAN relies on the weak constraints introduced by cycle consistency and is prone to generating noise artifacts and structural distortions in the enhanced output images. Different from CycleGAN, CARE uses a self-supervised strategy and imposes strong constraints by generating semisynthetic paired training data. However, it is difficult to accurately simulate low-SBR cytoarchitecture images from high-SBR cytoarchitecture images when the target output and input data come from different imaging systems, due to their distinct noise and background distributions. In contrast to the above unsupervised and self-supervised SBR enhancement networks, CENet first uses existing high-SBR cytoarchitecture images to simulate low-SBR cytoarchitecture images via an unsupervised learning approach; the simulated low-SBR images and the existing high-SBR images are then used as paired training data for supervised learning. Therefore, CENet imposes strong constraints by using pairs of generated low-SBR and real high-SBR cytoarchitecture images as training data. Moreover, the strong constraint reduces the difficulty of the transformation, which decreases the model size of CENet and enables a faster reconstruction speed than that of CycleGAN.
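The two-stage idea described above (simulate the degradation first, then train supervised on the resulting pairs) can be sketched as follows. In CENet, the degradation model of stage 1 is learned from unpaired low-/high-SBR images in CycleGAN fashion; here it is replaced by a hand-crafted additive background plus Gaussian noise, purely for illustration, with `background`, `noise_sigma`, and the 0.3 attenuation factor as made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_low_sbr(high_sbr, background=0.6, noise_sigma=0.05):
    """Stage 1 stand-in: degrade a real high-SBR image into a synthetic
    low-SBR one. CENet learns this mapping from unpaired data; the fixed
    attenuation, background level, and noise here are illustrative only."""
    low = 0.3 * high_sbr + background
    low += rng.normal(0.0, noise_sigma, high_sbr.shape)
    return np.clip(low, 0.0, 1.0)

def make_paired_training_set(high_sbr_images):
    """Stage 2 input: pair each real high-SBR image with its simulated
    low-SBR counterpart, so the enhancement network can be trained with
    a strong supervised (pixel-wise) constraint."""
    return [(simulate_low_sbr(img), img) for img in high_sbr_images]

# Two toy "high-SBR" images become two (input, target) training pairs.
clean_images = [np.zeros((8, 8)), np.ones((8, 8))]
pairs = make_paired_training_set(clean_images)
```

The benefit of this construction is that the supervised stage sees real high-SBR targets, so it cannot hallucinate structure the way a cycle-consistency loss alone permits.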

To comprehensively assess the performance of existing contrast enhancement methods, we compared these methods in terms of the amount of training data, reconstruction speed, training time, model size, and contrast enhancement effect, as shown in Table I. We used the same training data, consisting of 11 000 images of 256 × 256 pixels, for CARE, CycleGAN, and CENet. We chose the best parameter settings for each method and measured the reconstruction speed on 1800 × 1800 pixel images. For the contrast enhancement effect, we tested each method on the same images with an SBR of 1.40 and calculated the resulting SBR. In summary, CENet has the fastest reconstruction speed (25 Hz), the smallest model size (4.14 MB), and the best contrast enhancement effect (SBR of 15.20) among these methods. These experiments were conducted on a workstation with a single Nvidia GeForce RTX 3090 card and an Intel(R) Xeon(R) Gold 5222 CPU.

TABLE I.

The comparison of contrast enhancement algorithms.

                           CLAHE      ACE        CARE       CycleGAN   CENet
Amount of training data    NA         NA         11 000 × 256 × 256 pixels (CARE, CycleGAN, and CENet)
Reconstruction speed       0.12 Hz    0.01 Hz    12 Hz      9 Hz       25 Hz
Training time              NA         NA         3 h        6 h        4 h
Model size                 NA         NA         21.2 MB    43.3 MB    4.14 MB
SBR                        2.58       4.15       6.27       10.52      15.20

Cytoarchitecture images are widely used to produce anatomical annotations and to diagnose disease based on characteristics such as cytoarchitectural heterogeneity.34 Therefore, the authenticity of the enhanced cytoarchitecture images directly influences the widespread application of the corresponding algorithms. In this study, we demonstrated the reliability of CECEP on real and semisynthetic data and enhanced the SBR of the cytoarchitecture images from 1.5 to 15.0. The high SBR of the cytoarchitecture images facilitates downstream analysis tasks, such as cell segmentation and recognition, structure tensor calculation, and brain region segmentation. In addition, to demonstrate the robustness of CECEP, we applied our approach to different multicolor fluorescence systems (multicolor WVT and multicolor HD-fMOST) and different types of samples (Thy1-GFP-M mouse brain, Cx3Cr mouse brain, kidney, and C57BL/6 mouse brain). With CECEP, we simultaneously acquired crosstalk-free signals in the green and red channels as well as high-contrast DAPI-stained cytoarchitecture images during synchronous multicolor fluorescence imaging, which is beneficial for research on the inputs and outputs of neural circuits and their relationships at the whole-brain level.

In this study, we applied CECEP in ex vivo multicolor imaging. In vivo multicolor fluorescence imaging, which is used to record fast biological processes, requires high imaging speed.35 Therefore, researchers prefer to use multicolor synchronous fluorescence microscopy systems to record the dynamic changes in each channel and use popular blue waveband fluorophores (e.g., DAPI dye and Hoechst dye36) to stain DNA in living cells, which generates crosstalk in the green signals. In addition, high concentrations of DAPI and Hoechst dyes are toxic to living cells. In this situation, CECEP can not only reduce the toxicity but also eliminate crosstalk. Thus, CECEP has the potential to be applied to in vivo multicolor fluorescence imaging in the future.

Overall, CECEP is an effective and applicable approach to eliminate crosstalk that could potentially facilitate multicolor synchronous fluorescence imaging applications in biology, such as revealing and visualizing different types of biological structures with precise locations and orientations.

See the supplementary material for a comparison of the learning efficiencies of the forward and backward networks in unsupervised learning, the CycleGAN structure used in the first stage and the CENet structure used in the second stage, and a multi-channel volume fusion of a colocated neuron and the corresponding cytoarchitecture.

This work was supported by the STI2030-Major Projects (Grant No. 2021ZD0204402) and the National Natural Science Foundation of China (Grant Nos. 82102235 and 62325502). The authors thank their colleagues in the MOST group from the Britton Chance Center for Biomedical Photonics for their assistance. They also thank the Optical Bioimaging Core Facility of WNLO-HUST for support with data acquisition and the director fund of the WNLO.

The authors have no conflicts to disclose.

Ethics approval for experiments reported in the submitted manuscript on animal or human subjects was granted. All animal experiments followed procedures approved by the Institutional Animal Ethics Committee of Huazhong University of Science and Technology.

B.L. and Z.D. contributed equally to this work.

Bolin Lu: Data curation (equal); Methodology (equal); Writing – original draft (lead). Zhangheng Ding: Data curation (equal); Formal analysis (supporting). Kefu Ning: Formal analysis (supporting); Investigation (equal). Xiaoyu Zhang: Investigation (equal); Validation (supporting). Xiangning Li: Formal analysis (supporting); Resources (supporting). Jiangjiang Zhao: Data curation (supporting). Ruiheng Xie: Data curation (equal). Dan Shen: Investigation (equal); Validation (supporting). Jiahong Hu: Resources (supporting). Tao Jiang: Data curation (equal); Validation (supporting). Jianwei Chen: Funding acquisition (equal); Resources (supporting); Writing – review & editing (equal). Hui Gong: Funding acquisition (equal); Project administration (supporting); Resources (supporting). Jing Yuan: Conceptualization (lead); Funding acquisition (equal); Resources (supporting); Supervision (lead); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

1. R. Muñoz-Castañeda, B. Zingg, K. S. Matho, X. Chen, Q. Wang, N. N. Foster, A. Li, A. Narasimhan, K. E. Hirokawa, B. Huo et al., "Cellular anatomy of the mouse primary motor cortex," Nature 598, 159–166 (2021).
2. E. Jalilian and S. R. Shin, "Novel model of cortical–meningeal organoid co-culture system improves human cortical brain organoid cytoarchitecture," Sci. Rep. 13, 7809 (2023).
3. H. Gong, D. Xu, J. Yuan, X. Li, C. Guo, J. Peng, Y. Li, L. A. Schwarz, A. Li, B. Hu et al., "High-throughput dual-colour precision imaging for brain-wide connectome with cytoarchitectonic landmarks at the cellular level," Nat. Commun. 7, 12142 (2016).
4. Q. Guo, D. Wang, X. He, Q. Feng, R. Lin, F. Xu, L. Fu, and M. Luo, "Whole-brain mapping of inputs to projection neurons and cholinergic interneurons in the dorsal striatum," PLoS One 10, e0123381 (2015).
5. C. Zhang, C. Yan, M. Ren, A. Li, T. Quan, H. Gong, and J. Yuan, "A platform for stereological quantitative analysis of the brain-wide distribution of type-specific neurons," Sci. Rep. 7, 14334 (2017).
6. X. Li, B. Zhang, Y. Liang, and T. Li, "Multiscale reconstruction of bronchus and cancer cells in human lung adenocarcinoma," Biomed. Eng. Online 22, 11 (2023).
7. C. Li, T. Sun, Y. Zhang, Y. Gao, Z. Sun, W. Li, H. Cheng, Y. Gu, and N. Abumaria, "A neural circuit for regulating a behavioral switch in response to prolonged uncontrollability in mice," Neuron 111, 2727–2741 (2023).
8. J. Kapuscinski, "DAPI: A DNA-specific fluorescent probe," Biotech. Histochem. 70, 220–233 (1995).
9. F. Otto, "DAPI staining of fixed cells for high-resolution flow cytometry of nuclear DNA," in Methods in Cell Biology (Elsevier, 1990), pp. 105–110.
10. J. Fu, S. Li, H. Ma, J. Yang, G. M. Pagnotti, L. M. Brown, S. J. Weiss, M. Y. Mapara, and S. Lentzsch, "The checkpoint inhibitor PD-1H/VISTA controls osteoclast-mediated multiple myeloma bone disease," Nat. Commun. 14, 4271 (2023).
11. A. M. Norris, A. B. Appu, C. D. Johnson, L. Y. Zhou, D. W. McKellar, M.-A. Renault, D. Hammers, B. D. Cosgrove, and D. Kopinke, "Hedgehog signaling via its ligand DHH acts as cell fate determinant during skeletal muscle regeneration," Nat. Commun. 14, 3766 (2023).
12. K. Seiriki, A. Kasai, T. Hashimoto, W. Schulze, M. Niu, S. Yamaguchi, T. Nakazawa, K.-i. Inoue, S. Uezono, M. Takada et al., "High-speed and scalable whole-brain imaging in rodents and primates," Neuron 94, 1085–1100.e6 (2017).
13. H. Choi, K. R. Castleman, and A. C. Bovik, "Color compensation of multicolor FISH images," IEEE Trans. Med. Imaging 28, 129–136 (2008).
14. Z. Ding, J. Zhao, T. Luo, B. Lu, X. Zhang, S. Chen, A. Li, X. Jia, J. Zhang, W. Chen et al., "Multicolor high-resolution whole-brain imaging for acquiring and comparing the brain-wide distributions of type-specific and projection-specific neurons with anatomical annotation in the same brain," Front. Neurosci. 16, 1033880 (2022).
15. K. Li, X. Qi, Y. Luo, Z. Yao, X. Zhou, and M. Sun, "Accurate retinal vessel segmentation in color fundus images via fully attention-based networks," IEEE J. Biomed. Health Inf. 25, 2071–2081 (2021).
16. L. Dash and B. N. Chatterji, "Adaptive contrast enhancement and de-enhancement," Pattern Recognit. 24, 289–302 (1991).
17. C. Belthangady and L. A. Royer, "Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction," Nat. Methods 16, 1215–1225 (2019).
18. M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley et al., "Content-aware image restoration: Pushing the limits of fluorescence microscopy," Nat. Methods 15, 1090–1097 (2018).
19. X. Li, G. Zhang, H. Qiao, F. Bao, Y. Deng, J. Wu, Y. He, J. Yun, X. Lin, H. Xie et al., "Unsupervised content-preserving transformation for optical microscopy," Light: Sci. Appl. 10, 44 (2021).
20. X. Zhang, Y. Chen, K. Ning, C. Zhou, Y. Han, H. Gong, and J. Yuan, "Deep learning optical-sectioning method," Opt. Express 26, 30762–30772 (2018).
21. X. Li, Y. Li, Y. Zhou, J. Wu, Z. Zhao, J. Fan, F. Deng, Z. Wu, G. Xiao, J. He et al., "Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit," Nat. Biotechnol. 41, 282–292 (2023).
22. K. Ning, B. Lu, X. Wang, X. Zhang, S. Nie, T. Jiang, A. Li, G. Fan, X. Wang, Q. Luo, H. Gong, and J. Yuan, "Deep self-learning enables fast, high-fidelity isotropic resolution restoration for volumetric fluorescence microscopy," Light: Sci. Appl. 12, 204 (2023).
23. Z. Lu, Y. Liu, M. Jin, X. Luo, H. Yue, Z. Wang, S. Zuo, Y. Zeng, J. Fan, Y. Pang, J. Wu, J. Yang, and Q. Dai, "Virtual-scanning light-field microscopy for robust snapshot high-resolution volumetric imaging," Nat. Methods 20, 735–746 (2023).
24. C. Qiao, D. Li, Y. Liu, S. Zhang, K. Liu, C. Liu, Y. Guo, T. Jiang, C. Fang, N. Li, Y. Zeng, K. He, X. Zhu, J. Lippincott-Schwartz, Q. Dai, and D. Li, "Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes," Nat. Biotechnol. 41, 367–377 (2023).
25. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 2223–2232.
26. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process. 13, 600–612 (2004).
27. D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv:1412.6980 (2014).
28. Q. Zhong, A. Li, R. Jin, D. Zhang, X. Li, X. Jia, Z. Ding, P. Luo, C. Zhou, C. Jiang et al., "High-definition imaging using line-illumination modulation microscopy," Nat. Methods 18, 309–315 (2021).
29. J. Bigün, G. H. Granlund, and J. Wiklund, "Multidimensional orientation estimation with applications to texture analysis and optical flow," IEEE Trans. Pattern Anal. Mach. Intell. 13, 775–790 (1991).
30. M. D. Budde and J. A. Frank, "Examining brain microstructure using structure tensor analysis of histological sections," Neuroimage 63, 1–10 (2012).
31. W. Zhang, J. Fehrenbach, A. Desmaison, V. Lobjois, B. Ducommun, and P. Weiss, "Structure tensor based analysis of cells and nuclei organization in tissues," IEEE Trans. Med. Imaging 35, 294–306 (2016).
32. R. Schurr and A. A. Mezer, "The glial framework reveals white matter fiber architecture in human and primate brains," Science 374, 762–767 (2021).
33. S. Basu, E. Rexhepaj, N. Spassky, A. Genovesio, R. R. Paulsen, and A. Shihavuddin, "FastSME: Faster and smoother manifold extraction from 3D stack," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2018), pp. 2281–2289.
34. C. Pan, O. Schoppe, A. Parra-Damas, R. Cai, M. I. Todorov, G. Gondi, B. von Neubeck, N. Böğürcü-Seidel, S. Seidel, K. Sleiman et al., "Deep learning reveals cancer metastasis and therapeutic antibody targeting in the entire body," Cell 179, 1661–1676.e19 (2019).
35. L. Kong, J. Tang, and M. Cui, "Multicolor multiphoton in vivo imaging flow cytometry," Opt. Express 24, 6126–6135 (2016).
36. J. Bucevičius, G. Lukinavičius, and R. Gerasimaitė, "The use of Hoechst dyes for DNA staining and beyond," Chemosensors 6, 18 (2018).