We converted a solid-glass cannula into a high-resolution widefield fluorescence microscope. This is enabled by calibrating the space-variant point-spread functions of the cannula and applying a nonlinear optimization algorithm to reconstruct object details. The resolution of our system is ∼1 μm, and fluorophore positions are determined to a precision of ∼20 nm. We also obtained images of microglia from fixed slices of mouse brains at various post-natal development stages.

Light-based microscopes reproduce an image from the focal plane onto a conjugate plane on a camera. Although long-working-distance objectives can collect light from a focal plane that is several millimeters into a sample, the depth of penetration is limited. Image quality is diminished because, first, the light-capturing ability of the lens is compromised by distance and, second, light is scattered by the tissue itself. Finally, imaging details deeper in the sample is not possible simply because the bulky objective cannot penetrate the sample. Cannulas can be inserted into tissue to collect signals from deep inside tissue, but the light emerging from the end of the cannula does not form a recognizable image because the walls of the cannula reflect it. Specifically, the cannula produces space-variant point-spread functions (PSFs). Here, we demonstrate Computational-Cannula Microscopy (CCM), which utilizes these space-variant PSFs and applies an optimization algorithm to reconstruct object details with 1 μm resolution. The temporal resolution of our microscope is limited by the frame rate of the camera. We present microscopic images of fluorescent microbeads and of microglia in brain slices obtained at different post-natal developmental stages.

Alternatives to CCM include multi-photon microscopy, which can image at depths of ∼1 mm but not much deeper.1 It requires high-power lasers, and the achievable resolution is limited by the long excitation wavelengths. Another alternative is fiber-bundle-based microendoscopy, where each single-mode fiber in the bundle contributes one pixel of the image.2 However, resolution is then limited by the spacing between the cores. GRIN-lens-based miniaturized microscopes may also be used,3 but the GRIN lens requires a minimum diameter of ∼350 μm. Imaging through multi-mode fibers (diameter = 50 μm to 200 μm) can overcome this limitation.4–6 In this case, wavefront correction using spatial-light modulators is required to compensate for phase dispersion within the multi-mode fibers.7–9 This necessitates coherent photons and hence excludes fluorescence imaging. Image transport via the Anderson localization effect is also possible, but only at limited resolution.10 Recently, imaging through a scattering medium by using the memory effect was demonstrated.11,12 This technique relies on coherent photon interactions and is not readily applicable to fluorescence microscopy. In contrast to these techniques, CCM offers an elegant and inexpensive option for high-resolution microscopy deep inside tissue.

Previously, we demonstrated the imaging of non-fluorescent, non-biological incoherent objects using a cannula and direct-binary-search (DBS)-based reconstruction.13 In a conventional imaging system, each point in an image corresponds to a unique point in the object, with a size that is a fraction of the captured frame. Ideally, image points corresponding to distinct object points do not overlap, that is, there is a one-to-one map between the object and the image. However, this is not strictly necessary. Each point on an object can be mapped onto many points on the image, spanning the entire frame, which is captured on a sensor. In other words, signal originating from distinct object points may overlap in the image. As long as the intensity distribution formed on the sensor is unique for each object point and can be well characterized, one can apply computation to recover the object information. In a cannula, this is achieved by multiple reflections of rays as they pass from a point source at one end to the other. If the point is slightly displaced, the intensity pattern at the other end of the cannula is changed. As a first step, we calibrate the intensity pattern at the output of the cannula as a function of the position of the point source at the input. Since any incoherent object is a linear combination of point sources, one can apply a variety of algorithms to extract the object information from the scrambled image. Previously, we also showed that 3D imaging in a volume in the vicinity of the cannula is feasible. Here, we use CCM to demonstrate 2D widefield fluorescence microscopy of biological samples. 
Compared to our previous paper, the current article reports four main advancements: (1) Localization precision of ∼20 nm with fluorescent spheres; (2) resolution of ∼1 μm, which is close to the diffraction limit of our system; (3) fluorescent images from brain-slice samples of biological significance; and (4) careful analysis of the limitations of CCM based upon correlation theory and the impact of signal sparsity and complexity.
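Because any incoherent object is a linear combination of point sources, the forward model underlying CCM is a single matrix-vector product. The following NumPy sketch illustrates this; the sizes and the random matrix are illustrative stand-ins for the experimental calibration data, not our actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: a 5 x 5 object grid imaged onto a 64 x 64 sensor.
n_points, n_pixels = 25, 64 * 64

# Each column of A is the flattened cannula image (space-variant PSF)
# recorded for one point-source position during calibration.
A = rng.random((n_pixels, n_points))

# An incoherent object is a non-negative weight per grid point:
# here, three point-like "fluorophores".
obj = np.zeros(n_points)
obj[[3, 12, 17]] = 1.0

# Linearity of incoherent image formation: the scrambled camera frame is
# simply the superposition of the corresponding calibration images.
frame = A @ obj
```

Because the map from object to frame is linear and each column of A is unique and well characterized, the object can be recovered computationally from the scrambled frame.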

A schematic of our experimental setup for CCM is shown within the dashed box in Fig. 1(a). Fluorescence from the sample is collected by a cannula (Thorlabs CFM12L02, 225 μm outer diameter, 17 mm length), which is placed in close proximity to the sample as shown. The glass cannula used in the experiment is cut out of a regular multimode fiber and held by an aluminum ferrule. A lens is used to image the intensity distribution on the right end of the cannula onto a CCD camera (Andor Clara). The cannula image formed by a sample of fluorescent microspheres (FluoSphere Orange, diameter = 1 μm) is shown in Fig. 1(b). We also built a conventional widefield microscope (left of the sample in Fig. 1(a)) to serve as a reference for CCM. The excitation beam (λ = 532 nm) was focused using an objective (Olympus UMPlanFL 10×). Both the reference microscope and CCM collected fluorescence from the sample. Filters were used in both optical paths to remove the excitation photons from the fluorescence signal. Note that the lens used to image the cannula face onto the camera is not strictly necessary. In a simpler configuration, we can place the cannula up against the sensor array of the camera; in this case, we will have truly lensless CCM. The field-of-view of CCM is generally limited by the core diameter of the cannula. However, for simplicity in our implementation, the excitation spot limited the effective field-of-view. In the future, for reflection-mode imaging, the excitation beam may be transmitted through the cannula as well. In this case, care must be taken to ensure that the excitation region is uniform.

FIG. 1.

(a) Schematic of the experimental setup. The dashed box delineates the CCM. Fluorescence (red) from the sample propagates through the cannula (225 μm outer diameter and 17 mm length). The intensity pattern at the far-end of the cannula is imaged onto the camera to form the cannula image. A conventional widefield microscope is shown on the left that is used as a reference. The sample is excited (green) via the objective in the reference microscope. Example images of fluorescent microspheres are shown in the second row. (b) Cannula image captured on the camera for CCM. Image (b) is then processed to recover the image (c), which is then convolved with the known microsphere diameter to produce image (d). Scale bar in (b) is 10 μm. The corresponding reference image is shown in (e). Scale bar in (e) is 40 μm and is the same for (c) and (d).


The first step to enable the cannula as a microscope is to generate the PSF calibration data. We achieve this by scanning one fluorescent microsphere in the XY plane, while capturing the corresponding image for each position on the CCM camera. Note that each such image is a PSF of the cannula. However, unlike in conventional ideal optical systems, our PSF is space-variant; in other words, the shape of the PSF changes with the absolute location of the point source within the field-of-view. An image is captured at every 500 nm step of the sample across a field of 50 μm × 50 μm, forming a lookup table comprising 10,000 images. Any fluorescent sample can be considered a linear superposition of point sources. Since the image-transfer process through the cannula is linear, the intensity distribution formed at the far end of the cannula by any fluorescent sample is a linear superposition of the intensity distributions formed by the individual point sources; that is, the image of any fluorescent sample can be expressed as a linear combination of the images collected during calibration. For example, the image formed on the CCM camera by the collection of fluorescent microspheres is shown in Fig. 1(b).
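The bookkeeping for such a lookup table is simple. The sketch below uses the scan parameters from the text (500 nm step, 50 μm × 50 μm field); the small dummy frames and the `psf_index` helper are our own illustrative stand-ins for the real camera captures and scan hardware.

```python
import numpy as np

step_um = 0.5      # 500 nm calibration step
field_um = 50.0    # scanned field along each axis
n = int(round(field_um / step_um))   # 100 positions per axis

# The lookup table maps a grid index (i, j) to the cannula image recorded
# for a point source at (i * step_um, j * step_um). Dummy 8 x 8 frames
# stand in for the real camera captures here.
psf_table = np.zeros((n, n, 8, 8), dtype=np.float32)

def psf_index(x_um, y_um):
    """Nearest calibration grid point for a position inside the field."""
    return int(round(x_um / step_um)), int(round(y_um / step_um))
```

With the frames flattened into columns of a matrix, this table is exactly the system matrix of the linear forward model described above.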

In the next step, we apply a modified DBS algorithm on the captured image (Fig. 1(b)) to recover the details of the object (Fig. 1(c)) (supplementary material14). A further convolution with the known diameter of the microsphere allows us to plot the image shown in Fig. 1(d). We can see that CCM is able to produce images that are in excellent agreement with the conventional microscope (Fig. 1(e)).
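The flavor of a direct-binary-search reconstruction can be conveyed in a short sketch. This toy version, not our modified algorithm (which is described in the supplementary material), flips one binary object pixel at a time and keeps a flip only if it lowers the residual against the measured frame.

```python
import numpy as np

def dbs_reconstruct(A, measured, n_sweeps=5, seed=0):
    """Toy direct binary search.

    A        : (pixels, object points) matrix of calibrated PSF columns.
    measured : flattened cannula image to be explained.
    Returns a binary object estimate (one weight per calibration point).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    model = A @ x
    err = np.sum((model - measured) ** 2)
    for _ in range(n_sweeps):
        for j in rng.permutation(A.shape[1]):
            delta = 1.0 - 2.0 * x[j]          # flip pixel j: 0 <-> 1
            trial = model + delta * A[:, j]   # cheap incremental update
            trial_err = np.sum((trial - measured) ** 2)
            if trial_err < err:               # keep only improvements
                x[j] += delta
                model, err = trial, trial_err
    return x

# Noiseless demo: recover a 3-sparse binary object from its scrambled image.
rng = np.random.default_rng(1)
A = rng.standard_normal((400, 10))
x_true = np.zeros(10)
x_true[[2, 5, 7]] = 1.0
x_hat = dbs_reconstruct(A, A @ x_true)
```

With many more sensor pixels than object points and well-decorrelated PSF columns, this toy search recovers the object in the noiseless case; the real reconstruction must additionally contend with noise and grayscale fluorophore brightness.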

When imaging sparse objects such as the microsphere sample shown in Fig. 1, one can apply localization techniques to further enhance the position information of each fluorophore. In this example, the same raw pixel data from Fig. 1 are used, and pixels are grouped into multiple connected geometries. Then, the weighted centroid of each connected geometry is found and labeled onto the image as shown in Fig. 2(a) (red X markers). The centroid of each connected component is then convolved with a 1 μm-diameter disk, producing a high-contrast microsphere image shown in Fig. 2(b).
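The grouping-and-centroiding step above can be sketched in pure NumPy. The breadth-first flood fill here is a hypothetical stand-in for whatever connected-component routine is actually used, and the synthetic two-blob frame is invented for illustration.

```python
import numpy as np
from collections import deque

def weighted_centroids(img):
    """Group nonzero pixels into 4-connected components and return the
    intensity-weighted centroid (row, col) of each component."""
    seen = np.zeros(img.shape, dtype=bool)
    cents = []
    for seed in zip(*np.nonzero(img)):
        if seen[seed]:
            continue
        comp, q = [], deque([seed])   # BFS flood fill of one geometry
        seen[seed] = True
        while q:
            r, c = q.popleft()
            comp.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and img[nr, nc] > 0 and not seen[nr, nc]):
                    seen[nr, nc] = True
                    q.append((nr, nc))
        w = np.array([img[p] for p in comp])       # pixel intensities
        rc = np.array(comp, dtype=float)           # pixel coordinates
        cents.append(tuple((w[:, None] * rc).sum(0) / w.sum()))
    return cents

# Synthetic reconstructed frame with two bright blobs.
img = np.zeros((64, 64))
img[10:13, 10:13] = 1.0   # uniform 3 x 3 blob, centroid (11, 11)
img[40:42, 50:53] = 2.0   # brighter 2 x 3 blob, centroid (40.5, 51)
cents = weighted_centroids(img)
```

Each centroid is then convolved with a 1 μm-diameter disk to render the final image, as in Fig. 2(b).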

FIG. 2.

Localization-precision and resolution of CCM. (a) Red crosses showing centroids of fluorophores overlaid on the raw image (same as Fig. 1(c)). (b) Reconstructed image of the same object using 1 μm-disk convolution. Scale bar is 10 μm. Experimental results of sensitivity analysis showing reconstructed displacement as a function of actual stage displacement along the (c) X- and (d) Y-axes. (e) Localization of a single fixed fluorophore 100 times reveals a standard deviation of 21.7 nm in X and 10.4 nm in Y. (f) Fluorescent microspheres showing a resolution of ∼1 μm. The cannula image is on the left and the corresponding conventional widefield image is on the right.


To determine the precision of our localizations, we moved a single fluorescent bead 10 times in 100 nm steps (the smallest step allowed by the motorized stage) along both the X and Y directions, and the image at each location was reconstructed. The location of the microsphere was determined using the weighted centroid. The estimated displacement of the centroid was compared with the stage displacement, as plotted in Fig. 2(c) for X and Fig. 2(d) for Y. The measurement was repeated 10 times in order to average out any random displacement errors due to the stage. The results indicate that we are able to sense position changes as small as 100 nm with an average standard deviation of 30 nm along the X-axis and 52 nm along the Y-axis (supplementary material14). Thus, CCM is able to localize sparse fluorophores to a precision that is smaller than the calibration step size (500 nm). The fits of the estimated versus intended steps along both axes show excellent linearity, with slopes of 1 along the X-axis and 1.15 along the Y-axis.
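This analysis (a linear fit of estimated versus commanded displacement, plus the per-position scatter) can be sketched as follows. The replicate data here are synthetic, with invented ∼30 nm noise; the actual measurements are in the supplementary material.

```python
import numpy as np

# Synthetic stand-in for the experiment: 10 commanded 100 nm stage steps,
# each localized 10 times, with invented 30 nm localization noise.
rng = np.random.default_rng(2)
true_nm = np.arange(1, 11) * 100.0                        # stage positions
est_nm = true_nm + rng.normal(0.0, 30.0, size=(10, 10))   # 10 repeats each

# Linearity: slope of the mean estimated position vs. stage position.
slope, intercept = np.polyfit(true_nm, est_nm.mean(axis=0), 1)

# Precision: average per-position scatter of the repeated localizations.
precision_nm = est_nm.std(axis=0).mean()
```

A slope near unity confirms that the centroid estimates track the stage, while the per-position standard deviation quantifies the localization precision.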

To determine the variance in our localization, we imaged and reconstructed a single fluorescent microsphere 100 times. The location of the microsphere was found for all 100 frames (exposure time per frame was 3 s), and the scatter plot in Fig. 2(e) graphs the 100 locations relative to the mean values of x and y. The calculated standard deviation was 21.7 nm along the X-axis and 10.4 nm along the Y-axis. In this case, since we did not move the stage, the localization precision is determined only by the details of the space-variant PSFs.

Resolution is defined as the smallest spacing between two points that can be distinguished by the microscope. In Fig. 2(f) (left panel), we imaged a field of three fluorescent microspheres and confirmed that the resolution of CCM is ∼1 μm.14 This is consistent with the expected resolution (∼0.93 μm) determined by the emission wavelength (532 nm) and the objective numerical aperture (NA = 0.3). Note that the NA of the cannula is 0.39, so the objective in front of the CCM camera essentially limits the overall NA in this preliminary implementation. The widefield image (right panel) verified that our cannula image is accurate.

In order to demonstrate the applicability of CCM, we next imaged multiple biological samples. Figure 3 (left panels) shows cannula images of Hoxb8 microglia driven by Rosa-Tdtomato in 20 μm-thick brain-slice sections. We chose microglia because they are dimensionally smaller than neurons (10–15 μm), are critically involved in injuries and diseases, and continuously monitor the brain microenvironment with high motility. Further, no simple imaging techniques are available to monitor microglia and to reconstruct their properties in their natural environment or during brain-injury conditions. To establish the principle of CCM, we imaged microglia in fixed brain slices. The right panels in Fig. 3 show the corresponding images acquired using the reference (conventional widefield) microscope, while the left panels show the CCM images. Samples were obtained at various postnatal stages (P0, P4, P8, P16, and P32). At later postnatal stages, the complexity of microglial projections increases significantly (Figs. 3(e)–3(i) and Fig. S10 in the supplementary material14). Clearly, the agreement between the CCM images and the conventional images is excellent. CCM images the primary, secondary, and tertiary projections of Hoxb8 microglia and reconstructs fine details of the complex projections. Some loss of signal occurs, since our current implementation of CCM rejects out-of-focus fluorescence (which nevertheless increases image contrast). As with most computational-imaging problems, CCM is particularly well suited to sparse samples (see Fig. S8 in the supplementary material14). As the density and complexity of the target geometry increase, CCM is expected to perform worse. However, our experimental results with all postnatal-stage microglia, together with the simulation analyses in the supplementary material,14 suggest that CCM is sufficient for many biologically complex samples. Note that in Figs. 3(e)–3(i), we stitched together multiple reconstructed cannula images to increase the effective field-of-view.14

FIG. 3.

Exemplary images from mouse-brain slices. In each panel, the cannula-image is on the left and the corresponding widefield image is on the right. Hoxb8 microglial cells expressing RosaTdTomato reporter in brain slices obtained from postnatal day: (a)–(c) 0, (d)–(f) 4, (g) and (h) 8, and (i) 16 developmental stage of mouse brain. Early development stage microglia show a ∼10 μm cell body and projections extending in multiple directions. Scale bars in (a)–(h) are 10 μm and that in (i) is 20 μm.


The quality of the cannula images is limited primarily by the signal-to-noise ratio of the acquired frames. In the samples studied in Fig. 3, autofluorescence from unlabeled parts of the sample appears to be the limiting factor. Nevertheless, careful numerical analysis indicates that our direct-binary-search reconstruction is more tolerant to noise than linear matrix inversion.14 The analysis reveals that the minimum signal-to-noise ratio required to properly reconstruct an image is inversely proportional to the sparsity of the target object. The simulations further show that our reconstruction should work even for complex biological samples, as long as the signal-to-noise ratio is reasonably high; this remains true even when the pattern complexity (and density) is high, as discussed in Sec. VI of the supplementary material.14 There is an analogy between CCM and the field of compressed sensing.15 The decomposition of a complex object into a linear combination of space-variant PSFs is analogous to the set of (typically sparse) basis functions used in compressed sensing. Therefore, highly effective algorithms, particularly ones that exploit a priori information, can be readily applied to CCM as well.

Here, we demonstrated CCM, which uses a rigid glass cannula for imaging by computationally reversing the image distortion that naturally occurs as light propagates through the cannula. CCM can be readily applied to image deep inside tissue, since the cannula can be surgically inserted. In our current implementation, CCM is limited to 2D samples; however, we have previously shown that related techniques can be applied to 3D microscopy.13 Although our approach utilized a cannula, the technique can be extended to any system that generates space-variant point-spread functions, provided those functions can be accurately calibrated.

We thank Manassa Gudheti, Carl G. Ebeling, and Sean Merrill for assistance with preparation of the samples. We also thank Erik Jorgensen and Amihai Meiri for comments on the manuscript. G.H. was supported by a Solzbacher and Chen graduate fellowship. R.M. acknowledges funding from NSF Award No. 10030539 and the Utah Science Technology and Research (USTAR) initiative. N.N. and M.R.C. acknowledge funding from NIH R01 Grant No. MH093595.

1. N. G. Horton, K. Wang, D. Kobat, C. G. Clark, F. W. Wise, C. B. Schaffer, and C. Xu, Nat. Photonics 7, 205–209 (2013).
2. A. Shahmoon, S. Aharon, O. Kruchik, M. Hohmann, H. Slovin, A. Doupilk, and Z. Zalevsky, Sci. Rep. 3, 1805 (2013).
3. K. K. Ghosh, L. D. Burns, E. D. Cocker, A. Nimmerjahn, Y. Ziv, A. E. Gamal, and M. J. Schnitzer, Nat. Methods 8, 871–878 (2011).
4. J. Kim, W. Lee, P. Kim, M. Choi, K. Jung, S. Kim, and S. Yun, Nat. Protoc. 7, 1456–1469 (2012).
5. B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. Cheung, and M. J. Schnitzer, Nat. Methods 2, 941–950 (2005).
6. R. N. Mahalati, R. Gu, and J. M. Kahn, Opt. Express 21, 1656–1668 (2013).
7. T. Čižmár and K. Dholakia, Nat. Commun. 3, 1027 (2012).
8. I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, Opt. Express 20(10), 10583–10590 (2012).
9. Y. Choi, C. Yoon, M. Kim, T. D. Yang, C. Fang-Yen, R. R. Dasari, K. Lee, and W. Choi, Phys. Rev. Lett. 109, 203901 (2012).
10. S. Karbasi, R. J. Frazier, K. W. Koch, T. Hawkins, J. Ballato, and A. Mafi, Nat. Commun. 5, 3362 (2013).
11. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, Nature 491, 232–234 (2012).
12. O. Katz, P. Heidmann, M. Fink, and S. Gigan, Nat. Photonics 8, 784–790 (2014).
13. G. Kim and R. Menon, Appl. Phys. Lett. 105, 061114 (2014).
14. See supplementary material at http://dx.doi.org/10.1063/1.4923402 for detailed description of the experiments and supporting data.
15. V. M. Patel and R. Chellappa, Sparse Representations and Compressed Sensing for Imaging and Vision (Springer Science & Business Media, New York, 2013).
