We present a comprehensive guide to light-sheet microscopy (LSM) to assist scientists in navigating the practical implementation of this microscopy technique. Emphasizing the applicability of LSM to image both static microscale and nanoscale features, as well as diffusion dynamics, we present the fundamental concepts of microscopy, progressing from beam profile considerations to image reconstruction. We outline key practical decisions in constructing a home-built system and provide insight into the alignment and calibration processes. We briefly discuss the conditions necessary for constructing a continuous 3D image and introduce our home-built code for data analysis. By providing this guide, we aim to alleviate the challenges associated with designing and constructing LSM systems and offer scientists new to LSM a valuable resource in navigating this complex field.
I. INTRODUCTION
Fluorescence microscopy techniques offer non-invasive means to study biological samples in situ. An ideal fluorescence microscope should possess high sensitivity, signal-to-background ratio (SBR), and spatial resolution, along with fast acquisition speeds and minimal photodamage. A sensitive microscope typically exhibits lower background intensity relative to the sample emission, allowing for the detection of weaker signals amidst background noise. Fluorescence microscopy inherently benefits from high contrast imaging due to the substantial quantum yield of fluorophores, enhancing image quality and enabling precise visualization of low-abundance molecules or events.1–3 High spatial resolution is crucial for resolving fine structural details, particularly at the subcellular level. Fast image acquisition with low source intensity, facilitated by using conventional detectors like fast frame rate cameras, can help mitigate photodamage. Being able to collect at higher frame rates also improves the resolvable timescales for more dynamics-based analysis.
Epifluorescence microscopy, also known as wide-field microscopy, is the original fluorescence microscopy technique developed in the early 1900s and is a common method used for biological imaging.4–7 In epifluorescence microscopy, an area of the order of 100 μm–10 mm of the sample is illuminated along the entire axial detection axis. Consequently, the extensive excitation causes fluorophores within the sample both above and below the focal plane to emit, resulting in out-of-focus signal.8,9 The broad axial excitation with little selectivity leads to limitations such as a lower contrast image, a limited optical sectioning capability, and an increase in photodamage proportional to the number of planes imaged.10–12 However, an advantage of epifluorescence microscopy is the ability to increase the field of view (FOV) in a given timescale when measuring sample dynamics. Since epifluorescence excites the entire sample at once, it is possible to fully utilize the fast frame rates of the chosen detector without the need to scan across the sample to generate an image.
Laser scanning confocal microscopy (LSCM), first reported in the mid-1900s, was developed to improve upon the image resolution of epifluorescence microscopy and is also popular for biological samples.13–15 Unlike epifluorescence microscopy, which has a focal plane, LSCM has a focal volume, defined by the beam waist of the focused point-like beam and the aperture of a pinhole along the illumination path.4,16 By varying the pinhole size, it is possible to control the background signal and photodamage.17 The incorporation of a pinhole along the detection path blocks out-of-focus light, thus improving the axial resolution, optical sectioning, and image contrast.10,11 However, while the lateral area illuminated in LSCM is significantly smaller (0.1–1 μm²) than in epifluorescence microscopy, the sample is still illuminated along the entire detection axis, risking photodamage. Another limitation of LSCM is the timescale for image acquisition. Since LSCM uses a point-like excitation, the beam must be scanned across the sample, requiring multiple scanning iterations to construct an image. Thus, imaging a single plane is a relatively slow process at approximately 1 frame every second.18 Moreover, since there is a temporal variation between each point in the image, LSCM is not viable for some dynamic samples. Considering the constraints of both epifluorescence and scanning confocal microscopy, including limited 3D sectioning capabilities, potential for inducing sample photodamage, and susceptibility to detecting excess signal beyond the focal plane, an enhanced imaging technique is needed for more precise and comprehensive analysis.
Light-sheet microscopy (LSM) has gained acclaim in biological imaging within the past few decades due to the selectivity of the illuminating focal plane, which reduces photodamage and improves SBR (Fig. 1).2,3,19,20 The selective illumination is due in part to the thin focal volume of the sheet, with beam thicknesses reaching as small as 400 nm21–23 in specialized setups such as lattice LSM, while more conventional setups afford thicknesses of a few μm.24,25 The ability of LSM to capture entire wide fields of view simultaneously enhances image acquisition speed, making it suitable for dynamic imaging experiments over extended periods such as monitoring zebrafish embryo development,26,27 cell migration,28,29 or neuronal activity,30,31 where the timescales range from ms to hours. In addition to these dynamic samples, LSM has also been used to characterize structural features at high resolution within samples ranging from zebrafish32,33 to single cells.34,35 While this is an impressive display of imaging, all listed example analytes have features of the order of μm and, therefore, only require μm resolutions. As such, a 3D reconstruction of most biological samples can be acquired using commercially available LSM setups. Techniques such as flow cytometry36,37 and two-photon excitation38,39 are well suited for diffraction-limited levels of analysis.
LSMs with super-resolution capabilities40 are necessary for imaging nanoscale features at or below the Abbe diffraction limit41 of visible light (approximately 200 nm) or dynamic samples at timescales faster than a few ms. Recent advancements in super-resolution LSM include the incorporation of techniques such as single-molecule localization,42,43 structured illumination microscopy,44,45 and correlation-based microscopy.46,47 Generally, to achieve super-resolution, a more specialized system is needed, which will require a custom setup to be constructed. Designing such systems requires careful consideration of optical components and alignment processes. In our guide, we detail considerations for building a home-built LSM for imaging non-traditional systems at nanoscales. We discuss optical component selection, alignment procedures, and our design choices for measuring biomolecule diffusion dynamics within an extracellular matrix (ECM) analog, covering alignment, calibration, data acquisition, and 3D reconstruction processes.
II. OPTICAL HARDWARE COMPONENTS TO CONSIDER PRIOR TO MICROSCOPE CONSTRUCTION
Each light-sheet setup is unique, tailored to both the chosen sample and the desired spatiotemporal resolutions. When beginning to consider the configuration of the microscope, first consider the characteristics of the sample, as they influence subsequent component choices. For instance, if the sample requires a specific orientation, this influences objective selection and the design of the sample chamber. Imaging zebrafish necessitates adherence to a substrate to minimize movement, while for smaller entities like cells, immersion in a suitable buffer solution and substrate adherence are essential. Dynamic data collection, such as protein diffusion dynamics, imposes distinct experimental requirements, notably in resolving diffusion dynamics (10–100 μm²/s), compared to static samples. Regardless of the sample, identifying necessary experimental conditions prior to construction is crucial. In this section, we outline hardware considerations encompassing laser properties, sheet dimensions, objective geometry, sample immersion and orientation, and camera sensitivity and timescales.
A. Beam profiles that have been used in LSM
Three beam profiles—Gaussian,23,30 Bessel,24,48 and Airy49,50—have proven effective in LSM (Fig. 2). Each offers distinct advantages, although no single profile suits every experiment perfectly. The primary considerations for profile selection are the desired axial resolution and lateral FOV. Here, we will explore the characteristics, advantages, and drawbacks of these beam profiles for LSM setups. Notably, we will omit discussion of less conventional options like lattice profiles, which entail the complexity of specialized LSM setups requiring custom objectives and programmed coordination between electronically controlled mirrors, objectives, and the sample stage.22,51–53 Commercial lattice LSM systems are available to address such needs.54–56 Among the three discussed profiles, Gaussian profiles are the simplest, with Bessel and Airy profiles typically formed from Gaussian beams. While the intensity profile of a Gaussian beam can be solved analytically in all three dimensions, such analysis is not straightforward for Bessel and Airy profiles despite their derivation from Gaussian beams. Thus, we simplify the analysis by considering two dimensions for Bessel and Airy intensity profiles.
1. Gaussian
Gaussian beams with narrow waists offer higher axial resolution but have smaller propagation lengths along the illumination axis. While the sectioning will be better in these systems, it will inevitably limit the possible FOV when imaging. Conversely, wider beam waists sacrifice axial resolution for increased propagation lengths and FOVs.24,58 Additionally, Gaussian beams encounter scattering and absorption in biological samples, resulting in irregular patterns like shadows and streaks as light propagates through tissues.59–61 The penetration depth inside tissues is, thus, decreased, making imaging the entire volume more challenging for thicker samples.
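To make this tradeoff concrete, the standard Gaussian-beam relations can be written as below; the symbols (w_0 for the waist, z_R for the Rayleigh range, b for the confocal parameter) follow common convention and do not necessarily match this article's own equation numbering or notation.

```latex
% Standard Gaussian-beam relations (conventional notation, not this article's equation numbering)
w(z) = w_0\sqrt{1 + \left(\frac{z}{z_R}\right)^{2}}, \qquad
z_R = \frac{\pi w_0^{2} n}{\lambda}, \qquad
b = 2 z_R .
```

Because b scales with the square of the waist, halving the waist (and, hence, the sheet thickness) shrinks the usable propagation length, and therefore the FOV, by roughly a factor of four.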
2. Bessel
In addition to maintaining axial resolution while propagating over longer distances [Fig. 2(e)], Bessel beams penetrate scattering media more deeply than Gaussian beams. Bessel beams possess the ability to "self-heal" in scattering tissue, allowing for increased penetration depth and retention of beam shape.24,58,63,74 This difference in penetration depth was demonstrated by Purnapatra et al. by imaging fluorescent polymer-coated yeast cells suspended in a tissue-like gel.75 From Eq. (6), the confocal parameter of a Bessel beam depends on the waist of the input Gaussian beam. To achieve the same confocal parameter, the two beam types require different input Gaussian beam waists, set by the chosen optical components; thus, the penetration of each beam will also differ. In practical use, the Bessel beam penetrates to approximately 650 μm compared to the limit of the Gaussian beam of around 200 μm. However, it is worth noting that the overall image contrast collected with a Bessel beam may be reduced compared to a Gaussian beam due to the multi-lobe nature of a Bessel beam, potentially resulting in out-of-focus excitation.76,77
3. Airy
Similar to Bessel beams, Airy beams maintain comparable axial resolution and exhibit "self-healing," enabling increased maximum depth of penetration compared to Gaussian beams.58,82–84 Nylk et al. observed that Airy beams could penetrate up to approximately 330 μm into mouse brain tissue, whereas Gaussian beams reached only about 250 μm with the same beam waists of approximately 1.2 μm.85 However, the depth achieved with Airy beams, although significant, is still less than the approximate 650 μm penetration depth reported for Bessel beams by Purnapatra et al.75,85 Additionally, Airy beams may yield lower contrast images due to their multi-lobe intensity structure as well as necessitate the use of deconvolution methods to correct for asymmetric sidelobe structures.49,76,77
B. Sheet dimensions determine the microscope sectioning capabilities
In this section, we take a close look at the important considerations when gathering components for constructing a microscope, ranging from optical parts to the practicalities of assembly. Additionally, we describe some calculations to help estimate the dimensions of the light-sheet based on the planned optical path. These calculations are crucial for optimizing a LSM setup and ensuring it meets the desired imaging needs.
1. Static or dynamic sheets can be formed
Light-sheets are created through the manipulation of Gaussian, Bessel, or Airy beams.3,31,86 As depicted in Fig. 3, cylindrical lenses focus the excitation beam into a light-sheet in selective-plane illumination microscopy (SPIM),19,61 while in digitally scanned laser light-sheet microscopy (DSLM), a virtual light-sheet is formed by rapidly moving a focused beam at the focal plane of the detection lens.87,88
Cylindrical lenses produce an anisotropic light-sheet. Unlike spherical lenses [Fig. 4(a)], which interact with light symmetrically [Fig. 4(c)], cylindrical lenses [Fig. 4(b)] converge or diverge light asymmetrically depending on orientation [Fig. 4(d)].89,90 One cylindrical lens can generate a triangular sheet along the optical axis, while a pair of either concave or convex lenses oriented orthogonally are needed to produce a diverging rectangular light-sheet.91 In general, SPIM systems are less expensive and simpler to construct than their DSLM counterparts. However, SPIM light-sheets may exhibit intensity variation based on the beam profile and detection objective FOV. Additionally, the detection path may collect some out-of-focus light depending on the sheet thickness.
A virtual light-sheet is generated by swiftly scanning an isotropic beam laterally to simulate a planar light-sheet. This method in DSLM requires minimal prior manipulation, resulting in fewer optical aberrations.87 An f-θ lens is employed to convert the scan range of the beam into a vertical array of parallel beams.88 The control of the scanning range theoretically enables variation in sheet thickness. Additionally, DSLM ensures uniform illumination across the sample sections, unlike SPIM, but also increases the overall laser power necessary to provide a comparable signal as the beam becomes essentially time-shared.3,87 Moreover, the need for rapid beam movement entails the use of electronic scanning mirrors, escalating system cost and complexity. However, the scanning can be synchronized with a "rolling shutter" exposure mode (Sec. II E), effectively functioning as a digital confocal slit to eliminate out-of-focus scattered light, enhancing image contrast and depth.57
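For the DSLM geometry, the defining property of the f-θ scan lens is that the displacement at its focal plane is linear in the scan angle; written in conventional notation below as an illustration rather than a formula taken from this article,

```latex
% f-theta scan lens: focal-plane displacement is linear in the (small) scan angle
y = f_{\theta}\,\theta, \qquad h_{\text{sheet}} \approx f_{\theta}\,\Delta\theta ,
```

so the virtual sheet height h_sheet is set simply by the scan-lens focal length f_θ and the total mirror scan angle Δθ (in radians).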
2. Theoretical calculations to determine the sheet thickness
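The calculations in this subsection rely on this article's Eqs. (4)–(10). As a minimal sketch of the kind of estimate involved, the Python snippet below applies the standard diffraction-limited Gaussian focusing relations; the focal length and input beam radius are placeholder values chosen only for illustration, not the parameters of the setup described later in this Tutorial.

```python
import numpy as np

# Standard Gaussian-optics estimate of light-sheet dimensions.
# Placeholder inputs (NOT the values of the setup described in this article):
wavelength = 561e-9      # excitation wavelength (m)
n_medium = 1.33          # refractive index of the immersion medium (water)
f_illum = 9e-3           # effective focal length of the illumination optics (m)
w_input = 1.5e-3         # 1/e^2 radius of the collimated beam at the lens (m)

# Diffraction-limited waist at the focus of a lens: w0 ~ lambda * f / (pi * w_input)
w0 = wavelength * f_illum / (np.pi * w_input)

# Rayleigh range and confocal parameter set the usable field of view along the sheet
z_R = np.pi * w0**2 * n_medium / wavelength
confocal_parameter = 2 * z_R

print(f"sheet half-thickness (waist) w0 = {w0*1e6:.2f} um")
print(f"usable sheet length (confocal parameter) = {confocal_parameter*1e6:.2f} um")
```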
C. Objective lenses will dictate the image quality and LSM geometry
1. Immersion media is dependent on the sample
The main immersion media for light microscopy are oil, air, and water. The application of each medium to samples, such as zebrafish, cells, and diffusing proteins, is discussed here, considering what conditions necessitate the use of each immersion medium. The NA of an objective is also influenced by the refractive index of the medium [Eq. (12)]. A comparison of NAs for commercially available objectives with each immersion medium is presented in Table I.
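For reference, the relation invoked above can be written in conventional notation as below, where α is the half-angle of the collection cone; the approximate upper bounds noted in the comment are well-known values for commercial objectives and are not entries taken from Table I.

```latex
% Numerical aperture in terms of the immersion medium's refractive index n
% and the half-angle alpha of the light cone accepted by the objective.
% Practical maxima for commercial objectives are roughly NA ~ 0.95 in air (n = 1.00),
% ~1.2 in water (n = 1.33), and ~1.4-1.5 in oil (n ~ 1.5).
\mathrm{NA} = n \sin\alpha
```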
Oil, with its high refractive index (∼1.51),101,102 is often used in super-resolution imaging due to its compatibility with high NA objectives. This medium is commonly employed in inverted microscopes with samples mounted on glass with a similar refractive index (1.518).103–105 Oil immersion objectives require direct objective/oil and oil/glass interfaces, limiting usage primarily to one-objective setups,106,107 making oil suitable for all three experimental examples.
Air offers versatility in objective orientation but has the lowest refractive index (1.00), resulting in lower NA objectives and limited application in nanoscale super-resolution tasks, such as sub-cellular imaging or tracking protein diffusion. Samples with larger micrometer to millimeter features, such as zebrafish,108 are well-suited for air objectives. Although not commonly used in LSM for biophysical imaging, given samples are commonly in aqueous environments, air objectives are simpler to implement if super-resolution is not required. If air objectives are used, they are regularly paired with another immersion medium.109,110
Water, while less versatile than air, provides a more adaptable geometry than oil. Proper index matching still necessitates an aqueous medium, but water objectives are not restricted to one-objective setups. With a refractive index of 1.33, water objectives offer a decent range of NAs and are either water immersion or water dipping. Water immersion objectives are designed to be used with a drop of water placed between a glass coverslip and the objective lens. Water dipping objectives do not use a coverslip and instead are meant to have the front lens either dipped or submerged into the sample medium itself. Water objectives can utilize various aqueous media, such as buffer solutions, offering flexibility. Given that both single cells and diffusing proteins typically require a buffer solution, water serves as an ideal medium for these experiments.111,112 Overall, water immersion is a compromise between the versatility of air and the higher resolution capabilities of oil.
2. Objective orientation is dependent on immersion media
In most LSM setups, the objective geometry consists of two separate optical paths, resulting in an uncoupled illumination and detection. Typically, one objective forms the light-sheet while the other collects the emitted light for detection [Figs. 5(a)–5(e)]. It is crucial to consider the WD of both objectives to ensure the confocal parameter [Eq. (4)] of the illumination objective aligns with the focal plane of the detection objective. Although using identical objectives facilitates objective housing and WD matching,23,29 objectives with different magnifications, NAs, and/or WDs can still be utilized.87,113 The choice depends partly on sample size and the physical gap between the objectives, which is often limited, requiring both objectives be designed for the same immersion media, such as air/air or water/water. However, some geometries do allow for air/water,114,115 air/oil,116 or oil/water117 mismatches in media. It is also worth noting that two-objective orientations are not typically designed to work with oil immersion media, with some exceptions mentioned later in this section.
Some LSMs feature a single optical path where illumination and detection are coupled through a single objective [Figs. 5(f) and 5(g)].98,106,118,119 These one-objective configurations are more conventional and easily integrated into standard inverted microscope bodies. Since there is only one objective, immersion media and WD matching are not concerns, allowing for the use of any immersion media when designing a one-objective LSM system. Less common LSM setups use three or more optical paths,60,61,120,121 but these are beyond the scope of this tutorial. Here, we present one- and two-objective orientations, starting with the more complex two-objective options.
The standard light-sheet configuration, L-SPIM, features orthogonal optical paths, with one objective forming the light-sheet and the other collecting emitted light [Fig. 5(a)]. First reported in the early 1900s, L-SPIM is the oldest two-objective setup.125 Although originally the sheet was formed with just a cylindrical lens in the illumination path,19,99,126 modern setups often include an objective after the cylindrical lens to further focus the light-sheet.127–129 L-SPIM is the simplest geometry and is relatively easy to incorporate into an inverted microscope body, with either the detection or illumination objective being freely mounted above the sample stage. This orientation is suitable for analysis ranging from monitoring the mobility of HP1 in cell nuclei23 to determining the structure of whole insects.130 Additionally, L-SPIM can accommodate both air and water immersion media, though the latter requires a special sample chamber as discussed in Sec. II D.
Another two-objective configuration, V-SPIM, maintains objective orthogonality with a 45° orientation relative to the sample [Figs. 5(b)–5(d)]. Various V-SPIM orientations, such as inverted SPIM (iSPIM), open-top SPIM (otSPIM), and dual-illuminated iSPIM (diSPIM), are primarily suited for air or water immersion media. However, one thing to consider when choosing any of the V-SPIM configurations is the WD of the objectives, which should be long enough to ensure the sample does not contact the lenses themselves.
iSPIM and iSPIM-like geometries are frequently reported two-objective orientations, often due to the prevalence of lattice LSM.22,51–53 It may be slightly misleading to those accustomed to traditional microscopy techniques, as the “inversion” in iSPIM refers not to the orientation of the objectives themselves, but rather to the capability of integrating the SPIM method onto an inverted microscope.131 To mount the two objectives onto the microscope, they are suspended above the sample [Fig. 5(b)].131–133 The overall geometry of iSPIM accommodates the use of both air and water as immersion media, with water being most common in super-resolution applications (see discussion in Sec. II C 1). The reduced distance between the objective and the sample can pose challenges, making iSPIM less ideal for super-resolution imaging of larger samples such as adult zebrafish or cleared tissue samples at least 1 cm in size. However, iSPIM inherently avoids issues like media mismatch or off-axis optical aberrations caused by conventional coverslip mounting, as the image is collected from above, eliminating the need to interact with the sample mount.
otSPIM facilitates easy sample loading and manipulation within the objective gap in an iSPIM setup. In otSPIM, the objectives are positioned below the sample mount, creating an "open-top" design [Fig. 5(c)] that is easily accessible for a multitude of samples and sample chambers.134,135 By situating the objectives at a 45° angle beneath the sample stage, they no longer interface directly with the sample. However, this tilted interface geometry presents challenges due to media mismatch. An additional refractive optic, such as a liquid-filled prism,136,137 solid immersion lens,138,139 or solid immersion meniscus lens,140,141 is required in otSPIM to mitigate off-axis optical aberrations caused by media mismatch. Despite the improvements introduced by the refractive optic, the sample mount itself can introduce astigmatism since the optical axis of the detection objective is tilted. To counter this astigmatism, a single cylindrical lens can be inserted into the imaging path before the detector.134,136 Most objectives selected for otSPIM applications are designed for use in either air or water due to the availability of more reasonable WDs in these immersion media. While it is theoretically feasible to use an oil objective when coupled with a custom oil-matching refractive index prism, this option should only be pursued if the WD is physically practical.
diSPIM effectively eliminates artifacts by isotropically exciting and collecting emission. Prior to this, the discussed objective geometries involved uncoupled illumination and detection paths. However, with side-on illumination from a single objective dedicated to sheet formation, shadow stripes may occur in images due to sample absorption of the excitation source.57,142,143 To mitigate these artifacts, a diSPIM configuration, akin to iSPIM or otSPIM but with both objectives serving as both illumination and detection [Fig. 5(d)], can be employed.144–147 Here, a final image is formed by alternating which objective forms the light-sheet and which one captures the image. When both objectives are identical, the system achieves isotropic resolution and diminishes artifacts caused by sample absorption or scattering.145,148 Given the objective geometry similarities with iSPIM and otSPIM, diSPIM setups encounter similar advantages and limitations. Unique to diSPIM, illuminating the sample from both sides can increase photobleaching and also requires an added level of complexity due to the need for dual illumination and detection paths.
Reflective LSM (RLSM) is a two-objective method that does not require the objectives to be orthogonal. Instead, the illumination objective is positioned nearly in-line with the detection objective, albeit with a slight offset [Fig. 5(e)]. Consequently, an additional reflective component, like a prism149,150 or mirror117,151 set at a 45° angle, is necessary after the illumination objective to direct the light-sheet orthogonally through the sample, hence the name reflective LSM. RLSM accommodates objectives designed for different immersion media within the same setup. With its predominantly vertical geometry, RLSM can be seamlessly integrated into a standard inverted microscope, enabling the use of conventional sample mounting techniques. In this configuration, the sample is positioned at the detection objective, which offers the option of employing objectives with higher NA, such as those designed for oil immersion.151–153 However, the incorporation of a small prism or mirror within the narrower WDs of these objectives introduces additional complexity.
One-objective configurations require supplementary optical components to guide the light-sheet through the sample while simultaneously enabling the objective to collect sample emission. Single-objective SPIM (soSPIM) resembles RLSM in using a micro-fabricated mirror98,118,119 or prism,154,155 set at a 45° angle to direct the light-sheet orthogonally through the sample [Fig. 5(f)]. soSPIM offers the advantage of requiring only a single objective, facilitating easier integration of the light-sheet into a conventional inverted microscope. However, only using one objective can be limiting since the additional reflective component requires careful sample mounting to ensure the sample is within the FOV of the objective while also allowing the light-sheet to reach the reflective optic before entering the sample. Custom sample holders with embedded reflective optics are commonly used to ensure orthogonal penetration of the light-sheet into the sample.156–158 While theoretically compatible with all immersion media, soSPIM may still encounter refractive index mismatches depending on the chosen sample holder.
Highly inclined and laminated optical sheet (HILO) microscopy, often implemented on total internal reflection fluorescence (TIRF) microscopes [Fig. 5(g)], utilizes a single-objective configuration. HILO employs an illumination beam that enters the objective just below the critical angle, generating an inclined sheet.159–161 This tilting of light is achieved with a higher NA objective in conjunction with a translation optic, typically already incorporated into a conventional TIRF microscope setup. The excitation beam is translated somewhere between the edge of the back focal plane of the objective (where TIRF occurs) and the center of the objective (where epifluorescence occurs).106,162,163 Its easy integration into existing TIRF microscope setups commonly found in optics labs renders HILO the most widely adopted light-sheet method in many core facilities.164 TIRF can theoretically be achieved with any of the previously discussed immersion media, making HILO also a viable option. However, water and oil are better suited to this geometry due to their higher inherent NAs. Since the sample is mounted above the objective without additional optical components limiting its placement, various mounting options are available. The primary limitation of the HILO method is the shallower penetration depth of the light-sheet, reaching up to approximately 10 μm,165 compared to other LSM options compatible with samples ranging from 10 μm to 1 mm thick.166 Additionally, since HILO requires the beam to be at an angle between the critical angle of TIRF and the 0° angle of epifluorescence, there is variability in the sheet thickness, which also results in the sheet no longer being aligned with the image plane of the objective.
D. Objective orientation determines the compatible sample size and mounting for imaging
LSM requires a level of creativity in how to introduce the sample to the imaging plane, considering both objective orientation and sample size. Minimizing optical aberrations due to media mismatch is also crucial. Figure 6 outlines commonly reported sample mounting techniques. Regardless of the chosen method, it is essential to plan how to acquire a 3D image. Typically, the sample is translated or rotated within the focal plane of the objective(s). Here, we explore a subset of mounting options, focusing on those not tied to specific experiments. Table II includes common immersion media and objective geometries paired with each technique.
TABLE II. Common immersion media and objective geometries paired with each sample mounting technique.

| Sample mounting | Immersion media | Objective geometry |
|---|---|---|
| Capillary | Water | L-SPIM, RLSM |
| Cuvette | Air, water | L-SPIM, RLSM |
| Coverslip | Air, water, oil | iSPIM, otSPIM, diSPIM, RLSM, soSPIM, HILO |
| Petri dish | Air, water, oil | iSPIM, otSPIM, diSPIM, RLSM, soSPIM, HILO |
Embedding techniques are employed for imaging larger samples like zebrafish, involving the placement of the sample within an optically transparent medium encased in a transparent holder [Fig. 6(a)]. In many LSM applications, water serves as the immersion medium, leading to the common use of agarose hydrogel to physically immobilize the sample. Agarose has a high water content, resulting in a refractive index (1.335) similar to water (1.33),99,167 facilitating imaging through the hydrogel with minimal aberrations. Capillary tubes are often utilized as sample holders in this mounting technique, offering flexibility in their application. Initially, these tubes were used to shape the agarose into a cylindrical form, with a small plunger extruding the gel to align it within the focal plane of the objective.19,168,169 However, there is a delicate balance in agarose concentration (wt. %) to ensure structural integrity without adversely affecting the sample. Higher concentrations may exert compression forces on or restrict the movement of living samples, potentially hindering studies focused on long-term dynamics such as embryo development.20,87 A modern alternative involves utilizing thin fluorinated ethylene propylene (FEP) capillary tubes instead of silica glass. FEP has a refractive index of 1.338, similar to water and agarose, enabling direct sample imaging through the tube.170–172 With no need for hydrogel extrusion, lower agarose concentrations can be used, effectively immobilizing the sample without impeding any dynamic studies.170 Additionally, to match the refractive index of water, capillary tubes are often suspended within a water-tight chamber, ranging from custom-made173 to commercially available.174
Four-sided cuvettes offer structural stability benefits akin to capillary tubes [Fig. 6(b)]. Typically, cuvettes are treated similarly to capillary tubes, employing a gelling agent to suspend the sample within the chamber. This method accommodates various sample sizes, from seaweed fragments175 to breast cancer cells.176 Alternatively, cuvettes can serve as pseudomedia baths for samples. Here, cells are cultured on a conventional coverslip, which is affixed to the bottom of the cuvette, followed by filling with the desired media.177 While cuvettes provide a straightforward commercially available mounting option, they have inherent drawbacks. Primarily, although optically clear, four-sided cuvettes are commonly made from materials with higher refractive indices than the sample. Most commercially available cuvettes are crafted from materials like quartz, borosilicate glass, or polystyrene, with refractive indices around 1.459, 1.519, and 1.55, respectively, whereas LSM-imaged samples typically have refractive indices ranging from approximately 1.33 to 1.47.170,178,179 Consequently, this refractive index mismatch leads to spherical aberrations.180 To address this, one might consider using objectives designed for higher refractive index media, such as oil. However, oil immersion objectives, as discussed in Sec. II C 1, have the shortest working distance among immersion media, typically in the range of a few hundred micrometers.95 Given that the thinnest wall of a cuvette is 1 mm, oil immersion objectives would not be suitable in this context.
The simplest mounting techniques, familiar to those knowledgeable of conventional microscopy, include glass coverslips [Fig. 6(c)] and Petri dishes [Fig. 6(d)]. Coverslips, while common, introduce complexity to sample mounting due to objective orientation. When deciding between one- and two-objective orientations, consideration of physical space for the sample is important, with more constraint typically in two-objective geometries (see Sec. II C 2). The physical limitations of a coverslip also impact the size of imageable samples. In standard inverted orientations, samples are imaged from below, requiring the light-sheet to pass through the coverslip and, therefore, the selection of an appropriate coverslip thickness. Typically, a commercially available No. 1.5 coverslip with a thickness range of 160–190 μm is suitable. Opting for an objective geometry that images the sample from above affords more flexibility in sample preparation and immersion media. Since the coverslip is not in the imaging path, refractive index mismatches due to the substrate are less likely, and any glass thickness can be chosen. In such cases, oil immersion is not viable, while both water and air are. When using water, maintaining the objective and sample in the media throughout imaging is necessary, often facilitated by a media bath component, ranging from custom fabrication51 to a commercially available Petri dish.23
Petri dishes have been used in conventional optical microscopy to facilitate the imaging of biological samples under controlled environments. These dishes accommodate various requirements, from cell culturing to serving as a media bath or a combination of both functions.23,25 Similar to glass coverslips, Petri dishes offer flexibility in imaging orientations, supporting sample observation from either above or below. When imaging from below, it is optimal to use glass-bottom dishes equipped with a No. 1.5 coverslip window to match an oil refractive index for improved imaging quality. For objectives that image from above, standard plastic Petri dishes are sufficient. However, it is crucial to ensure that the dish dimensions align with the physical constraints of the objectives and samples. The dish diameter should accommodate the objective(s) without making contact, while the dish depth must accommodate the sample and immersion media. Fortunately, there is a wide range of commercially available Petri dish dimensions that fit the requirements of specific LSM objectives and samples.
E. Cameras are chosen based on the desired sensitivity and timescale of the measurements
Two-dimensional scientific cameras are used to detect the collected emission from the sample. Electron multiplying charge-coupled device (EMCCD) or scientific complementary metal–oxide–semiconductor (sCMOS) cameras are the two options for nanoscale imaging due to their high (up to 95%) quantum efficiencies. Cameras are selected based on the noise, sensitivity, and frame rate capabilities.
EMCCD cameras find extensive application in various low-light imaging scenarios, spanning from biological samples92,181 to single-molecule measurements182,183 to astronomy,184,185 where high sensitivity is crucial for accurate imaging. Here, the definition of “sensitivity” is the ability to distinguish the signal from the noise of the measurement.186 The inherent high sensitivity of EMCCD cameras stems from an on-chip multiplication gain mechanism that enables the detection of single-photon level intensity. However, this gain mechanism introduces additional noise, potentially lowering the overall signal-to-noise ratio (SNR) and image quality.
The frame rates achievable for EMCCD cameras are limited due to a number of factors. First, the on-chip multiplication process restricts the speed at which the camera can read and clear the signal.192 Additionally, EMCCDs typically use a "global shutter" exposure: each pixel simultaneously collects charge and then transfers that charge to the readout nodes. The overall signal processing in EMCCDs is pixel by pixel, further slowing the image acquisition. However, since this exposure mode results in a single "snap-shot" in time for each frame, there is very little if any spatial distortion. Moreover, it is possible to temporally correlate between different regions of the image, which may be required for some super-resolution techniques such as higher-order cross correlation. While imaging the full chip, these cameras typically operate at 61 fps, but can reach speeds as high as 4000 fps with a smaller FOV as well as pixel binning.193 Accordingly, EMCCD cameras are well-suited for imaging static or slow-moving samples with low emission, where resolution below 130 nm is not essential.
sCMOS cameras, a newer technology compared to their EMCCD counterparts, are increasingly employed in biological imaging194,195 and dynamic studies.196,197 While they are less photosensitive than EMCCDs due to the absence of a gain mechanism, sCMOS cameras exhibit lower overall read noise. However, sCMOS cameras do suffer from pixel-dependent noise, which is difficult to correct.198 The primary advantages of sCMOS cameras lie in their higher inherent resolution and fast achievable frame rates. The enhanced resolution is attributed to their smaller average pixel size, approximately 5 μm. Referring back to Eq. (17), and maintaining the same assumptions, a resolution of 100 nm can be easily attained with a 50× magnification. Considering typical magnifications of 40×–100× are used in many microscopy setups, this is not an unreasonable magnification to achieve without additional optics following the objective.
The faster frame rates achievable with sCMOS cameras are primarily due to both the exposure mode and the clearing mode. The typical exposure mode used in sCMOS cameras is a "rolling shutter." The "rolling shutter" mode has an initial parallelized exposure phase in which each column is activated in sequence, originating in the middle of the sensor and progressing toward the edges. After the initial phase, all activated columns are exposed simultaneously. Finally, the camera reaches the readout phase. Unlike EMCCDs, which clear serially, sCMOS cameras clear in the same manner as the initial exposure phase—in parallel—enabling full-chip frame rates exceeding 100 fps.199 However, the temporal nature of the "rolling shutter" introduces some disadvantages. Since the columns are not exposed simultaneously throughout the entire exposure window, there is a greater chance for spatial distortion as well as less reliable temporal correlation between the center and edges in dynamic samples. The distortion is more apparent when objects are larger than a few pixels and move faster than the frame rate can follow. However, this drawback is less concerning when imaging smaller objects with a faster frame rate that allows for oversampling. These cameras often come equipped with various settings optimized for SNR, speed, or dynamic range. By imaging a smaller FOV, frame rates of up to 10 000 fps can be reached using a speed setting.200 However, this increased speed comes at the expense of sensitivity. The smaller regions that can reliably be correlated in time limit this exposure mode to techniques such as lower-order cross correlation. Consequently, sCMOS cameras are best suited for imaging samples with high emission efficiencies, relatively fast movement typical of dynamic studies, and resolution requirements below 130 nm.
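A rough way to judge whether rolling-shutter distortion matters for a given experiment is to compare how far an object moves during the row-by-row readout to the image pixel size. The sketch below illustrates that comparison; the line time, row count, pixel size, and object speed are illustrative placeholders, not specifications of any particular camera or of the setup in this Tutorial.

```python
# Rough rolling-shutter distortion estimate (illustrative numbers, not a specific camera).
line_time_s = 10e-6        # time to read one sensor row (placeholder)
n_rows = 2048              # rows spanned by the object's travel across the FOV (placeholder)
pixel_size_um = 0.18       # image pixel size at the sample (placeholder)
speed_um_per_s = 50.0      # object speed at the sample plane (placeholder)

readout_span_s = line_time_s * n_rows              # time skew between first and last row
displacement_um = speed_um_per_s * readout_span_s  # how far the object moves in that time

if displacement_um < pixel_size_um:
    print("Rolling-shutter skew is below one pixel; distortion is negligible.")
else:
    print(f"Object shifts {displacement_um:.2f} um (~{displacement_um/pixel_size_um:.1f} px) "
          "across the readout; expect visible rolling-shutter distortion.")
```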
III. ALIGNMENT AND CALIBRATION PROCEDURES FOR LSM
A. Component selection for the home-built LSM: Optics, sample, and acquisition
The upcoming sections on microscope alignment and calibration are intricately linked to the specific LSM system in use. To provide a practical example, we will first outline the setup of our home-built system (Fig. 7, and Fig. S2 and Table S1 in the supplementary material). As previously mentioned in Sec. I, our microscope aims to capture features beyond the cell, particularly monitoring protein diffusion within the ECM.201–204 Consequently, considerations for resolving nanoscale features and dynamics of 10 μm²/s diffusion in situ dictate our choices regarding beam profile, immersion media, objective orientation, sample mounting, and camera selection.
For excitation in our LSM, we opt for a simple Gaussian beam profile for its ease of implementation. Since we focus on nanoscale imaging rather than deep tissue penetration, complex beam profiles like Bessel or Airy are unnecessary, as we will not be working with larger mm-scale tissue samples. Our sheet formation involves a combination of cylindrical lenses (Fig. 7, C1–3) and a scanning galvanometer mirror system (Fig. 7, GM). A galvanometer mirror consists of a lightweight mirror attached to a coil within a magnetic field. By varying the current through the coil, the mirror can rapidly and precisely scan the laser beam across a surface in one dimension per mirror. This hybrid approach is meant to mitigate shadow formation issues inherent in Gaussian beams. In the context of scattering illumination, one could reduce the shadowing and striping artifacts by moving the light-sheet.205 With the millisecond time scale of protein diffusion, we can reduce the striping and shadowing artifacts by adjusting the oscillation rate of the galvanometer mirror to match the camera exposure time, which is 2 ms in this Tutorial. Using Eqs. (9) and (10) from Sec. II B 2, we calculate theoretical beam dimensions based on the lenses and beam path of our microscope, resulting in dimensions of 11.23 × 4.24 μm². Later, in Sec. III D 4, we will describe our method for determining the experimental beam dimensions and compare them to these theoretical values.
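A minimal sketch of the timing logic described above is shown below, assuming a triangle-wave galvanometer drive; the requirement is simply that an integer number of sweeps fits within each camera exposure so every frame integrates the same time-averaged sheet. The 2 ms exposure is taken from the text, while the choice of one full oscillation per exposure is one reasonable option rather than a prescription.

```python
# Match the galvanometer oscillation to the camera exposure so each frame
# integrates an integer number of sweeps of the virtual light-sheet.
exposure_s = 2e-3          # camera exposure time from the text (2 ms)
sweeps_per_exposure = 2    # one full triangle-wave period = 2 sweeps (up + down)

# Triangle-wave frequency such that 'sweeps_per_exposure' sweeps fit per exposure.
galvo_frequency_hz = sweeps_per_exposure / (2 * exposure_s)
print(f"Drive the galvo at {galvo_frequency_hz:.0f} Hz "
      f"({sweeps_per_exposure} sweep(s) per {exposure_s*1e3:.0f} ms exposure)")
```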
We opt for water as our immersion media due to its refractive index compatibility with most biological environments and their mimics. This choice allows flexibility in objective geometry, as discussed in Sec. II C 2. Considering our intended samples, such as proteins diffusing within an ECM-like matrix, we employ identical 40× water dipping objectives (refer to Table S1 in the supplementary material) oriented in an iSPIM configuration. However, the limited space between objectives in the iSPIM orientation requires careful sample mounting. For the objectives we chose, the WD is considered relatively long at 3.3 mm. This WD does not afford a very large objective gap at its base, requiring the footprint of the sample to be smaller. Additionally, as depicted in Fig. 5(b), the objective gap is triangular. Therefore, we designed a method (detailed in Methods in the supplementary material) to create a mm-high domed sample on a 10 mm diameter glass coverslip, fitting within the narrow objective gap while still reaching the focal plane. The sample is immobilized by covalently bonding the ECM-like matrix to the glass (Methods in the supplementary material) to minimize sample drift and blurring during imaging and then mounted on a custom arm within a media bath (Fig. 7 inset and CAD files S1 and S2 in the supplementary material).
To acquire 3D datasets, we move the sample through the light-sheet while collecting emission intensity. Sample movement is achieved by mounting the media bath to an xy-translation stage (Fig. 7, TS) and mounting the sample arm onto a piezostage for vertical translation. With an NA of 0.8 and an emission wavelength of 575 nm, we anticipate sub-micrometer lateral resolution (360 nm) and micrometer-scale axial resolution (1.8 μm) [Eqs. (13) and (14)]. Given our focus on fast dynamics, we have chosen a sCMOS camera over an EMCCD for its higher frame rate capabilities. Additionally, the higher resolution of the sCMOS suits imaging nanoscale structures with our correlation-based super-resolution technique, discussed later in Sec. IV A 2, since the sampling is increased.204 However, if using another super-resolution technique that is more sensitive to the SNR, such as localization, one should consider choosing an EMCCD instead. For further specific component descriptions and dimensions, refer to Table S1 and CAD files S1 and S2 in the supplementary material.
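For transparency, the two quoted figures are consistent with the common diffraction-limit estimates below, written in conventional form; this article's Eqs. (13) and (14) may use slightly different prefactors.

```latex
\delta_{\text{lateral}} \approx \frac{\lambda}{2\,\mathrm{NA}}
  = \frac{575\ \text{nm}}{2(0.8)} \approx 360\ \text{nm},
\qquad
\delta_{\text{axial}} \approx \frac{2\lambda}{\mathrm{NA}^{2}}
  = \frac{2(575\ \text{nm})}{(0.8)^{2}} \approx 1.8\ \mu\text{m}.
```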
B. Standards for alignment should be detectable by eye
Aligning a microscope does not require a specific standard until the detection path. It is recommended to start with a highly fluorescent bulk bead or dye solution, typically of the order of μM for dyes and lower for beads, allowing the beam to be visible to the naked eye (Fig. S3 in the supplementary material), aiding in troubleshooting. For our standard, we utilized a 2.0 μm bead solution diluted 1:10 from the stock, resulting in a pM concentration (Methods in the supplementary material). Moreover, using a bulk sample eliminates the possibility of imaging molecular diffusion. Once the sample emission reaches the camera, a less concentrated solution should be used. This lower concentration offers a clearer understanding of alignment by revealing any aberrations of individual emitters in the camera. To counter potential blurring from diffusion, suspending the particles within a gel with a refractive index similar to that of the solvent can be helpful.
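To estimate whether a diluted bead stock lands in the pM range quoted above, the manufacturer's solids content can be converted to a particle (and hence molar) concentration; in the sketch below, the 1% solids fraction and polystyrene density are typical vendor values used purely as assumptions, not values reported in this article.

```python
import numpy as np

# Estimate the molar concentration of a diluted microsphere stock.
# Assumed stock properties (typical vendor values, NOT taken from this article):
solids_fraction = 0.01        # 1% w/v solids, i.e., 0.01 g of beads per mL
bead_density_g_cm3 = 1.05     # polystyrene
bead_diameter_um = 2.0        # bead size used for alignment in the text
dilution = 1 / 10             # 1:10 dilution of the stock, as in the text

r_cm = (bead_diameter_um / 2) * 1e-4                  # radius in cm
mass_per_bead_g = bead_density_g_cm3 * (4 / 3) * np.pi * r_cm**3
beads_per_mL = solids_fraction / mass_per_bead_g      # (g/mL) / (g/bead)
beads_per_L = beads_per_mL * 1e3

molar = beads_per_L / 6.022e23 * dilution             # mol of beads per liter after dilution
print(f"~{molar*1e12:.2f} pM of beads after a 1:10 dilution")
```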
C. Alignment is an iterative process
The alignment process may appear daunting initially, but following a systematic approach can simplify it considerably. In this section, we will walk through our LSM alignment process and provide useful tips for other setups.
To begin alignment, map out the optical mount locations on the breadboard. The spacing between lenses, determined by their functions and focal lengths, should be calculated beforehand (Sec. II B 2). Since this spacing may not align with the inherent spacing of the breadboard, incorporating rails or movable post mounts can be advantageous. Additionally, decide whether optics will be mounted horizontally or vertically. For our LSM, we opted for vertical mounting to accommodate the iSPIM orientation of the objectives, which was achieved using a custom “U”-shaped breadboard (Base Lab Tools, Fig. S2 in the supplementary material).
After mapping the path, attach the optical mounts onto the breadboard. Ensure consistency in the center of each mount component to aid lens alignment later. Although components of the same shape and size are helpful, this is not always feasible, so adjustments may be needed for those with different dimensions. Using post mounts with adjustable heights (e.g., ThorLabs, PH2) and a non-reflective ruler (e.g., ThorLabs, BHM3) can be beneficial in ensuring each component is centered at the same height. However, for vertical mounting, there is potential for misalignment due to gravity, as the post is only secured by a single side screw within the mount. The impact of gravity can be mitigated by using posts that have been machined to the desired height and secured directly to the breadboard. During mounting, ensure all mirror mounts are set to 45°, while lens mounts are either at 90° or 180° relative to each other for easy beam path tracing. While some LSM geometries may require deviations, maintaining this rule for most components simplifies alignment with irises.
Once optical mounts are in place, ensure the laser passes through the center of each mount to facilitate overall microscope alignment and prevent edge aberrations. Start with the laser at its lowest power setting, using fluorescent targets (e.g., ThorLabs, VRC2RMS) along the optical path for beam location identification. If this lowest setting is still in the mW range or higher, neutral density (ND) filters should be used to decrease the overall laser intensity. Protective eye-wear (e.g., ThorLabs, LG12) should also be worn during this process.206 In this first alignment iteration, a multi-iris system with optical post collars (e.g., ThorLabs, R2) to fix the iris height can be used to ensure the laser beam is centered on each mounting component. Mirrors should be used to direct the beam through each iris until it reaches the objective.
In subsequent alignment iterations, we introduce optics into the path gradually to avoid deviations caused by a slight tilt in any one optic. If all the lenses are introduced at once, isolating the exact optic(s) causing a deviation is difficult. In an SPIM setup, a two-iris system can aid alignment until the sheet formation stage at the cylindrical lenses. The two-iris system can generally be used throughout the illumination path of a DSLM since the beam remains circular. For aligning sections with asymmetric beam shapes, a rail system may be necessary. If the beam is aligned prior to entering the first lens, a rail system should allow the beam to maintain its trajectory through the other lenses, which is advantageous when the lens placements do not afford enough room for an iris to be mounted. After the cylindrical lenses, the alignment of the beam can be determined using the fluorescent targets. Using the previous mirror within the optical path, it is possible to direct the beam to the center of the next mirror. This method was used from the cylindrical lenses (Fig. 7, C1–3) to the f-θ lens (Fig. 7, SL) and then again from the f-θ lens to the illumination tube lens (Fig. 7, T1). The illumination tube lens and illumination objective (Fig. 7, IO) should be in line with one another. Alignment typically requires multiple attempts and refinements, but once everything seems correct, confirm the beam exits the objective at a 45° angle relative to the sample stage using the highly fluorescent alignment standard (Sec. III B).
The detection path alignment, theoretically simpler than illumination, involves centering the emission collected by the detection objective (Fig. 7, DO) on the detection tube lens (Fig. 7, T2). Use a mirror to direct emission light through a relay lens (Fig. 7, S3), ensuring no warping of the image with the aid of an iris. After the relay lens is a mirror used to direct the light into the camera. Once emission reaches the camera, fine-tune the sheet orientation using a lower concentration of the alignment standard solution, where individual particles can be seen, to better diagnose any aberrations caused by slight misalignments.
D. Calibrations for nanoscale imaging require single particle and molecule samples
Once the microscope is well aligned, calibration is essential to begin data collection. Understanding the FOV based on the beam shape and size, along with the precise pixel size of the setup, is crucial for orienting within the sample. Additionally, it is important to acknowledge that there may be image skewing inherent to 3D imaging. Before reconstructing the structure of the sample, identifying and accounting for this offset is necessary.
1. Standards for calibration should be highly emissive beads of known size
Our standard calibration sample differs from the alignment sample to enable visualization of individual fixed particles for accurate determination of the microscope pixel size and inherent axial skewing in LSM. To accomplish this, we suspended beads in a 2 wt. % agarose hydrogel. Agarose is well-suited for our water dipping objectives due to its refractive index matching, as discussed in Sec. II C 2.99,167 The bead sizes range from 2 to 0.25 μm, facilitating straightforward determination of pixel size. We selected a bead concentration that ensures sufficient density for statistical analysis of pixel size yet avoids overcrowding that would hinder the distinction between individual beads.
2. Pixel size can be measured using beads
To determine the pixel size of our microscope setup, we cannot use conventional methods like a USAF target due to our "non-conventional" geometry.207,208 The flat glass USAF targets are beyond the 3.3 mm WD of the two objectives in our iSPIM setup, and we are unaware of any 3D USAF or NIST imaging target compatible with "non-conventional" geometry LSM. Therefore, we have determined the pixel size by measuring the PSF of highly fluorescent beads of varying sizes, specifically 2, 1, and 0.25 μm beads suspended in a 2 wt. % agarose hydrogel (actual sizes reported by the manufacturer are listed in Table III). For this method, we first assume the image pixel size is ideal based on Eq. (17), where the camera pixel size is 6.5 μm and the magnification is 40×, resulting in an image pixel size of 0.163 μm. Using this image pixel size, we estimate the approximate PSF for each selected bead size. Once we estimate the number of pixels that each bead size should encompass, we measure the actual PSF using our localization software and extract the average PSF of each bead sample, as detailed in Table III.209 The expected PSF values are statistically within the range of the experimental average PSF values, confirming our alignment in terms of magnification. To determine the actual size of our image pixel for our microscope, we divide the bead size by the average PSF (Table III). The average image pixel size across all bead sizes is 0.18 ± 0.04 μm, which is larger than the Nyquist criterion due to using a 40× magnification rather than a 60×.
TABLE III. Bead sizes, expected and experimental PSF widths, and the resulting image pixel sizes.

| Bead sizeᵃ (μm) | Expected PSFᵇ (pixels) | Experimental PSFᶜ (pixels) | Pixel sizeᵈ (μm) | Overall average pixel sizeᵉ (μm) |
|---|---|---|---|---|
| 2.10 ± 0.09 | 12.9 ± 0.6 | 11 ± 2 (3461) | 0.18 ± 0.02 | 0.18 ± 0.04 (3) |
| 0.950 ± 0.024 | 5.83 ± 0.15 | 5.4 ± 0.8 (2140) | 0.18 ± 0.03 | |
| 0.25 ± 0.05 | 1.5 ± 0.3 | 1.4 ± 0.1 (2439) | 0.18 ± 0.07 | |
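The pixel-size calculation in Table III amounts to dividing the known bead diameter by the measured PSF width in pixels. A minimal sketch using the table's reported means and standard deviations is shown below; the quadrature error propagation is an assumption about how the uncertainties were combined, not a statement of the original analysis.

```python
import numpy as np

# Pixel size from Table III: bead diameter (um) divided by measured PSF width (pixels).
beads = [
    # (diameter_um, diameter_err, psf_px, psf_err)  -- values from Table III
    (2.10, 0.09, 11.0, 2.0),
    (0.950, 0.024, 5.4, 0.8),
    (0.25, 0.05, 1.4, 0.1),
]

pixel_sizes = []
for d, d_err, psf, psf_err in beads:
    px = d / psf
    # quadrature propagation of relative errors (an assumption about the original analysis)
    px_err = px * np.sqrt((d_err / d) ** 2 + (psf_err / psf) ** 2)
    pixel_sizes.append(px)
    print(f"{d} um bead: pixel size = {px:.2f} +/- {px_err:.2f} um")

print(f"mean pixel size = {np.mean(pixel_sizes):.2f} um")
```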
3. Axial offsets must be quantified for correct 3D reconstruction
In our iSPIM setup, the excitation and detection paths intersect at right angles, offset from the sample itself, resulting in a parallelogram-shaped sample region [Fig. 8(a)]. Consequently, when imaging in 3D, we perform three separate calibrations to correct for the skewed data in the z, y, and x axes [Figs. 8(b)–8(d)].
As the sample moves through the light-sheet, the same region is detected multiple times in the FOV, irrespective of its orientation relative to the detection objective. This redundancy is accounted for during 3D image reconstruction. Initially, we focus on calibrating the z-offset, which is the most common type of skewing encountered. This is achieved by moving a 2 μm bead calibration standard along the z axis in 5 μm steps using a piezostage [Fig. 8(b)]. However, experimental analysis (Fig. S4a in the supplementary material) reveals a discrepancy between the expected and observed (2.2 ± 0.1 μm) shifts, indicating that the angle of the light-sheet relative to the sample differs from the assumed 45° (calculations in the supplementary material).
Although the light-sheet angle minimally impacts z- and x-translations, it significantly affects y-translation and the orientation of the captured image on the camera. Through trigonometric calculations, we determine that the actual angle of the light-sheet relative to the sample is 66°, not 45° as initially assumed. The discrepancy between the ideal angle and the actual angle of the light-sheet is due to the sheet entering the back pupil of the objective slightly higher than the optical axis. However, the off-axis effect can be partially mitigated by incorporating scanning into the image acquisition. Consequently, when not scanning, the image appears tilted on the camera (Fig. S5 in the supplementary material), with shallower emitters appearing closer to the detection objective than deeper ones.
Next, we address the skew along the y axis, which requires a different correction approach due to geometric considerations [Fig. 8(c)]. Using the previously determined angles, we calculate the expected shift along the y axis (calculations in the supplementary material). Trigonometric calculations yield an expected average shift of approximately 4 μm, consistent with the experimental observation of 3.8 ± 0.2 μm (Fig. S4b in the supplementary material).
Finally, we correct for image skew along the x axis, which is straightforward since this axis is parallel to the image plane [Fig. 8(d)]. Here, we ensure that the physical shift matches the pixel shift. Experimental results confirm that moving the sample 5 μm along the x axis corresponds to an average pixel shift of 5.3 ± 0.2 μm (Fig. S4c in the supplementary material), as expected. This quantification of the entire imaging area can then be applied to reconstruct the image. In addition, physical shifts in the excitation volume relative to the sample (i.e., drift) can occur during measurements. Including fiducial markers such as fluorescent beads,210 luminescent nanoparticles,211 or nanofabricated patterns212 in the experimental sample allows drift to be quantified. Image registration of the marker locations in post-processing can then be applied to accurately reconstruct the three-dimensional image.213 Alternatively, active stabilization to reduce drift during measurements can be achieved with objective nanopositioner or piezostage hardware and programmed feedback loops.214,215
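As a minimal illustration of fiducial-based drift correction (a generic centroid-tracking sketch, not the registration methods of Refs. 210–215; the input variable 'stack' and the background handling are assumptions), frame-to-frame bead displacements can be measured and then subtracted from each frame:

```matlab
% Sketch: estimate lateral drift from an isolated fiducial bead and shift frames to compensate.
% 'stack' is assumed to be a rows-by-columns-by-time image sequence containing one bright bead.
nFrames = size(stack, 3);
centroids = zeros(nFrames, 2);                          % [x, y] centroid per frame, pixels
[cols, rows] = meshgrid(1:size(stack, 2), 1:size(stack, 1));
for t = 1:nFrames
    frame = double(stack(:, :, t));
    frame = max(frame - median(frame(:)), 0);           % crude background removal
    w = frame / sum(frame(:));                          % intensity weights
    centroids(t, :) = [sum(w(:) .* cols(:)), sum(w(:) .* rows(:))];
end
drift = centroids - centroids(1, :);                    % drift relative to the first frame
corrected = zeros(size(stack), 'like', stack);
for t = 1:nFrames
    corrected(:, :, t) = imtranslate(stack(:, :, t), -drift(t, :));   % undo the measured drift
end
```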
4. Sheet thickness can be measured with a beam profiling camera
Originally, our aim was to determine the sheet thickness by tracking the duration a bead remains in focus as we move the sample along the z axis through the sheet. However, this approach proved impractical due to the geometric factors discussed in Sec. III D 3: the initial position of a feature within the sheet, rather than the sheet thickness itself, determines its observability. Another indirect option is programming a piezostage to account for the angle of the sheet as the sample is moved along the z axis, which requires prior knowledge of the axial offset of the system (Sec. III D 3); this method is also complex and relatively expensive. Direct options for determining the sheet thickness at the sample plane include rotating the cylindrical lenses 90° or mounting a mirror at 45° relative to the objectives. Rotating the optics requires a highly fluorescent sample, such as those discussed in Sec. III B, and results in imaging the side-profile of the sheet. However, the PSF of the microscope will broaden the measurement and require deconvolution. This method also requires that the optical path be realigned each time the sheet is rotated, introducing the possibility of misalignment. Mounting a mirror causes the light-sheet to appear as a line on the camera. Here, the thickness at a given position can be measured as the FWHM of the line, so long as any emission filters within the detection path are removed to allow the laser to reach the sensor. To limit the intensity and avoid saturating the camera, the laser should be set to the lowest power and ND filters should be added to the illumination path during the measurement. Additionally, the mirror must be small enough to fit within the objective gap; for our LSM, the mirror diameter would need to be smaller than the widest portion of the objective gap. Consequently, we opted to physically measure the beam using a beam profiling camera capable of resolving dimensions down to 20 μm (e.g., Thorlabs, BC207VIS).
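Whichever direct method is used, the thickness at a given position reduces to extracting the FWHM of a 1D intensity profile. A coarse sketch (assuming a background-subtracted image 'img' of the sheet cross section oriented vertically, and using a simple half-maximum crossing rather than a Gaussian fit) is:

```matlab
% Sketch: sheet thickness as the FWHM of the line profile recorded on the camera.
pixelSize = 0.18;                              % image pixel size, micrometers (Table III)
profile = mean(double(img), 1);                % average along the line to get a 1D profile
profile = profile - min(profile);              % remove residual offset
halfMax = max(profile) / 2;
above = find(profile >= halfMax);              % indices above half maximum
fwhmPixels = above(end) - above(1) + 1;        % coarse FWHM in pixels
fprintf('Sheet thickness (FWHM): %.2f um\n', fwhmPixels * pixelSize);
```

A Gaussian fit to the same profile would give a smoother estimate, at the cost of a fitting-toolbox dependency.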
The final measured sheet dimensions (7.1 ± 0.4 × 2.15 ± 0.03 μm²) closely approach, but fall slightly below, the theoretical values calculated in Sec. III A (11.23 × 4.24 μm²). However, the sheet dimensions we report here are not completely accurate because the beam under-fills the back pupil of the objective lens. To take full advantage of the higher NA, the back aperture must be slightly over-filled, since for a Gaussian beam only the central 1/e portion of the beam should enter the objective. If the aperture is under-filled, the effective NA of the objective is smaller. Following Eq. (16), a decrease in the NA will also decrease the resolution, with the effect primarily observed in the axial resolution of the system.
IV. DATA COLLECTION AND IMAGE RECONSTRUCTION METHODS FOR LSM
When acquiring data at different depths, the end goal of the imaging determines the axial size of the steps between each dataset, known as “z-slices,” and the acquisition time. These steps should be sufficiently small for continuous imaging, tailored to the size of the feature of interest. Typically, smaller step sizes (e.g., 10–100 nm) are ideal. However, reconstructing samples with larger features (e.g., 1–1000 μm) using fine steps could result in excessively large (e.g., 1 TB) data files. Thus, there must be a balance between the number of steps required for accurate reconstruction and managing file size, which varies depending on the imaging method and information sought.
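As a rough, hypothetical illustration of this trade-off, the raw data volume can be estimated before acquisition from the frame size, bit depth, number of z-slices, and frames per slice; all values below are placeholders, not recommended settings:

```matlab
% Sketch: estimate raw data size for a planned z-stack acquisition.
frameWidth     = 2048;     % pixels
frameHeight    = 2048;     % pixels
bytesPerPixel  = 2;        % 16-bit camera data
zRange         = 100;      % total axial range, micrometers
zStep          = 0.5;      % step between z-slices, micrometers
framesPerSlice = 1000;     % e.g., a movie per slice for dynamic (fcsSOFI) data

nSlices = ceil(zRange / zStep);
totalBytes = frameWidth * frameHeight * bytesPerPixel * framesPerSlice * nSlices;
fprintf('Estimated raw data: %.1f GB\n', totalBytes / 1e9);   % ~1700 GB for these settings
```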
After data collection, the data must be reconstructed and analyzed. For our LSM data, we have developed a 3D image reconstruction code, available on GitHub and accompanied by a user guide.216 Here, we outline the steps for reconstructing spatial information from a 2 μm bead calibration sample (described in Sec. III D 1) as well as spatiotemporal information collected from 155 kDa dextran diffusing within a 2 wt. % agarose gel.
A. Data collected on the LSM must be converted and corrected
The following discussion begins with the data workup procedure for static samples. In general, static data analysis is simpler than dynamic data analysis and therefore provides a good introduction to the overall analysis process. A flow chart of our data acquisition and analysis steps, as well as representative images of raw data and data prior to deconvolution, are included in the supplementary material (Figs. S6 and S7).
1. Static data requires de-skewing and de-blurring corrections of the raw images
In our setup, Micro-Manager is used for data collection, saving raw data files in .tiff format that are subsequently imported into the reconstruction code.217,218 Users have the flexibility to import individual z-slices or a complete sequence comprising all collected z-slices. When imported individually, the slices are consolidated into a single sequence and then converted into a unified .mat file format. Before initiating the code, users input the desired frames and define the region of interest (ROI).
Upon converting the data to .mat format, we address skew, as discussed in Sec. III D 3, where each subsequent z-slice is offset based on the piezostage position and our calibration data. Our code corrects for skew by shifting each z-slice relative to the location of the first slice in the image sequence. This correction involves translating the image matrix along both the x and y axes according to the pixel shift measured for a 5 μm z-step, scaled to account for the actual z-step size between images.
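A minimal sketch of this de-skew step is given below; it is not the released GitHub code, and the calibration shift values, their assignment to the x and y axes, the pixel size, and the z-step are placeholder assumptions:

```matlab
% Sketch: shift each z-slice relative to the first to undo the geometric skew.
% 'slices' is assumed to be a rows-by-columns-by-z array of raw z-slices.
pixelSize  = 0.18;                  % image pixel size, micrometers
calibStep  = 5;                     % z-step used during skew calibration, micrometers
shiftCalib = [2.2, 3.8];            % assumed [x, y] image shift per calibStep, micrometers
zStep      = 0.5;                   % actual z-step between collected slices, micrometers

deskewed = zeros(size(slices), 'like', slices);
for k = 1:size(slices, 3)
    % Scale the calibrated shift by how far this slice sits from the first slice.
    shiftPixels = (shiftCalib / pixelSize) * (k - 1) * zStep / calibStep;
    deskewed(:, :, k) = imtranslate(slices(:, :, k), shiftPixels);
end
```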
3D deconvolution improves resolution by de-blurring the contribution of the LSM PSF using either an instant or an iterative approach. Instant deconvolution, also known as inverse filtering, applies a deconvolution algorithm (e.g., Wiener deconvolution) directly to the image to back-calculate the de-blurred image. This method yields relatively fast results and is more suitable for real-time applications, but it may be less accurate with poorly characterized PSFs.219,220 In contrast, iterative deconvolution treats the observed image as a convolution of the PSF and the true image. Here, the algorithm (e.g., Richardson–Lucy) estimates the de-blurred image, blurs it using the PSF, and then compares the result to the experimentally obtained image. The process is repeated, with the estimate adjusted each iteration, until convergence is reached.221,222 While iterative methods offer higher accuracy for features that are not point-like, they can be computationally intensive, particularly for large or complex images.223,224 To alleviate this burden, iterative approaches are generally performed in the frequency domain, where the computation is less demanding.225
We adapted an iterative code originally written for depth-resolved holographic reconstruction, leveraging its ability to restore continuous, non-bead-like features.226 The PSF is defined based on the diffraction-limited 2D PSF of the microscope, determined in Sec. III D 2, and extended to a 3D PSF. Using the MATLAB function “fspecial3,” we generate a 3D PSF based on the 2D PSF, matched to the object data dimensions and z-range. The relationship between the 3D PSF and the unknown 3D object guides the iterative loop, with each iteration converging closer to the true 3D object dimensions.
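The sketch below illustrates the same idea using only built-in MATLAB functions (fspecial3 for the 3D PSF and the Richardson–Lucy routine deconvlucy); it is not the adapted holographic-reconstruction code cited above, and the PSF widths, pixel size, and z-step are placeholder assumptions:

```matlab
% Sketch: approximate 3D Gaussian PSF and Richardson-Lucy deconvolution of a z-stack.
% 'volume' is assumed to be the de-skewed rows-by-columns-by-z dataset.
pixelSize   = 0.18;                        % lateral pixel size, micrometers
zStep       = 0.5;                         % axial slice spacing, micrometers
fwhmLateral = 0.35;                        % assumed lateral PSF FWHM, micrometers
fwhmAxial   = 1.4;                         % assumed axial PSF FWHM, micrometers

sigma    = [fwhmLateral, fwhmLateral, fwhmAxial] / (2*sqrt(2*log(2)));   % FWHM -> sigma
sigmaVox = sigma ./ [pixelSize, pixelSize, zStep];                       % in voxel units
hsize    = 2*ceil(3*sigmaVox) + 1;                                       % odd kernel size
psf3D    = fspecial3('gaussian', hsize, sigmaVox);

deconvolved = deconvlucy(volume, psf3D, 10);   % 10 Richardson-Lucy iterations
```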
Given that the sheet thickness (2.15 μm) exceeds the DOF (1.44 μm), some background intensity remains. To enhance the image quality, we background correct via thresholding. Our thresholding process, based on sparse emitters, defines an ROI and calculates local background and standard deviation intensities.227 The code iteratively analyzes the entire image and generates a threshold map by adding three standard deviations to the average background intensity. The corrected frame is generated by subtracting the threshold map from the original frame. This process is performed for each frame in the sequence, ensuring intensity correction for optimal 3D structure reconstruction.
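A simplified stand-in for this thresholding step is sketched below; it uses a sliding-window local mean and standard deviation rather than the ROI definitions of our code, and the window size and input variable are assumptions:

```matlab
% Sketch: background correction with a local mean + 3*sigma threshold map.
% 'slice' is assumed to be one de-skewed, deconvolved z-slice.
win = 33;                                     % local window size in pixels (assumed, odd)
frame = double(slice);
localMean = imboxfilt(frame, win);            % local average background
localStd  = stdfilt(frame, true(win));        % local standard deviation
thresholdMap = localMean + 3*localStd;        % background plus three standard deviations
corrected = max(frame - thresholdMap, 0);     % subtract the map and clip negative values
```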
2. Dynamic data can be analyzed by correlation to resolve diffusion and structure in 3D
The dynamic dataset undergoes a similar analysis procedure to the static dataset, with the primary distinction being that each z-step now consists of a movie rather than a single image. Prior to de-skewing and deconvolution, each movie is reduced to a single image through fluorescence correlation spectroscopy super-resolution optical fluctuation imaging (fcsSOFI) analysis.201–204 fcsSOFI allows for simultaneous quantification of molecular diffusion speeds in heterogeneous media while recovering the matrix structure. Current temporal and structural resolutions of fcsSOFI are approximately 1 m s and 100 nm, respectively.203
B. Intensity corrected z-slices can be stacked to recreate the 3D structure
The final corrected images are stacked to reconstruct the 3D volumetric image. This is accomplished in our code using the function “vol3d,” which creates a volume render from the inputted 3D z-slice sequence.228 The resulting 3D image (Fig. 10) is scaled in the xy-plane by the number of pixels the intensities cover and the image pixel size, and along the z axis by the number of slices and the physical step between them.
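As an alternative sketch of this final step (using built-in MATLAB graphics rather than the vol3d function, with assumed pixel size, z-step, and isosurface level), the corrected slices can be stacked and rendered with physically scaled axes:

```matlab
% Sketch: stack corrected z-slices into a volume and render it with physical axis scaling.
% 'correctedSlices' is assumed to be a cell array of 2D corrected z-slices.
pixelSize = 0.18;                              % xy pixel size, micrometers
zStep     = 0.5;                               % physical step between slices, micrometers

vol = double(cat(3, correctedSlices{:}));      % stack slices into a rows-by-columns-by-z volume
[ny, nx, nz] = size(vol);
[X, Y, Z] = meshgrid((0:nx-1)*pixelSize, (0:ny-1)*pixelSize, (0:nz-1)*zStep);

isoLevel = mean(vol(:)) + 2*std(vol(:));       % illustrative isosurface threshold
p = patch(isosurface(X, Y, Z, vol, isoLevel));
p.FaceColor = 'cyan'; p.EdgeColor = 'none';
daspect([1 1 1]); view(3); camlight; lighting gouraud;
xlabel('x (\mum)'); ylabel('y (\mum)'); zlabel('z (\mum)');
```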
Starting with the reconstruction of the static 2 μm bead data [Fig. 10(a)], we see a heterogeneous distribution of the beads within the hydrogel environment. Performing line section analysis on the beads in view (Fig. S8 in the supplementary material) reveals an average diameter of 1.6 ± 0.5 μm (x axis) and 1.7 ± 0.7 μm (y axis), slightly less than the manufacturer-reported size (2.10 ± 0.09 μm, Table III) but still within the measured variation. The lower average diameter is most likely caused by the deconvolution step, which is known to over-correct the image. Additionally, there is variability in the bead locations within the light-sheet, with beads ranging from partially sectioned to fully illuminated by the sheet. This effect is visible in Fig. 10(a) as variability in brightness: beads that are only partially within the light-sheet appear less intense than those that are fully within it.
Next, we examine the validity of using our home-built LSM to measure spatiotemporal information from 155 kDa dextran diffusing within a 2 wt. % agarose gel. Here, we rely on the complex porous structure of agarose for our nanoscale imaging. These hydrogels are known to have a wide range of structural features, including isolated cavities and interconnected channels with an average size of approximately 150 nm.229,230 When allowed to diffuse freely in water, 155 kDa dextran should have a diffusion coefficient of around 24 μm² s⁻¹, given the molecule has a hydrodynamic radius of 9 nm.231 Introducing a porous structure should induce a confinement effect, reducing the overall diffusion coefficient of the molecule. Previously, we have shown that 2 wt. % agarose slows the diffusion of 155 kDa dextran to 2.8 ± 0.4 μm² s⁻¹. However, we noted this speed is slower than expected and suspect interactions at the glass interface to be the cause.203 Here, the average diffusion coefficient of our 3D system is 6.4 ± 0.9 μm² s⁻¹ [Fig. 10(b)], faster than our previously reported TIRF results due to the lack of interactions at an interface. We have also previously reported the pore structure of the agarose gel, albeit limited to the glass interface, using our fcsSOFI technique.203 Here, we are able to recover not only isolated cavities but also interconnected channels [Fig. 10(b)]. Performing line section analysis, we find the average pore cross section is 340 ± 50 nm (Fig. S9 in the supplementary material).
V. CONCLUSION
This paper provides an accessible practical guide to LSM for scientists new to this technique. We emphasize that LSM is suitable not only for imaging static microscale features but also nanoscale features and diffusion dynamics. We cover fundamental aspects of microscope design and beam profiling and guide readers through the entire process, concluding with image reconstruction. In Sec. II, we discuss essential practical considerations for designing a home-built LSM system. This includes decisions on beam profiling, static or dynamic sheet formation, objectives, sample mounting, and camera selection. We highlight the importance of objective orientation, as this choice impacts the overall design and adaptability of LSM for various samples and imaging conditions. Following component selection, Sec. III outlines our home-built setup to illustrate the alignment and calibration process. While alignment procedures are similar to other optical microscopes, LSM calibration differs due to its 3D imaging capability and axial skewing during sample movement within the light-sheet. Correcting this offset is crucial for accurate 3D image reconstruction. Given the custom nature of our system, we detail the conditions and steps required for constructing a continuous 3D image using our home-built software in Sec. IV. Each LSM system is unique, presenting challenges in design and construction tailored to specific research needs. We hope this guide serves as a valuable resource, offering clarity and guidance in the complex field of light-sheet microscopy.
SUPPLEMENTARY MATERIAL
Methods including sample preparations and imaging conditions, skew correction calculations, schematic of a typical infinity-corrected objective setup (S1), photograph of our LSM setup (S2), photograph of a highly fluorescent bulk bead standard solution (S3), time-lapse line sections of 2 μm bead (S4), image of 2 μm bead to demonstrate the tilt of the light-sheet (S5), flow chart of data acquisition and analysis (S6), example 3D reconstructed images (S7), line sections of a 2 μm bead after 3D reconstruction (S8), cross section of an agarose pore (S9), SNR vs SBR comparison (S10), graphical representation of the light-sheet geometry (S11), all components used for our LSM setup (Table S1 in the supplementary material), and legends for CAD Files S1–S7 (PDF). CAD S1: Bath Drawing (PDF); CAD S2: Microscope Sample Holder Assembly (PDF); CAD S3: Microscope Stage Mount Knob (STL); CAD S4: Microscope Stage Mount Flat Pattern of Slide Holder (DXF); CAD S5: Microscope Stage Mount Riser (STL); CAD S6: Microscope Stage Mount Base (DXF); CAD S7: Laboratory Jack to Stepper Stage Adapter (PDF).
ACKNOWLEDGMENTS
We acknowledge NIH NIGMS (Grant No. R35GM142466) and the Paul G. Allen Family Foundation’s Allen Distinguished Investigators Award for financial support of this work. The authors also acknowledge the Kisley research group, the Jenkins research group, and Matthew Kramer for helpful discussions.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Stephanie N. Kramer: Conceptualization (equal); Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (lead); Software (lead); Writing – original draft (lead); Writing – review & editing (equal). Jeanpun Antarasen: Formal analysis (supporting); Software (supporting); Writing – original draft (supporting); Writing – review & editing (equal). Cole R. Reinholt: Conceptualization (supporting); Visualization (lead); Writing – review & editing (equal). Lydia Kisley: Conceptualization (equal); Funding acquisition (lead); Writing – review & editing (equal).
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.