You can’t focus a beam of light to a spot smaller than half its wavelength. That so-called diffraction limit, derived in the 19th century by Ernst Abbe, was long presumed to restrict the resolution of an optical microscope to no better than about 200 nm. For biologists, it seemed to imply that although optical microscopy could reveal the contours of a cell, proteins and other subcellular structures would always remain a blur.
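As a rough sense of scale (values assumed here for illustration): for green light of wavelength λ ≈ 550 nm focused by a high-numerical-aperture objective (NA ≈ 1.4, a typical value), the fuller form of Abbe’s formula gives

$$ d \;=\; \frac{\lambda}{2\,\mathrm{NA}} \;\approx\; \frac{550\ \text{nm}}{2 \times 1.4} \;\approx\; 200\ \text{nm}, $$

many times larger than individual proteins and most subcellular structures.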

Abbe’s diffraction limit still holds. But in a string of experimental and theoretical work spanning more than a decade, Stefan Hell, W. E. Moerner, and Eric Betzig demonstrated that the limit could be skirted to image specimens with near-molecular resolution. For developing fluorescence techniques that brought the nanoscale world into focus, the three researchers will be awarded equal shares of the 2014 Nobel Prize in Chemistry.

[Photo: Eric Betzig, Stefan Hell, and W. E. Moerner]

For the same reason that a beam of light can’t be focused to an infinitesimal point, light emitted or scattered from a point appears in optical images as a diffuse blob of light. Like a focused beam, the blob—known formally as a point-spread function—has a diameter that’s governed by the diffraction limit. Therein lies the trouble with imaging tiny objects: Any two features separated by less than a half-wavelength of the light used to image them will appear as a single connected blob.

In 1990, when Hell was finishing up his doctoral thesis at Heidelberg University in Germany, breaking that diffraction barrier seemed to him the only interesting problem left in optics. One known path around the barrier was through near-field microscopy, in which a sample is imaged with the evanescent waves from an optical fiber tip. (See the article by Lukas Novotny, Physics Today, July 2011, page 47.) Because Abbe’s limit doesn’t apply in the near field, the size of the light spot is limited only by the size of the tip.

Near-field microscopy, however, works only for imaging surfaces. Hell set his sights on the trickier and potentially more transformative feat of beating the diffraction limit in the far field. He soon became convinced that if it could be done, it would be with fluorescence microscopy.

In fluorescence microscopy, a specimen is decorated with fluorophores that bind preferentially to specific proteins or other structures of interest. Illuminated with a resonant laser, the fluorophores absorb photons and reemit them at longer wavelengths. By collecting that redshifted light, one can generate images with vivid chemical contrast.

On its face, the technique runs into the same problems as conventional microscopy. To faithfully image a structure using fluorescence, the spacing between the fluorophores that decorate the structure must be smaller than the smallest features one wants to see. Because the laser that excites the fluorophores can’t be focused to a spot smaller than the diffraction limit, fluorophores distributed densely enough to image nanostructures will necessarily have overlapping point-spread functions. As Hell saw it, however, the advantage of fluorescence was that the light emission could be controlled by manipulating fluorophores’ molecular states. The trick would be to find a way to ensure that not all the fluorophores in a diffraction-limited spot glow at once.

Hell came across a solution in a quantum optics text, in a section on stimulated emission. Normally, a fluorophore lingers in its excited state for a nanosecond or two before emitting a photon and decaying to the ground state. But, as the text explained, an incoming photon can stimulate the fluorophore to decay prematurely. Recalls Hell, “I read that and thought, ‘Why have people only been using light to pop molecules up in energy? Why not pop them down?’”

He devised a scheme to apply two successive pulses—an excitation pulse that turns fluorophores on and a longer-wavelength, stimulated emission pulse that quenches them. By configuring the two pulses’ focal spots to partially overlap—as shown in figure 1, for instance—one can restrict fluorescence to a spot much smaller than the diffraction limit. In an image produced by scanning the two lasers across the sample, that spot sets the pixel size.
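A minimal numerical sketch of that idea uses the square-root scaling commonly quoted for STED resolution; the wavelength, numerical aperture, and depletion-power values below are assumptions for illustration, not figures from Hell’s papers.

```python
import numpy as np

def sted_spot_fwhm(wavelength_nm, numerical_aperture, depletion_ratio):
    """Effective STED spot size under the commonly quoted square-root law.

    depletion_ratio is I/I_sat: the peak intensity of the depletion beam
    divided by the fluorophore's saturation intensity. With the depletion
    beam off (ratio = 0), the expression reduces to Abbe's diffraction limit.
    """
    abbe_limit = wavelength_nm / (2 * numerical_aperture)
    return abbe_limit / np.sqrt(1 + depletion_ratio)

# Illustrative numbers only: 640-nm excitation focused by a 1.4-NA objective,
# with the depletion beam's peak intensity stepped up relative to saturation.
for ratio in (0, 10, 100, 1000):
    print(f"I/I_sat = {ratio:4d}  ->  effective spot ~ "
          f"{sted_spot_fwhm(640, 1.4, ratio):5.1f} nm")
```

The point of the scaling is that the resolution is no longer fixed by the wavelength alone: raising the depletion intensity shrinks the effective fluorescing spot, in principle without limit.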

Figure 1. In stimulated emission depletion microscopy, one laser excites fluorophores and a second—here, a donut-shaped stimulated emission beam—quenches them. By configuring the two beams to overlap concentrically, fluorescence can be restricted to an effective excitation spot much smaller than the diffraction limit. (Adapted from K. I. Willig et al., Nature 440, 935, 2006, doi:10.1038/nature04592.)


Hell and collaborator Jan Wichmann described the idea in a 1994 paper and dubbed the technique stimulated emission depletion (STED) microscopy.1 But with neither a lab in which to build the microscope nor the funding to start one, Hell first had to convince people that it could work.

“The first few funding applications were rejected,” he says. But in 1998 he finally won a grant and a position at the Max Planck Institute for Biophysical Chemistry in Göttingen, Germany. Within about a year he and his graduate student Thomas Klar obtained the first STED results: 100-nm-resolution images of a dispersion of pyridine nanocrystals.2 A year later the researchers had snapped superresolution images of living things—yeast and Escherichia coli.3 

“Hell really sensitized the world to going beyond the diffraction limit,” recalls Moerner. “Once spatial resolution rose to the top of people’s minds, it kind of opened up the floodgates.”

Moerner himself wasn’t much interested in microscopy early in his career. During the 1980s, as a scientist at IBM’s Almaden Research Center, he worked mainly on spectral hole burning in cryogenically cooled, fluorophore-doped materials. Because of variations in the fluorophores’ local environments, the doped materials’ absorption peaks are superpositions of the much narrower absorption peaks of the individual fluorophores. With a narrowband laser, then, one can alter the optical properties of just those fluorophores that absorb in a narrow frequency range and thereby create a dip, or hole, in the broader peak. Moerner and his colleagues envisioned using those holes as bits for optical storage.
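A toy simulation sketches that picture; all numbers are dimensionless and hypothetical, chosen only to make the inhomogeneous broadening and the burned hole visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorentzian(nu, center, width):
    """Narrow absorption line of a single fluorophore at a given detuning."""
    return (width / np.pi) / ((nu - center) ** 2 + width ** 2)

nu = np.linspace(-50, 50, 2001)          # frequency (detuning) axis
centers = rng.normal(0.0, 15.0, 3000)    # each molecule's local-environment shift
homogeneous_width = 0.2                  # single-molecule linewidth << band width

# Inhomogeneously broadened band: a superposition of all the narrow lines.
band = sum(lorentzian(nu, c, homogeneous_width) for c in centers)

# "Burn" a hole: a narrowband laser at nu = 5 switches off the molecules
# whose absorption lines overlap the laser frequency.
survivors = centers[np.abs(centers - 5.0) > 3 * homogeneous_width]
burned = sum(lorentzian(nu, c, homogeneous_width) for c in survivors)

idx = np.argmin(np.abs(nu - 5.0))
print("relative depth of the hole at the laser frequency:",
      1 - burned[idx] / band[idx])
```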

To probe the fundamental limits of hole burning, Moerner sought to identify the conditions under which the discrete nature of the individual fluorophores becomes apparent as statistical fluctuations in the spectrum of the ensemble. To give themselves the best chance of seeing those fluctuations, Moerner and then-postdocs Thomas Carter and Lothar Kador used a sensitive technique known as frequency modulation spectroscopy. Instead of detecting absolute absorption rates, frequency modulation spectroscopy exploits nonlinear wave mixing to detect differential absorption rates between frequency pairs.

To the researchers’ surprise, not only did the technique reveal statistical fluctuations, but some of those fluctuations corresponded to single fluorescing pentacene molecules.4 Single molecules had never been detected in the strongly scattering environs of a condensed-matter system, and Moerner’s discovery inspired a wave of similar experiments. In 1990 a group led by Michel Orrit detected emissions from single pentacene molecules.5 Later, at Bell Labs, Betzig and colleague Harald Hess detected light from single electron–hole recombination centers in cooled semiconductor quantum wells.6

Betzig and Hess’s experiment was unique in an important way. Much like with Moerner’s doped crystals, the spectral peak of a quantum well is a superposition of the narrower peaks of individual recombination centers, but because those peaks tend to overlap, they can’t be resolved readily in the well’s spectrum. Likewise, the centers were too densely distributed in space to resolve them in microscope images. But when the researchers used a near-field microscope to collect position-dependent spectra and plotted the results in three dimensions—x, y, and wavelength λ—each recombination center formed its own, well-isolated spot.

“A lightbulb eventually popped on,” says Betzig. “If you could map a population of fluorophores in the same way—not just in xy space but also in some third dimension, say, wavelength—you might be able to isolate them in the higher dimension even if they were packed densely in xy space.” Betzig had previously shown that the coordinates of a single, isolated fluorophore could be determined to within about 10 nm by fitting its point-spread function to a Gaussian curve. (See Physics Today, May 1994, page 17.) By finding all the fluorophores’ coordinates and then collapsing them back into xy space, he figured, one should be able to map features much smaller than the diffraction limit.
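A minimal sketch of that localization step, assuming a simple symmetric-Gaussian model for the point-spread function and shot-noise-limited camera data (the pixel size, photon count, and PSF width below are illustrative values, not Betzig’s):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Illustrative parameters: a diffraction-limited spot ~250 nm across (FWHM),
# sampled on a 20 x 20 patch of 100-nm camera pixels.
pixel_nm, n_pix = 100.0, 20
true_x, true_y = 1032.0, 968.0     # the emitter's actual position, in nm
psf_sigma = 110.0                  # Gaussian stand-in for the PSF width
photons = 5000

xs = (np.arange(n_pix) + 0.5) * pixel_nm
X, Y = np.meshgrid(xs, xs)

def gaussian2d(coords, x0, y0, sigma, amplitude, background):
    """2D Gaussian blob: the fitting model for a single-molecule image."""
    x, y = coords
    return (amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                               / (2 * sigma ** 2)) + background)

# Simulate one shot-noise-limited camera frame of a single fluorophore.
ideal = gaussian2d((X, Y), true_x, true_y, psf_sigma,
                   photons / (2 * np.pi * psf_sigma ** 2) * pixel_nm ** 2, 2.0)
frame = rng.poisson(ideal)

# Fit the model to recover the center far more precisely than the spot width.
p0 = (xs.mean(), xs.mean(), 150.0, frame.max(), frame.min())
popt, _ = curve_fit(gaussian2d, (X.ravel(), Y.ravel()), frame.ravel(), p0=p0)

print(f"true center  : ({true_x:.1f}, {true_y:.1f}) nm")
print(f"fitted center: ({popt[0]:.1f}, {popt[1]:.1f}) nm  "
      f"(PSF width ~{psf_sigma * 2.355:.0f} nm FWHM)")
```

With a few thousand collected photons, the fitted center typically lands within a few nanometers of the true position, even though the spot itself is hundreds of nanometers wide.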

Betzig formally proposed his idea7 in 1995. But as it turned out, separating fluorophores by wavelength didn’t work well enough to dramatically improve on the diffraction limit. The approach would have to wait another decade for the breakthrough that would fully unlock its potential.

In the mid-1990s Moerner left IBM for the University of California, San Diego (UCSD), where he began experimenting with green fluorescent protein (GFP), a natural fluorophore first discovered in bioluminescent jellyfish. Roger Tsien, also at UCSD, had developed yellow-fluorescing mutants of the protein, and he gave a few samples to Moerner. (Tsien later shared a Nobel Prize for his work on GFP; see Physics Today, December 2008, page 20.)

Robert Dickson, then a postdoc in Moerner’s lab, noticed unusual behavior while working with one of the mutants. The protein blinked, its fluorescence turning on and off at random intervals on the order of seconds. After several minutes, the blinking stopped, and the protein turned off completely. It wasn’t unusual for a fluorophore to go dark; most fluorophores bleach, or permanently turn off, after fluorescing for a time. More surprising was that Dickson was able to restore the molecule’s fluorescence by irradiating it with light of a different color.8

Betzig didn’t learn about the curious behavior, dubbed photoactivated switching, until 2005, while he and Hess were pitching a research proposal at Florida State University in Tallahassee. When their host, Michael Davidson, mentioned the switchable GFP, “it immediately became obvious to Harald and me that we had found the missing ingredient in my superresolution idea,” says Betzig.

The mutants that Davidson described were slightly different from Moerner’s: They had to be activated before they could fluoresce, and when they bleached they did so irreversibly. Betzig and Hess figured that with a sufficiently weak activation pulse they should be able to turn on just a few of a sample’s fluorophores at a time, so that no two point-spread functions overlap, as illustrated in figure 2. After localizing and bleaching those fluorophores, they could activate a new sparse subset and repeat the cycle. The localized images would then be combined to produce the superresolution image. Essentially, time replaced wavelength as the crucial third dimension in the imaging scheme. They dubbed the approach photoactivated localization microscopy, or PALM.
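In outline, the acquisition loop looks like the following sketch: a schematic of the activate–localize–bleach cycle, not Betzig and Hess’s actual code, with made-up molecule positions and localization error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical structure too fine for diffraction-limited imaging:
# fluorophores labeling a 40-nm-wide line (positions in nm).
molecule_xy = np.column_stack([rng.uniform(0, 2000, 3000),
                               1000 + rng.uniform(-20, 20, 3000)])
active = np.ones(len(molecule_xy), dtype=bool)   # not yet bleached

localized = []
while active.any():
    # Weak activation pulse: switch on only a sparse random subset,
    # so that no two point-spread functions overlap.
    candidates = np.flatnonzero(active)
    subset = rng.choice(candidates, size=min(10, len(candidates)), replace=False)

    # Localize each activated molecule (stand-in for the Gaussian fit above),
    # with ~10 nm localization error, far below the ~250 nm PSF width.
    localized.extend(molecule_xy[subset] + rng.normal(0, 10, (len(subset), 2)))

    # Bleach the subset before activating the next one.
    active[subset] = False

localized = np.array(localized)
print(f"{len(localized)} localizations; spread across the line: "
      f"{localized[:, 1].std():.0f} nm (vs. a ~250 nm diffraction-limited blur)")
```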

Figure 2. Pointillism writ small. Due to diffraction effects, describable through point-spread functions, fluorescing molecules appear in raw images as diffuse spots much larger than the molecules themselves. By activating only some of those molecules at any given time t, to ensure that no two point-spread functions overlap, one can fit each point-spread function to a Gaussian distribution and precisely localize the subset molecules’ coordinates. Repeating the process and then stacking the localized images reveals features much smaller than the diffraction-limited point-spread functions. (Adapted from A. Miyawaki, Nat. Rev. Mol. Cell Biol. 12, 656, 2011, doi:10.1038/nrm3199.)


After the Tallahassee trip, Betzig and Hess immediately got to work building the microscope in Hess’s living room. (Both men were in the midst of a multiyear hiatus from academia and were unemployed at the time.) A few months later, they carted their microscope to the National Institutes of Health, where they collaborated with Jennifer Lippincott-Schwartz and George Patterson to image biological samples. “Within six months of coming up with the concept,” says Betzig, “we had the data that ended up in the Science paper that won the Nobel Prize.”

That paper, which included 10-nm-resolution images of fibroblasts and kidney cells,9 was first published online in August 2006. The same month, Xiaowei Zhuang and her group at Harvard University published a paper describing a nearly identical technique, stochastic optical reconstruction microscopy. That December, Samuel Hess’s group at the University of Maine published results with a third technique they dubbed fluorescence photoactivation localization microscopy. Says Betzig, “Superresolution was in the air.”

Both STED and PALM have evolved since their debuts. Many STED microscopes now use continuous-wave lasers in place of the pulsed scheme in Hell’s original setup. PALM, initially designed for 2D imaging, has now been extended to 3D. Both techniques are available in commercial microscopes. Collectively, STED, PALM, and their various cousins have unveiled many of the tiniest cogs in the machinery of life, including pore structures in nuclear membranes (see figure 3a), stress fibers in tumor cells (see figure 3b), and dendritic spines in neuronal cells.

Figure 3. Under the nanoscope. (a) Nanoscale pores (green) and the proteins (red) that anchor them to the nuclear membrane are obscured in a confocal microscopy image of a frog cell but clearly visible in a stimulated emission depletion (STED) image. (Adapted from F. Göttfert et al., Biophys. J. 105, L01, 2013, doi:10.1016/j.bpj.2013.05.029.) (b) So-called stress fibers, bundles of actin filaments, in a human osteosarcoma cell are visible in an image generated with photoactivated localization microscopy. (Image courtesy of Eric Betzig.)


“But more than that,” says Orrit, “the techniques have led to experiments that people never would have imagined doing—measuring temperature at nanoscales, measuring forces on macromolecules. I’m convinced that superresolution was indeed a revolution in optical microscopy. It’s a branch of nanotechnology that is still growing to this day.”

Stefan Hell was born in Arad, Romania, in 1962. He earned a doctorate in physics from Heidelberg University in 1990 and has held positions at the European Molecular Biology Laboratory in Heidelberg, the University of Turku in Finland, the German Cancer Research Center in Heidelberg, and the Max Planck Institute for Biophysical Chemistry, where he is director.

W. E. Moerner, born in Pleasanton, California, in 1953, attended Washington University in St. Louis, where he earned three bachelor’s degrees—in physics, electrical engineering, and math. He earned a doctoral degree from Cornell University. In 1998 he moved his lab from UCSD to Stanford University, where he is currently the Harry S. Mosher Professor in Chemistry and a professor by courtesy in applied physics.

Eric Betzig was born in Ann Arbor, Michigan, in 1960. After earning a bachelor’s degree in physics from Caltech, he completed a doctoral thesis on near-field microscopy at Cornell. Disillusioned with progress in near-field techniques, he left Bell Labs in 1995 and spent the ensuing years consulting and working for his father’s machine company. In 2006 he took a position at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, where he is currently a group leader.

1. S. W. Hell, J. Wichmann, Opt. Lett. 19, 780 (1994).
2. T. A. Klar, S. W. Hell, Opt. Lett. 24, 954 (1999).
3. T. A. Klar et al., Proc. Natl. Acad. Sci. USA 97, 8206 (2000).
4. W. E. Moerner, L. Kador, Phys. Rev. Lett. 62, 2535 (1989).
5. M. Orrit, J. Bernard, Phys. Rev. Lett. 65, 2716 (1990).
8. R. M. Dickson et al., Nature 388, 355 (1997).
9. E. Betzig et al., Science 313, 1642 (2006).