The challenges facing electron-based metrology for post-CMOS technology are reviewed. Directed self-assembly, nanophotonics/plasmonics, and resistive switches and selectors are examined as exemplars of important post-CMOS technologies. Materials, devices, and architectures emerging from these technologies pose new metrology requirements: defect detection, possibly subsurface, in soft materials; accurate measurement of the size, shape, and roughness of structures for nanophotonic devices; contamination-free measurement of surface-sensitive structures; and identification of subtle structural, chemical, or electronic changes of state associated with switching in non-volatile memory elements. Electron-beam techniques are examined in the light of these emerging requirements. The strong electron-matter interaction provides measurable signals from small sample features, rendering electron-beam methods more suitable than most for nanometer-scale metrology, but, as is to be expected, solutions to many of the measurement challenges are yet to be demonstrated. The seeds of possible solutions are identified where they are available.
Electron-beam-based metrology is indispensable in the manufacture of integrated circuits (ICs) and drives progress in instrumentation and data analysis.1 Transmission and scanning electron microscopes (TEMs and SEMs) reveal high-resolution structural and material composition details, perform dimensional measurements, identify and analyze structural defects, test circuits in operation, and provide insights into device behavior and causes of failure. The continuing reduction in device size in the incumbent complementary metal oxide semiconductor (CMOS) technology, the introduction of new materials such as III-Vs, Ge, carbon, and multiferroics, the increasingly three-dimensional nature of devices [e.g., FinFETs, Gate-All-Around (GAA) transistors] and architectures (e.g., 3D cross-point memories), and novel fabrication approaches (e.g., nanoimprint lithography and directed self-assembly) present a new set of metrology requirements (Fig. 1)2–5 and even new sample preparation demands.6
The current state and future needs of the complementary optical, scanned-probe, and electron-beam metrology ecosystem for advanced IC fabrication are detailed by Bunday et al.3 Additionally, entirely new computing paradigms, such as neuromorphic computing,7 may permit progress toward energy-efficient computation at the Landauer limit1 of kT ln 2 per bit (2.8 × 10^−21 J at 300 K)2,8,9 and drive the exploration of a zoo of new device types.10 Many of these devices manipulate properties—state variables—other than electronic charge, such as the electric or magnetic dipole, orbital state, or atomic configuration.2 In addition, new photonic elements now incorporated on-chip improve bandwidth and reduce energy consumption. At the single-photon level, nanophotonics will enable technology that harnesses quantum mechanics, i.e., quantum entanglement, superposition, and tunneling, for sensing, cryptography, and computing. Finally, biomolecules may be used as sensing elements in devices11 or, in the form of DNA, as archival memory.12,13
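As a quick sanity check on the figure quoted above, the Landauer bound kT ln 2 can be evaluated directly; this minimal sketch takes only the Boltzmann constant and temperature as inputs:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value since 2019)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy required to erase one bit, E = kT ln 2."""
    return K_B * temperature_kelvin * math.log(2)

energy = landauer_limit(300.0)
print(f"{energy:.2e} J per bit at 300 K")  # ≈ 2.87e-21 J, i.e., the ~2.8e-21 J quoted
```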
These new materials, devices, and architectures present a host of challenges and opportunities: the need for high-resolution imaging and analytical characterization of morphology and chemistry down to the atomic scale; for methods to determine magnetic, electrostatic, and electromagnetic fields; and for techniques to observe all of these dynamically. A primary benefit of electron-beam methods is that electron-material interactions are strong and can be tuned by varying the beam energy. These strong interactions excite electrons both individually and collectively, generate many different signals (Fig. 2), and provide high-resolution spatial and spectroscopic information, making electrons excellent probes for all the properties listed above. Unfortunately, because of such strong interactions, an electron beam does not excite materials selectively in the way that a photon beam can, and it can also significantly perturb the sample of interest. The latter factor leads to the need to strike a balance between delivering enough dose to obtain the required information and limiting the damage so that the sample remains representative of the structure of interest. With these caveats in mind, electron-beam-based methods are clearly not a panacea, but their versatility and availability make them a powerful suite of tools in the metrologist’s armamentarium.
Materials, devices, and fabrication techniques that will play a significant role in computing beyond CMOS include multiferroics, memristors, transistors, switches and selectors, nanophotonics/plasmonics, and directed self-assembly. These come with their own, often overlapping, metrology needs. An exhaustive discussion of these is beyond the scope of this short article; we choose the last three as representative examples. We first discuss the metrology challenges they present, then give a brief survey of recent developments in instrumentation, before considering what must be done to develop new metrologies that meet the demands of these emerging technologies.
DIRECTED SELF-ASSEMBLY
Directed self-assembly—using lithographically patterned chemical and/or topographic features to control the ordering of a self-assembling system—is a very attractive technique for producing highly uniform nanoscale features over large areas, with good registration. Block copolymers phase-segregate at the nanoscale, producing lamellar or cylindrical morphologies suitable for patterning IC features,14,15 while DNA origami can arrange objects such as carbon nanotubes,16 fluorophores,17 or biomolecules18 on a suitably patterned surface. However, this approach can be used in production only if the assembly process produces features of the required quality while meeting the requisite defect levels. In the case of diblock copolymer directed self-assembly, even the measurement of line-edge roughness presents a new challenge.19 A more difficult question is whether or not the self-assembly process can be controlled sufficiently well to avoid defects.20–26 As a first step, it is important to be able to see defects in what are predominantly soft-material systems. For example, the thickness of block copolymer films is usually on the order of the domain spacing—20 nm–40 nm—and is large enough that the morphology can change through the thickness. This means that surfaces may appear defect-free even when there are underlying assembly errors.27,28 The challenge, then, is to determine the nanoscale three-dimensional morphology of a solid, electron-beam-sensitive film. Such defects can be resolved using TEM tomography when a process such as sequential infiltration synthesis (SIS) has increased the atomic number in one phase, increasing the material contrast and rendering the morphology relatively stable with respect to beam-induced damage.29,30 However, TEM is not suitable for defect review in a production process, and it is not clear whether a suitable combination of operating conditions and signal detection schemes exists to make SEM a viable alternative.
Polymeric and biological materials are not the only ones sensitive to damage. Two-dimensional materials, such as graphene, hexagonal boron nitride, and the transition metal dichalcogenides (Mo and W with S, Se, and Te), are an obvious class, but similar considerations apply to many nanoscale structures, where surface-to-volume ratios are very large and the mobility of surface atoms is much higher than in the bulk.31,32 Understanding the behavior of these materials depends on knowing not only the configuration of single-atom defects but also the precise positions of atoms, which in turn implies an ability to image such defects in a non-perturbative fashion. This latter requirement is particularly daunting since, under many standard imaging conditions, it is impossible to tell whether the sample is in a pristine state.
NANOPHOTONICS AND PLASMONICS
Photonic devices such as resonators and waveguides have dimensions on the order of several wavelengths of light, but their performance is sensitive to variations in size and shape and to surface roughness at the nanometer scale.33 The mismatch between feature size and tolerance makes metrology difficult. For example, a silicon microring whispering gallery resonator may have a diameter of 15 μm, a ring cross-sectional width of 0.4 μm, and a thickness of 0.3 μm, but to control the resonant wavelength to within 1 nm, the diameter must be controlled to less than 10 nm, the ring cross-sectional width to within 2 nm, and the sidewall angle to within 0.2°.34 For low propagation loss (e.g., 0.1 dB/cm), surface roughness at the 1 nm level may be required.35–37 Unlike electronic devices, the absolute size of photonic devices is tied to an external length scale, defined by the operating wavelength. Any deviation in dimension from the design translates into a variation in operating wavelength, which can only be compensated for by using tunable sources or components, through post-fabrication trimming processes, or through on-chip tunable elements such as heaters, all of which come at the expense of cost, complexity, or operating power. For example, a temperature change of ≈10 °C is needed to achieve a 1 nm change in operating wavelength.38,39 Consequently, both precision and accuracy are critical in fabrication, and therefore in metrology. In a photonic crystal structure, which resembles an array of via holes, a 5 nm deviation in hole diameter from a nominal value of 170 nm leads to a 10 nm shift in operating wavelength.40 In addition, photonic crystal structures may have non-circular features, meaning that shape measurements may also be required—already an issue in the metrology of lithographic masks containing optical proximity effect correction features.41,42
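The scaling behind these tolerances can be illustrated with the whispering-gallery resonance condition m·λ = n_eff·π·D: at fixed mode number m and effective index n_eff, a fractional diameter error maps onto the same fractional wavelength shift, δλ/λ ≈ δD/D. The sketch below assumes a telecom operating wavelength of 1550 nm, which is not stated in the text:

```python
def wavelength_shift(wavelength_m, diameter_m, diameter_error_m):
    """First-order resonance shift of a microring resonator.

    From m * lam = n_eff * pi * D (fixed m and n_eff), a diameter
    error dD gives dlam = lam * dD / D.
    """
    return wavelength_m * diameter_error_m / diameter_m

# D = 15 um from the text; the 1550 nm operating wavelength is an assumption.
shift = wavelength_shift(1550e-9, 15e-6, 10e-9)
print(f"10 nm diameter error -> {shift * 1e9:.2f} nm resonance shift")  # ~1 nm
```

Consistent with the numbers in the text, holding the resonance to 1 nm requires controlling the 15 μm diameter to roughly 10 nm, i.e., a relative tolerance below 10^−3.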
The materials used in photonic devices also present difficulties: low-loss waveguides are often fabricated in insulating materials, such as SiO2 or Si3N4, leading to charging-induced image distortion. While some chip-scale photonic structures are built using materials compatible with Si, others require heterogeneous integration using, e.g., III-V materials.2 For example, on-chip quantum information processing relies on high-quality single-photon sources, such as InAs quantum dots in GaAs. Precise alignment of these sources to the passive photonic circuit components is critical for efficient coupling.43
While photonic devices have sizes comparable to the wavelength of light, plasmonic structures permit light to be confined to deep sub-wavelength dimensions and are scaled accordingly. They thus offer the promise of photonics-level bandwidth but with electronics-level density.44–46 However, the small sizes of these devices make their performance correspondingly sensitive to small deviations from the designed dimensions. We also note that the surface sensitivity of plasmonic and photonic structures that makes them so useful for biomolecule detection47 renders them sensitive to any surface modification.48 Accurate correlation between optical behavior and measured dimensions therefore requires that the measurement method introduce minimal contamination.
RESISTIVE SWITCHES AND SELECTORS
Emerging classes of two-terminal devices such as resistive switches (memristors, valence-change memory, phase-change memory, electrochemical metallization cells, magnetic tunnel junctions, ferroelectric tunnel junctions, etc.) as well as two-terminal threshold switches (both chalcogenide- and oxide-based) present new opportunities and challenges for electron-beam-based characterization in the post-CMOS world.49,50 Resistive switches have recently been commercialized as digital memories and are also finding applications as both analog memories and artificial synapses in neural networks.49,51,52 Threshold switches are already used as selectors in memory arrays and are actively being investigated as relaxation oscillators in new oscillatory, phase-based non-Boolean computers.53,54
To achieve compatibility with the back-end-of-line (BEOL) manufacturing processes used in semiconductor foundries, these devices often rely on an adventitious dielectric-breakdown-based “forming process” to selectively crystallize or nucleate filaments in the devices, reaching local temperatures of hundreds of degrees.55,56 The large voltages and high speeds of these processes are well documented to produce capacitive current overshoots, which can lead to dramatic device damage.57,58 This problem is typically mitigated in integrated CMOS systems,51,59,60 but the fast transients involved are challenging to observe within the limited bandwidth of a conventional TEM or SEM system.61 The issue persists even after forming, since the most useful classes of devices are known to operate over orders of magnitude in time scale, down to sub-nanosecond switching speeds.62–65 Consequently, many investigations are limited to ex situ characterization,66–68 but, since the switching effects are believed to involve dramatic and dynamic local structural changes, significant information is lost when looking at the static device, particularly information related to endurance and ageing—both of which are known to be limiting factors in these systems.69,70 This issue is particularly acute in the case of threshold switches, since the device operational state is completely absent without an applied bias. More importantly, the introduction of confounding artifacts from device handling and lamella extraction is an unavoidable concern. Nevertheless, in situ experiments have become increasingly common across all classes of devices, and these challenges are being met.71–77
In situ and ex situ characterization of some classes of devices has been especially successful due to the presence of strong Bragg reflections (due to distinct crystalline and amorphous phases73 in phase-change memories) as well as combined Z-contrast and Bragg reflections (in electrochemical metallization cells due to clear distinguishability of Ag or Cu from the matrix, e.g., Fig. 3).74,75 The distinct phase and compositional differences in these materials make the experimental interpretations of these systems straightforward. Certain classes of resistive switches, particularly oxide-based resistive switches, are more challenging since the changes are subtler and thought to involve migration of cations or oxygen vacancies—species which can move coherently within the lattice and induce resistive changes even in the absence of an obvious change in material phase.
A wide range of techniques, including electron energy-loss spectroscopy (EELS),68,77 Auger electron spectroscopy (AES),77 energy-dispersive x-ray spectroscopy (EDX),78 and electron holography,71 in addition to Bragg contrast,76,79 have all been employed, sometimes leading to conflicting results between studies—a problem likely caused by both significant experimental differences and the relatively weak signals produced by the often subtle material changes.66 Very strong contrast has been observed in electron beam-induced current (EBIC) observations of these systems with low-energy electrons,80 and recent studies have provided new insights into the complex image formation processes involved.81,82 Though for some systems the interpretation is simple, it is probable that, for many devices and materials, it will be necessary to engage in complex, multi-modal analysis utilizing all possible signals over the widest possible range of electron-beam energies. This will become especially challenging when extending these techniques to at-line or in-line measurements for fast, reliable characterization that feeds back into the fabrication process. Electron-beam-induced damage, particularly through electrostatic discharge, is a concern; for certain device classes, however, some measurements, such as EBIC, require only low-energy electrons, which penetrate no further than the top metal layers, minimizing device damage.
These challenges may already be evident even for extant technologies currently employed in niche applications, such as ferroelectric tunnel junctions in embedded memories.83 Continuous tunability of ferroelectric domains has been demonstrated and is useful for the implementation of spiking neuromorphic networks.84,85 Continuous variability of the resistive state poses new challenges for this class of materials, which has historically been used for single-bit operation with a fast, destructive read. In situ measurements will be critical for understanding ferroelectric domain nucleation (which must be controlled, and domain sizes minimized) as well as the lifetime and failure mechanisms under the long-term low biases needed for operation in neuromorphic networks, in addition to high-bias programming pulses.
DIMENSIONAL METROLOGY BASICS
Similar measurement problems exist for all the examples of emerging materials, devices, and architectures discussed above. The challenges for metrology now extend beyond the types of critical dimension (CD) measurements familiar to the IC industry, in which repeatability and precision at or below the instrument’s spatial resolution have been a routine part of the manufacturing process, to include high-precision and high-accuracy measurements over larger length scales; shape measurement; combined compositional, structural, morphological, and spectroscopic measurements; measurements of magnetic, electrostatic, and electromagnetic fields; and the option to perform all of the above dynamically. Below, we review some of the considerations that apply to basic dimensional measurements and discuss limitations and recent advancements in signal generation, detection, and analysis, and their application to more complex measurements.
Dimensional measurements rely on combinations of position estimates of the boundary surfaces that define features. For example, pitch or periodicity is the difference between the positions of a feature and its nearest periodic equivalent, height is the difference between an object’s top surface and the surface on which the object rests, and size is the difference between opposite boundary surfaces. Each observable surface divides dissimilar materials and is therefore inherently asymmetrical, leading to systematic errors in the measurement of its position. These errors depend upon how the measuring tool senses the surface. The importance of this probing error in a compound measurement varies: it is expected to be least in a displacement measurement and worst in a size measurement, where, instead of cancelling, the mirroring between, e.g., left- and right-facing edges causes the systematic error to double. Between these extremes, pitch (where surfaces have random small departures from the same average shape and orientation) and height (where surfaces may differ systematically in shape or composition but face in the same direction) are intermediate. The position of a boundary must always be inferred from the instrument’s signal (e.g., intensity vs. position), which is determined both by the intrinsic resolution of the instrument and by multiple aspects of the sample—composition, edge position and shape, proximity of neighboring features, etc.—that affect it. These confounding effects can be separated, provided we have a good theoretical understanding of the probe-sample interaction and that we use the information from the entire signal profile rather than only, e.g., an intensity threshold-crossing.
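The cancellation argument can be made concrete with a toy calculation. Assume, purely for illustration, that the probe shifts every detected edge outward along the direction that edge faces by a fixed bias b; then like-facing edges (pitch) see the bias cancel, while mirrored edges (size) see it double:

```python
def detected_edge(true_position, facing, bias):
    """Apparent edge position when probing shifts each edge outward
    along its facing direction (+1 = right-facing, -1 = left-facing)."""
    return true_position + facing * bias

BIAS = 2.0  # nm, hypothetical systematic probing error per edge

# A line of true width 50 nm (left edge at 100 nm) repeated at a 120 nm pitch.
left1 = detected_edge(100.0, -1, BIAS)   # 98.0
right1 = detected_edge(150.0, +1, BIAS)  # 152.0
left2 = detected_edge(220.0, -1, BIAS)   # 218.0

size = right1 - left1   # mirrored edges: biases add -> 50 + 2*BIAS = 54
pitch = left2 - left1   # like-facing edges: biases cancel -> 120 exactly
print(size, pitch)
```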
CONTAMINATION, DAMAGE, AND CHARGING
Of course, our understanding of the probe-sample interaction is predicated on the notion that we have some idea of what the sample consists of before we begin our measurement and, importantly, that it remains in that initial state during the measurement. Due partly to the very small number of atoms comprising the interesting features of the sample, it is critical to ensure that both the electron microscope and the sample are kept clean. Depositing an unknown amount of electron beam-induced carbonaceous contamination while measuring the sample hampers or prevents the collection of reliable information. Fortunately, there are known solutions for this problem: by introducing low-energy oxygen, hydrogen, or helium plasma, or UV light,86 and employing good sample preparation and handling practices, it is possible to achieve such cleanliness that even hours of continuous electron-beam exposure will not deposit any noticeable contamination.87,88 For many types of e-beam measurements, ultra-high cleanliness is sufficient, but for some (e.g., Auger microscopy), ultra-high vacuum is also necessary because even a monomolecular layer of water or other material on top of the sample alters the results unacceptably.
Beam-induced sample modification is another significant problem. In the SEM, typical dose rates (electrons per unit area per unit time) are ≈10^4 nm^−2 s^−1, while in the (S)TEM they may range from ≈10^5 nm^−2 s^−1 to ≈10^8 nm^−2 s^−1, and the energy deposited into the sample region of interest can vary by many orders of magnitude, depending on the beam-energy-dependent energy loss and the sample thickness. As discussed earlier, soft materials and nanostructures of all kinds can be particularly susceptible to damage. However, since many material damage processes are dose-rate dependent, if the dose can be delivered sufficiently slowly, excited states can relax before the next excitation arrives, reducing the probability of sample degradation.89 Under these circumstances, it may be possible to image for an almost indefinite time and build up the requisite statistics. As attractive as this possibility is, it places additional requirements on instrument stability, cleanliness, and detector dark counts.
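A toy model of this dose-rate dependence, assuming Poisson electron arrivals and damage that requires a second excitation before the first has relaxed (both assumptions are ours, not from the cited work): the probability of such an overlap is 1 − exp(−rτ) for arrival rate r and relaxation time τ, so slowing delivery at fixed total dose suppresses damage:

```python
import math

def overlap_probability(rate_per_s, relaxation_time_s):
    """Probability that the next electron arrives before an excited
    state has relaxed, for Poisson arrivals at the given rate.
    Toy model: damage requires excitation on top of excitation."""
    return 1.0 - math.exp(-rate_per_s * relaxation_time_s)

tau = 1e-9  # s, hypothetical excited-state lifetime
for rate in (1e6, 1e8, 1e10):  # electrons per second into the excited volume
    p = overlap_probability(rate, tau)
    print(f"rate {rate:.0e}/s -> overlap probability {p:.3f}")
```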
Surface and internal charging alters the energy and trajectory of the electrons leaving the sample. The resulting unwanted image displacements and distortions are particularly problematic in SEM systems. The combination of large sample thicknesses and high, material- and angle-dependent secondary electron (SE) yields at typical SEM beam energies leads to complex, dynamically varying phenomena, motivating the development of numerous beam-scanning approaches to mitigate them. Sample charging effects may also be observed in the TEM in cases where large numbers of secondary electrons are produced, for example, by beam-gas interactions in in situ measurements.90 While this phenomenon is deleterious in the context of the TEM, it can be used for charge management in the SEM: in the presence of ≈100 Pa of water vapor, ions generated by the primary-beam electrons above the sample surface compensate for the negative charge produced by electron irradiation. With a suitably implemented detector, signal amplification is ensured even at short working distance.91 As a positive feature, charging may sometimes enable novel contrast mechanisms that reveal new or otherwise invisible information about the structure of interest. For example, carbon nanotubes embedded in epoxy can create conductive pathways to ground, locally modifying the surface potential, and thus the secondary electron yield, making it possible to “see” much deeper into the sample than the secondary electron escape depth (<20 nm) would otherwise suggest.92
Once a signal that is representative of a sample has been obtained, an estimate of the structure of that sample can be made. The quantities of interest—the measurands—are only indirectly related to the directly measurable signals (e.g., electron count vs. position in an image), so assigning values to the relevant measurand requires us to know (or assume) some relationship between that measurand’s value and the detected signal. The truer the assumed relationship, the more accurate the assigned value. A more rigorous model can predict the errors that would be made with a less sophisticated model93 or improve the accuracy of a measurement.94–97 Such an improved measurement may take the form of a least-squares fit of the simulated, model-based signal to the measured one, with the measurand and the values of secondary characteristics among the fit parameters. In this way, estimates for all unknown parameters (e.g., size, shape, and composition) that affect the signal are obtained. To the extent that the effects are uncorrelated, their causes will be separable. Standard statistical methods can be used to estimate the extent of correlations and their effect on the uncertainty of the assigned values, while advanced methods, such as compressed sensing,98 may reduce dose requirements.
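A minimal sketch of such model-based fitting, with a logistic smoothed step standing in for a physics-based simulated SEM edge signal (the model form, noise level, and parameter values are all illustrative, and only the edge position is floated here):

```python
import numpy as np

def edge_model(x, position, width, base, amplitude):
    """Smoothed-step intensity profile: a crude stand-in for a
    physics-based simulated edge signal."""
    return base + amplitude / (1.0 + np.exp(-(x - position) / width))

rng = np.random.default_rng(0)
x = np.linspace(0, 40, 401)  # nm
truth = dict(position=17.3, width=2.0, base=0.2, amplitude=1.0)
signal = edge_model(x, **truth) + rng.normal(0, 0.02, x.size)

# Grid-search least squares over the edge position (the other parameters
# are held fixed for brevity; a full fit would float them all).
candidates = np.arange(10.0, 25.0, 0.01)
sse = [np.sum((signal - edge_model(x, p, 2.0, 0.2, 1.0)) ** 2) for p in candidates]
best = candidates[int(np.argmin(sse))]
print(f"fitted edge position: {best:.2f} nm (true {truth['position']} nm)")
```

Because the fit uses every pixel of the profile rather than a single threshold crossing, the recovered position averages over the noise in the whole curve.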
Rigorous modeling of the sample/instrument interaction is necessary to reduce probing error. As such, it is most important for measurements for which there is little or no cancellation of systematic errors (e.g., size and height). Incorrect models generally produce biased measurements, and the bias may be larger than other errors.99,100 The amount of bias is generally a function of secondary sample characteristics, so it affects the accuracy even of comparative measurements. In addition to correcting bias, rigorous models improve measurement repeatability because fitting a curve uses more of the collected information than does a threshold crossing, thereby averaging the assigned parameter values over many image pixels.94 Models aid the optimization of a measurement by predicting the measurement mode that has the lowest uncertainty for a given measurand. Modeling is also necessary to realize the promise of combined or “hybrid” metrology, measurement by combining techniques with complementary strengths;101–103 e.g., optical metrology is sensitive to a feature’s height, while SEM is sensitive to its lateral size. The results of a combined measurement may be more accurate than with either method used alone. However, if the techniques have different uncorrected biases (“methods divergence”),104 their combination is correspondingly uncertain, and much of the advantage of using independent information about the same attribute is lost.
In transmission (TEM, STEM) and backscattered electron (BSE) imaging modes, the signal comes from electrons with energies in the range ≈1 keV < E < 300 keV, where the scattering physics is relatively well known.105 In the SEM, on the other hand, the secondary electron signal has been preferred for imaging and critical dimension metrology in the semiconductor industry for reasons likely to carry over to emerging technologies: its availability even for thick (non-transmitting) samples, its higher signal (peak SE yields ≈1), and the short SE range (escape depths ≈1 nm) that improves spatial resolution relative to that of BSE. This preference presents an important challenge for rigorous modeling, however, because SE generation and scattering are much less well understood at energies ≤1 keV. Elastic electron-atom scattering is usually treated as Mott scattering,106,107 but this treatment must break down as the electron wavelength approaches the interatomic spacing. The best secondary electron generation models are based on the differential inelastic cross section of Pines,108 but the cross section depends on the complex energy- and momentum-dependent dielectric function of the sample material. Since this function, particularly its momentum dependence, is incompletely known (it is usually measured only optically, at zero momentum transfer), different approximations are in use: some with one dispersion relation and some with another, some based on separating the dielectric function into Lindhard and others into Mermin components, and some with and others without exchange and correlation effects.109–114 Differences in calculated inelastic mean free paths among most of these models are insignificant for E ≳ 200 eV but may be greater than 50% when E < 50 eV, a region important for the SE cascade since energy losses tend to peak near the plasmon energy (typically around 20 eV). Whichever approximation one chooses, the theory is a theory of primary-electron energy loss and momentum change.
The corresponding estimated energy and momentum of the SE depend on how one assigns the initial state of the target electron. Obtaining data good enough to guide model selection is also a challenge. To determine which of the available models most closely approximates nature, it is necessary to test the models, particularly at the low primary-electron beam energies where their predictions differ. Data on individual scattering events (e.g., inelastic mean free paths) are more informative for this purpose than data (e.g., SE yields) that reflect the result of a chain of events. The former are only sparsely available for energies below 50 eV; although the latter are more plentiful,115 there is considerable inter-laboratory variation.
With a clean, unperturbed sample, and effective models in hand, a great deal of information can be extracted from a limited set of data. Model-based fitting, stereo photogrammetry, and shape-from-shading are non-destructive SEM techniques that have demonstrated reconstruction of 3D surface topography. Model-based fitting determines the parameter values of a model shape,94,95,116 and sub-1 nm accuracy117 has been demonstrated for shapes with relatively few parameters. Stereo photogrammetry is 3D reconstruction based upon identification of homologous points in images of the sample from two or more perspectives.118–120 Accuracy at the nanometer scale has so far been elusive121,122 and typically requires surface texture across images to identify homologues. Shape-from-shading123 relies upon brightness variation with angle, sometimes using multiple detectors at different positions. It is suitable for samples with smooth surfaces and has been applied mainly to micrometer-scale structures but has more recently been used for conical nano-structures.124 3D volumetric reconstructions for thick, non-transmitting samples can be obtained by alternating SEM imaging and focused ion-beam (FIB) milling in a dual-beam FIB/SEM to remove successive thin layers,125 though this technique has yet to demonstrate sub-5 nm resolution.126 High-resolution 3D reconstructions of electron-transparent samples can be performed by tomography in the TEM,127 and atomic resolution has been demonstrated.128,129
SIGNAL GENERATION AND DETECTION: LIMITS AND RECENT ADVANCES
So far, we have discussed the factors that affect the quality of a measurement, including the need to ensure that the sample is, and remains, representative of the structures of interest during a measurement, and the essential role that modeling plays, both in understanding how the detected signal depends on the interaction of probe and sample and in reconstructing the sample geometry from that detected signal. In what follows, we consider some of the instrumental constraints and recent developments that affect signal generation and detection, and the new types of measurements these developments are enabling.
The fundamental limit of uncertainty in determining the position of a feature, be it a single atom or a line edge, depends on the signal-to-noise ratio (SNR). In a typical shot-noise-limited situation, where the incident electron statistics, signal generation, and signal detection can all be represented as Poissonian processes, the noise-to-signal ratio (σ/μ) is given by √(1/Ne + 1/Ng + 1/Nd), where Ne, Ng, and Nd are the mean numbers of incident electrons, signal generation events, and detection events, respectively.130 This relationship illustrates that the process with the lowest occurrence probability dominates the SNR (the same considerations apply in lithography).131 The beam current determines the number of incident electrons, and the availability of high-brightness monochromated sources enables the formation of high-current, sub-nanometer-diameter probes. In addition, the high degree of coherence in modern field-emission sources is enabling novel imaging approaches, such as the use of vortex beams in conjunction with electron energy-loss spectroscopy to map magnetic and plasmonic behavior at the nanoscale.132 However, as exciting as these developments are, space-charge effects133 will always limit the maximum current available in a single probe, and thus system throughput. This constraint has driven the development of multi-beam systems for imaging134 as well as lithography.135,136 As noted above, the incident beam energy strongly influences the number of signal generation events. The number of detection events depends on both the quantum efficiency of the detector and the solid angle the detector subtends relative to the signal emission solid angle.
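The relation can be sketched in a few lines; the counts below are illustrative, and the point is that the smallest count dominates the noise:

```python
import math

def noise_to_signal(n_electrons, n_generated, n_detected):
    """Poisson noise/signal ratio: sigma/mu = sqrt(1/Ne + 1/Ng + 1/Nd)."""
    return math.sqrt(1 / n_electrons + 1 / n_generated + 1 / n_detected)

# Detection is the bottleneck here: sigma/mu is set almost entirely by Nd.
print(noise_to_signal(10000, 5000, 100))    # ~0.10
# Once detection improves, all three terms contribute comparably.
print(noise_to_signal(10000, 5000, 10000))  # ~0.02
```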
At first sight, it might appear that the number of incident electrons, given by IB × td, where IB is the beam current and td is the beam dwell time, can accumulate over any time interval, allowing atomic-scale accuracy and repeatability. However, many dynamic processes influence the choice of these two parameters. Those that depend on the instrument include vibrations, drifts, disturbing stray fields, and electron-beam-induced contamination. Atomic resolution has been possible for decades with (S)TEM but has been achieved only recently with the SEM. This lag in performance can, in large part, be attributed to the differences between the samples and sample stages in the two types of instruments: in a (S)TEM, a very small sample, secured in a sturdy but lightweight holder/stage, is bolted into the heavy electron-optical column, so their independent motion is extremely limited, whereas the SEM must accept large samples, which require long-range, many-degrees-of-freedom motion. Atomic-resolution imaging requires frame-to-frame stability on the order of 50 pm, which is much more difficult to achieve in the SEM. Fast laser interferometry, vibration-free laminar flow of the lens cooling liquid, and advanced scanning methods can tremendously improve both accuracy and repeatability. While any remaining mechanical drift can be compensated for by using high-frame-rate imaging and autocorrelation image registration methods,137 the success of these methods depends on having sufficient information available in the frame, which in turn depends on the number of events detected; thus, mechanical stability, beam current, detector efficiency, and speed are linked.
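The dose bookkeeping and drift compensation described above can be sketched as follows. The registration helper uses a standard FFT cross-correlation peak search, offered here as one simple instance of frame-registration methods, not a reproduction of the specific algorithms of ref. 137:

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge, C

def incident_electrons(beam_current_A, dwell_time_s):
    """Mean number of incident electrons per pixel, N_e = I_B * t_d / e."""
    return beam_current_A * dwell_time_s / E_CHARGE

def estimate_drift(frame_a, frame_b):
    """Estimate the integer-pixel shift that registers frame_b onto
    frame_a, from the peak of their FFT-based cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Shifts larger than half the frame wrap around to negative values.
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, xcorr.shape))

# A 100 pA probe dwelling 1 us deposits only ~624 electrons per pixel,
# illustrating why fast frames contain limited registration information.
print(round(incident_electrons(100e-12, 1e-6)))  # 624
```

Applying `np.roll` with the estimated shift aligns successive fast frames before summation, trading single-frame SNR against drift immunity.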
Careful choice of beam parameters maximizes the probability of generating signal quanta, but detector type and configuration dramatically affect the level of information that can be extracted. Recently, for example, it has become common for SEMs to have both side and in-lens secondary electron detectors—the former provides strong topographic contrast, while the latter can provide, at suitably short working distances, highly efficient and largely uniform signal collection. However, both are single-point detectors. More complex detectors that partition the signal by energy or position enable much more sophisticated measurements, though at the expense of correspondingly larger doses. Replacing the standard combination of bright-field and annular dark-field detectors in a STEM with an all-field segmented detector138 enables differential phase contrast (DPC) imaging at atomic resolution139 and permits imaging of the fields within, or inner potentials of, a material.140,141 These detectors have been used to measure the electrostatic fields in a p-n junction (e.g., Fig. 4)142 or in-plane magnetic fields.143 This imaging mode does not require the defocus needed for Fresnel/Lorentz microscopy, and the data can be less noisy than those obtained from holography, which requires differentiation of the holographic potential image to determine field strength. It is also important to realize that the results of such measurements may be affected by sample preparation artefacts and the proximity of surfaces.144 For a recent review of field-measurement methods in the (S)TEM, see Zweck.145
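As a minimal sketch of how a segmented detector yields DPC signals, differences of opposite detector halves, normalized by the total collected intensity, approximate the two in-plane beam-deflection components. The quadrant labeling below is an assumed convention for illustration, not the geometry of the detector of ref. 138:

```python
def dpc_components(q_a, q_b, q_c, q_d):
    """Differential phase contrast signals from a four-quadrant detector.
    Assumed (hypothetical) layout: q_a in +x/+y, q_b in -x/+y,
    q_c in -x/-y, q_d in +x/-y. Opposite-half differences, normalized
    by the total intensity, track the probe deflection."""
    total = q_a + q_b + q_c + q_d
    dpc_x = ((q_a + q_d) - (q_b + q_c)) / total  # right half minus left half
    dpc_y = ((q_a + q_b) - (q_c + q_d)) / total  # top half minus bottom half
    return dpc_x, dpc_y
```

Evaluated pixel-by-pixel over a scan, the two maps form a vector field that, for a thin sample, is proportional to the in-plane electrostatic or magnetic field deflecting the probe.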
As the spatial resolution of today’s SEMs approaches that of yesterday’s TEMs,146 many signal collection and detector concepts are being borrowed from the TEM world, which in turn were drawn from high-energy physics. Recently, a variety of pixelated detectors has become available.147–149 The Medipix and Timepix detectors arose out of detector technology created for the Large Hadron Collider (LHC).150 They are hybrid detectors, in which a pixelated detector layer is bonded to a similarly pixelated application-specific integrated circuit (ASIC) readout layer. Both global and individual pixel thresholds can be set. While Si is often used as the detector layer, higher-atomic-number materials can be used for higher-energy quanta, and such systems may also employ detector layers such as multi-channel plates (MCPs) or gas detectors. These hybrid systems provide many different types of information and have been used to generate images of the STEM central diffraction disc, leading to improved DPC imaging of magnetic contrast.151 More generally, engineering such detectors to have a very high dynamic range enables full diffraction patterns to be recorded at each beam position in STEM, which means that post-processing the data can be used to emulate any specific imaging mode, or combination thereof.152 In the SEM, backscattered electrons with energies as low as 3 keV have been detected, and, in an energy-filtered mode made possible by the signal-thresholding capability of these detectors, electron backscatter diffraction patterns (EBSPs) can be collected, for light elements especially, with significantly improved resolution.153 While not yet explored in depth, it is clear that such detectors offer the opportunity to examine backscattered electron signals in the SEM in unprecedented detail with respect to energy and angular distribution.
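The energy-filtering role of the per-pixel threshold can be emulated with a toy event-list model. The data layout assumed here, an array of deposited energies paired with pixel indices, is a simplification for illustration, not the Medipix/Timepix readout format:

```python
import numpy as np

def thresholded_image(energies_keV, pixel_ids, n_pixels, threshold_keV):
    """Counting-mode image from an event list: discard events whose
    deposited energy falls below the threshold, then histogram the
    surviving events into per-pixel counts."""
    keep = energies_keV >= threshold_keV
    return np.bincount(pixel_ids[keep], minlength=n_pixels)
```

Sweeping the threshold and differencing the resulting images is one way to approximate a coarse per-pixel energy spectrum from a counting detector.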
A new generation of back-thinned, radiation-hard CMOS direct electron detectors has resulted in a revolution in the field of structural molecular biology by enabling low-dose cryo-electron microscopy of biomolecules.154 These direct-electron detectors are also beginning to have an impact in materials science, for example in the nanoscale mapping of strain in semiconducting materials155 and in field measurements in individual magnetic nanoparticles.156 The resolution of these detectors is improved because the thinned area eliminates a large fraction of the backscattered electrons that would otherwise be detected in pixels adjacent to the primary electron’s pixel of impact. They have sufficiently fast readout speeds to operate in electron-counting mode at beam currents sufficient to permit sub-pixel point-of-impact calculations that improve the effective detector modulation transfer function (MTF) and detective quantum efficiency (DQE). We also note that in (S)TEM instruments, post-sample beam modification using electron phase plates157–159 or exit-wave reconstruction methods dramatically improves the SNR from weak phase objects, further reducing dose requirements and enhancing their ability to image biomolecules and other low-atomic-number or nanoscale materials.
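The sub-pixel point-of-impact calculation mentioned above can be sketched as an intensity-weighted centroid over the small cluster of pixels produced by a single electron; production counting pipelines use more refined estimators, so this is an illustrative minimum:

```python
import numpy as np

def subpixel_impact(cluster):
    """Sub-pixel (row, col) point of impact of one electron event,
    taken as the intensity-weighted centroid of the cluster of pixels
    over which its deposited charge spread."""
    cluster = np.asarray(cluster, dtype=float)
    rows, cols = np.indices(cluster.shape)
    total = cluster.sum()
    return (rows * cluster).sum() / total, (cols * cluster).sum() / total
```

Assigning each event to a sub-pixel coordinate rather than a whole pixel is what sharpens the effective MTF relative to simple charge integration.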
The rich array of signals available in electron-beam instruments provides exciting opportunities for multimodal imaging, enabling direct correlations between device performance and structure. For example, in heterogeneous nanophotonic devices, combining a cathodoluminescence (CL) signal with a topographic one would enable in-process metrology of heterogeneous-integration device fabrication processes, or allow the correlation of a photonic crystal’s band structure with device dimensions, and would be a useful complement to existing optical schemes.160–162 Similarly, EELS and CL provide complementary views of the behavior of plasmonic structures: EELS can map the electromagnetic local density of states (EMLDOS) along the incident electron trajectory, capturing information about both bright and dark modes, while CL reveals bright modes only.163 While EELS may be viewed as the more powerful and sensitive technique, with signal generation scaling with object volume, it requires an electron-transparent sample substrate and is thus more likely to be useful for understanding device behavior in a research context. By contrast, the CL signal scales as volume squared, motivating the development of highly efficient collection optics,164 but can be collected from devices on solid substrates,165 and, as long as the optically active device is thin, high incident beam energies can be used to minimize interaction-volume effects that would otherwise degrade the spatial resolution.
The ideas discussed above represent just a few of the many new possibilities for electron beam-based metrology. The ever-expanding collection of materials, devices, and architectures for beyond-CMOS technologies will require novel and creative measurement solutions, such as the quantum electron microscope.166 In this review, we have endeavored to indicate where the challenges lie, the current state of the art, and some future directions. Fortunately, sources, electron optics, and detectors are continually improving, enabling the acquisition of more complex data sets with higher resolution and at greater speeds than ever before. Progress is also being made in the development of more rigorous models of beam-sample interactions, enabling more accurate estimates of structure and composition to be made using the minimum possible dose. While the demands on electron beam metrology become ever greater, the sheer diversity of electron-matter interactions and experimental modes suggests that this family of techniques will be more than adequate to meet the future demands of leading-edge device metrology.