The various styles of physics are so diverse that even to an expert eye they can seem to stem from foreign disciplines. Rheology and cosmology, for instance, apparently have little in common. Nevertheless, essentially all physicists share certain beliefs. Among these are the primacy of quantitative reasoning and a general agreement about concepts such as space, time, and mass (assuming, at least, that no black hole lurks nearby).
If physicists were philosophers, they would feel frustrated because their fundamental concepts lack satisfying definitions. But physicists are physicists, and the conventional operational definitions—time is what one measures with a clock, space is what one measures with a meter stick, and so on—are generally good enough to get the show on the road. To assuage any misgivings about such shaky underpinnings, time and mass have been embodied in elegant primary standards. These are artifacts of beauty, such as the international kilogram—a platinum–iridium cylinder within a set of glistening glass domes that reposes in a laboratory in Sèvres, near Paris—or operational definitions of stunning accuracy, such as the definition of the second—a certain number of cycles of the hyperfine frequency of cesium—that can be reproduced to almost one part in 10¹⁵.
Operational definitions of the fundamental quantities of physics are eventually codified in legal definitions. Most physicists are deeply unconcerned with, if not totally unaware of, legal definitions, but because these definitions embody fundamental concepts, it would be slightly embarrassing if one should depart from reality. That has happened. For instance, in 1960, the 11th General Conference on Weights and Measures decreed that the second is a certain fraction of the tropical year 1900. Apparently the definition was satisfactory for astronomers, but it must have riled at least a few physicists. If time is what is measured with a clock, and if the clock no longer exists, then time is in trouble.
The General Conference on Weights and Measures, which makes the final judgment on scientific standards, is an international body. As with all negotiations by international bodies, reaching agreement on a primary standard is a protracted business, sometimes so protracted that the legal definition is scientifically obsolete the moment it is adopted. This was the situation for the second in 1960. Atomic clocks, invented in the 1950s, had already left astronomical timekeeping in the dust. Things were set right in 1967, when the 13th General Conference on Weights and Measures redefined the second in terms of the hyperfine frequency of cesium. Fortunately, the concept of time did not suffer seriously from the second’s seven-year metaphysical fling.
Space has also had its share of hard knocks. The meter first saw the light of day in August 1793, when the Republican Government of France decreed it to be one ten-millionth of the length of the Earth’s quadrant (the distance from the pole to the equator) along the meridian passing through Paris. Surveyors set to work, and in a few years several platinum bars were put forward to embody the meter. Subsequent surveys showed that things were not as accurate as one might hope. So in 1889, the 1st General Conference on Weights and Measures redefined the meter to be the distance between engraved lines on a platinum–iridium bar that would rest in Sèvres. However, two years earlier, Albert A. Michelson had discovered how to adapt his interferometer to measure distance to a fraction of the wavelength of light. His method was so precise that in 1889 the meter was obsolete at the moment of its redefinition.
In spite of the superiority of Michelson’s methods, the meter-bar in Sèvres remained the legal standard for 71 years. Finally, in 1960, the meter was redefined to be a certain number of wavelengths of a particularly sharp and stable spectral line, the red line of krypton-86. As luck would have it, that was the very year the laser was invented. Lasers and laser spectroscopy rapidly rendered the new definition of the meter obsolete. The advances were spectacular, in fact so spectacular that they culminated in a disaster for the meter: It got demoted from a primary standard to the inferior rank of a derived unit.
Here is what happened. Early on, it was discovered that lasers could be stabilized on certain molecular transitions so as to achieve a frequency stability and reproducibility that could exceed one part in 10¹⁰. In the 1970s, methods were developed for comparing the frequency of a laser to the frequency of an atomic clock. This immediately provided a more accurate way to measure the speed of light: Simply multiply a laser’s frequency and wavelength. The accuracy of c started to soar, but then it hit a brick wall. Asymmetry in the krypton line limited the accuracy to 4 parts in 10⁹. Furthermore, even if a more ideal spectral line could be found, the interferometers used to compare wavelengths could not achieve a precision much better than one part in 10¹⁰ because of diffraction effects. In short, the accuracy with which the speed of light could be measured was limited by the accuracy with which the meter could be experimentally realized.
Underlying all measurements of the speed of light is the assumption that it has a unique and universal value that connects space and time. Thus, of the three quantities, the meter, the second, and c, two can be defined independently—whichever two are most convenient. It was apparent that the meter was not convenient. One could achieve higher precision in the measurement of length by defining the speed of light to have a convenient value and letting the meter be some fraction of the distance that light travels in one second. Consequently, in 1983, the 17th General Conference on Weights and Measures decreed that the speed of light c is exactly 299 792 458 m/s. Hence, the meter is now a derived unit, defined to be the distance light travels in 1/c seconds.
With this new definition, the most accurate way to find the wavelength λ of light from a laser, in meters, is to measure the laser’s frequency f and simply use the relation λ = c/f. There was, however, one difficulty. Measuring the frequency of a laser, that is, measuring the frequency of light, was close to impossible.
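For concreteness, here is a minimal sketch of that conversion in Python; the laser frequency below is an illustrative value of my own choosing, not a measured one.

```python
# Under the 1983 definition, c is exact by decree, so converting a measured
# frequency to a wavelength introduces no uncertainty from c itself.
C = 299_792_458  # speed of light in m/s (exact, by definition)

def wavelength_m(frequency_hz: float) -> float:
    """Vacuum wavelength in meters: lambda = c / f."""
    return C / frequency_hz

f = 617e12  # Hz; an illustrative frequency near the blue end of the visible
print(f"lambda = {wavelength_m(f) * 1e9:.1f} nm")  # -> lambda = 485.9 nm
```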
In the past, optical frequencies were measured with what is called an optical frequency chain. The frequency of a signal derived from an atomic clock is multiplied by a modest factor, perhaps five or ten, and then a laser that happens to operate at a nearby frequency is locked to it with a known frequency offset. That laser then serves as the frequency source for the next stage of the chain, and so on up. These chains are so complicated that only a few laboratories have ever constructed one, and frequencies have been measured for a mere handful of lines. The new definition of the meter was, for many purposes, useless.
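To convey the scale of the problem, here is a rough sketch with idealized multiply-by-ten stages of my own choosing; real chains used an assortment of harmonic generators, transfer oscillators, and lasers, with various multiplication factors.

```python
# How far an optical frequency chain must climb: from the cesium clock
# frequency (~9.2 GHz) up to a typical optical frequency (~500 THz).
CESIUM_HZ = 9_192_631_770   # cesium hyperfine frequency, Hz
OPTICAL_HZ = 5.0e14         # representative visible-light frequency, Hz

f = float(CESIUM_HZ)
stages = 0
while f < OPTICAL_HZ:
    f *= 10                 # one idealized multiply-by-ten stage
    stages += 1
print(stages)  # -> 5: nearly five decades in frequency, each stage hand-built
```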
The need for a practical way to measure the frequency of light was evident from the earliest years of laser spectroscopy. The real urgency, however, lay not in the definition of length, which is, after all, a legal matter. An optical frequency meter would unlock a treasure chest of ultraprecise spectroscopy and fundamental tests, and open the way to a new breed of optical atomic clocks. It would be of immediate value to a small community of atomic physicists, myself included, who were frustrated by having techniques for observing ultrasharp spectral lines but no way to measure their frequencies. It would be such a radical advance that one could reasonably expect it to launch a new technology.
One day, inspired by the success of the Longitude Prize in the 18th century (and perhaps by lightheadedness during a mountain walk), I toyed with the idea of establishing a prize for a practical way to measure the frequency of light. I dropped the idea for two reasons. The first was that, from previous experience in fundraising, I was afraid the prize money would eventually have to come from my own pocket. The second was that I had a hunch that Theodor Hänsch, Munich’s wizard of lasers and laser spectroscopy, would solve the problem with or without a prize. I was right.
Frequency comb
Hänsch’s solution, a device called a frequency comb generator, was recently described in Physics Today (June 2000, page 19) and in this issue by James C. Bergquist, Steven R. Jefferts, and David J. Wineland (page 37). Consequently, I shall merely sketch its operation. Radiation from a mode-locked laser emerges in a stream of pulses separated by the time T that light takes to make a round trip in the laser cavity. The spectrum of such a signal is a comb of harmonics separated by the pulse repetition frequency f = 1/T. The width of the spectrum is the reciprocal of the duration of each pulse, which is typically tens of femtoseconds. Passing the radiation through an optical fiber broadens the spectrum further through nonlinear effects, so that it extends from the infrared across the optical spectrum.
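A back-of-the-envelope sketch of those scales, using representative numbers that I have chosen for illustration:

```python
# Scales of a femtosecond frequency comb. Lines are spaced by the pulse
# repetition frequency f_rep = 1/T; the comb spans roughly 1/tau in frequency.
f_rep = 1.0e9    # repetition frequency, Hz (so T = 1 ns per round trip)
tau = 30e-15     # pulse duration, s (tens of femtoseconds)

T = 1 / f_rep                 # round-trip time of the laser cavity
bandwidth = 1 / tau           # spectral width ~ reciprocal of pulse duration
n_lines = bandwidth / f_rep   # number of comb lines across the spectrum

print(f"T = {T * 1e9:.0f} ns, bandwidth ~ {bandwidth / 1e12:.0f} THz, "
      f"about {n_lines:,.0f} comb lines")
# -> T = 1 ns, bandwidth ~ 33 THz, about 33,333 comb lines
```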
The idea of generating harmonics from a mode-locked laser traces back to the early days of laser spectroscopy, but it was generally believed that fluctuations would destroy the coherence between the harmonics. Hänsch devised a simple method for measuring the coherence and found that it could be incredibly high. If the round-trip time T is locked to an atomic clock, then the separation between lines in the comb is precisely the clock frequency. Thus the comb provides a “ruler” for measuring frequency intervals. Furthermore, if the comb spans an octave, it can be locked so that the absolute frequency of each line is fixed at an exact multiple of the clock frequency. With such a device, the way is open to measuring essentially any optical frequency with all the precision that the best atomic clock can provide.
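To make the locking scheme concrete, in standard comb notation: line n sits at f_n = n·f_rep + f_0, where f_0 is an offset common to all lines. If the comb spans an octave, frequency-doubling a line from the low end and beating it against the line at twice its index yields 2f_n − f_2n = f_0; once f_rep and f_0 are both locked to the clock, every line is fixed absolutely. A minimal sketch, with illustrative numbers of my own choosing:

```python
# Self-referencing an octave-spanning comb (illustrative numbers throughout).
f_rep = 1.0e9   # repetition frequency, locked to the atomic clock, Hz
f0 = 0.35e9     # carrier-envelope offset frequency, Hz (unknown a priori)

def line(n: int) -> float:
    """Frequency of comb line n: f_n = n * f_rep + f0."""
    return n * f_rep + f0

n = 250_000                        # a comb line near 250 THz
beat = 2 * line(n) - line(2 * n)   # the f-to-2f beat note
print(beat == f0)                  # True: the beat note reveals f0 directly

# With f_rep and f0 in hand, any optical frequency follows from a single
# microwave-domain beat against the nearest comb line (hypothetical values):
f_laser = line(473_612) + 0.12e9   # comb line plus measured beat, Hz
print(f"f_laser = {f_laser / 1e12:.6f} THz")  # -> f_laser = 473.612470 THz
```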
Hänsch demonstrated the power of the comb generator by achieving an accuracy of 2 parts in 10¹⁴ in his very first application, the 1S–2S two-photon transition in hydrogen. This exceeded the accuracy of all previous measurements by about a factor of ten, and it was achieved with an apparatus vastly simpler than any optical frequency chain.
The creation of the comb generator electrified the community. Optical atomic clocks, with the potential for vast improvements over today’s microwave clocks, were finally within reach. Progress has been incredibly rapid, so rapid that an optical atomic clock has just been created. A group at NIST led by Bergquist and Leo Hollberg used a frequency comb to measure an ultraviolet transition in a trapped mercury ion to an accuracy of 1 part in 10¹⁴. Then they reversed the operation: By locking the comb to the transition, they created an optical atomic clock with an output at a microwave frequency. The performance is almost as good as that of the best of today’s atomic clocks, and major improvements are close at hand.
To return to the matter of the meter, we might speculate on issues with which future meetings of the General Conference on Weights and Measures may have to deal. It seems likely that the second will be redefined as a certain number of cycles of some optical transition. But when atomic clocks start achieving accuracies in the range of one part in 10¹⁶ to one part in 10¹⁸, the new definition of the meter—the distance light travels in 1/c seconds—will need to be reexamined. Because of the gravitational redshift, time varies with height at the surface of Earth by about one part in 10¹⁶ per meter. Comparisons of atomic clocks between the US and Europe already correct for this effect, as does the Global Positioning System. But with the much higher precision of optical atomic clocks, comparisons of time, and hence of the meter, will need to make careful reference to local gravitational potentials. In principle, this is a straightforward matter. But what if it transpires that at some level the speed of light is not a universal constant? Suppose some underlying anisotropy in space is discovered, so that the relation between space and time needs to be fundamentally revised. It is difficult to fathom the consequences of such a discovery, but one prediction can be made with confidence: Some future General Conference on Weights and Measures will have to take up once again the matter of the meter.
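As a check on the redshift figure quoted above: the fractional frequency shift between clocks whose heights differ by Δh is gΔh/c², and the arithmetic, with standard constants, is a one-liner.

```python
# Weak-field gravitational redshift between clocks at heights differing by
# delta_h: delta_f / f = g * delta_h / c**2.
g = 9.81           # m/s^2, surface gravity of Earth
c = 299_792_458    # m/s, exact
delta_h = 1.0      # m

print(f"{g * delta_h / c**2:.2e} per meter")  # -> 1.09e-16 per meter
```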
I thank John L. Hall for helpful comments. Relevant references are R. Holzwarth et al., Phys. Rev. Lett. 85, 2264 (2000), http://dx.doi.org/10.1103/PhysRevLett.85.2264; M. Niering et al., Phys. Rev. Lett. 84, 5496 (2000), http://dx.doi.org/10.1103/PhysRevLett.84.5496; and S. A. Diddams et al., Phys. Rev. Lett. 84, 5102 (2000), http://dx.doi.org/10.1103/PhysRevLett.84.5102. The optical atomic clock is described in Th. Udem et al., LANL preprint server, physics/0101029.
Daniel Kleppner is the Lester Wolfe Professor of Physics at MIT, and the director of the Center for Ultracold Atoms.