For millennia, cosmology has been a theorist’s domain, where elegant theory was only occasionally endangered by inconvenient facts. Early in the 20th century, Albert Einstein gave us new conceptual tools to rigorously address the questions of the origins, evolution, and fate of the universe. In recent years, technology has developed to the point where these concepts from general relativity can be substantiated and elaborated by measurements. For example, measurement of the remnant glow from the hot, dense beginnings of the expanding universe—the cosmic microwave background—is yielding increasingly detailed data about the first half-million years and the overall geometry of the cosmos (see the news story on page 21 of this issue).
The standard model of particle physics has also begun to play a prominent role in cosmology. The widely accepted idea of exponential inflation in the immediate aftermath of the Big Bang was built on the predicted effect of certain putative particle fields and potentials on the cosmic expansion. Measuring the history of cosmic expansion is no easy task, but in recent years, a specific variety of supernovae, type Ia, has given us a first glimpse at that history—and surprised us with an unexpected plot twist.
Searching for a standard candle
In principle, the expansion history of the cosmos can be determined quite easily, using as a “standard candle” any distinguishable class of astronomical objects of known intrinsic brightness that can be identified over a wide distance range. As the light from such beacons travels to Earth through an expanding universe, the cosmic expansion stretches not only the distances between galaxy clusters, but also the very wavelengths of the photons en route. By the time the light reaches us, the spectral wavelength λ has thus been redshifted by precisely the same incremental factor z ≡ Δλ/λ by which the cosmos has been stretched in the time interval since the light left its source. That time interval is the object’s distance from Earth divided by the speed of light, and the distance can be determined by comparing the object’s apparent brightness to that of a nearby standard of the same class of astrophysical objects.
The recorded redshift and brightness of each such object thus provide a measurement of the total integrated expansion of the universe since the time the light was emitted. A collection of such measurements, over a sufficient range of distances, would yield an entire historical record of the universe’s expansion.
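To make this bookkeeping concrete, here is a minimal sketch in Python of the two measurements just described. The wavelengths, fluxes, and distances are purely illustrative numbers, not data from any real supernova.

```python
import math

def redshift(lambda_observed, lambda_rest):
    """Fractional wavelength stretch z = (lambda_obs - lambda_rest) / lambda_rest."""
    return (lambda_observed - lambda_rest) / lambda_rest

def relative_distance(apparent_flux, reference_flux, reference_distance):
    """Inverse-square law for a standard candle: an object 10,000 times fainter
    than an identical reference is 100 times farther away."""
    return reference_distance * math.sqrt(reference_flux / apparent_flux)

# Hypothetical numbers: a spectral feature emitted at 6150 angstroms but observed
# at 9225 angstroms gives z = 0.5, i.e. the cosmos has stretched by 50% since the
# light left the supernova.
z = redshift(9225.0, 6150.0)

# If that supernova appears 10,000 times fainter than an identical one known to
# lie 100 Mpc away, it is (ignoring cosmological corrections) about 100 times
# more distant.
d = relative_distance(apparent_flux=1.0, reference_flux=1.0e4, reference_distance=100.0)
print(f"z = {z:.2f}, distance ~ {d:.0f} Mpc")
```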
Conceptually, this scheme is a remarkably straightforward means to a profound prize: an empirical account of the growth of our universe. A spectroscopically distinguishable class of objects with determinable intrinsic brightness would do the trick. In Edwin Hubble’s discovery of the cosmic expansion in the 1920s, he used entire galaxies as standard candles. But galaxies, coming in many shapes and sizes, are difficult to match against a standard brightness. They can grow fainter with time, or brighter—by merging with other galaxies. In the 1970s, it was suggested that the brightest member of a galaxy cluster might serve as a reliable standard candle. But in the end, all proposed distant galactic candidates were too susceptible to evolutionary change.
As early as 1938, Walter Baade, working closely with Fritz Zwicky, pointed out that supernovae were extremely promising candidates for measuring the cosmic expansion. Their peak brightness seemed to be quite uniform, and they were bright enough to be seen at extremely large distances. 1 In fact, a supernova can, for a few weeks, be as bright as an entire galaxy. Over the years, however, as more and more supernovae were measured, it became clear that they were a rather heterogeneous group with a wide range of intrinsic peak brightnesses.
In the early 1980s, a new subclassification of supernovae emerged. Supernovae with no hydrogen features in their spectra had previously all been classified simply as type I. Now this class was subdivided into types Ia and Ib, depending on the presence or absence of a silicon absorption feature at 6150 Å in the supernova’s spectrum. 2 With that minor improvement in typology, an amazing consistency among the type Ia supernovae became evident. Their spectra matched feature-by-feature, as did their “light curves”—the plots of waxing and waning brightness in the weeks following a supernova explosion. 3,4
The uniformity of the type Ia supernovae became even more striking when their spectra were studied in detail as they brightened and then faded. First, the outermost parts of the exploding star emit a spectrum that’s the same for all typical type Ia supernovae, indicating the same elemental densities, excitation states, velocities, and so forth. Then, as the exploding ball of gas expands, the outermost layers thin out and become transparent, letting us see the spectral signatures of conditions further inside. Eventually, if we watch the entire time series of spectra, we get to see indicators that probe almost the entire explosive event. It is impressive that the type Ia supernovae exhibit so much uniformity down to this level of detail. Such a “supernova CAT-scan” can be difficult to interpret. But it’s clear that essentially the same physical processes are occurring in all of these explosions.
The detailed uniformity of the type Ia supernovae implies that they must have some common triggering mechanism (see the box on page 56). Equally important, this uniformity provides standard spectral and light-curve templates that offer the possibility of singling out those supernovae that deviate slightly from the norm. The complex natural histories of galaxies had made them difficult to standardize. With type Ia supernovae, however, we saw the chance to avoid such problems. We could examine the rich stream of observational data from each individual explosion and match spectral and light-curve fingerprints to recognize those that had the same peak brightness.
Within a few years of their classification, type Ia supernovae began to bear out that expectation. First, David Branch and coworkers at the University of Oklahoma showed that the few type Ia outliers—those with peak brightness significantly different from the norm—could generally be identified and screened out. 4 Either their spectra or their “colors” (the ratios of intensity seen through two broadband filters) deviated from the templates. The anomalously fainter supernovae were typically redder or found in highly inclined spiral galaxies (or both). Many of these were presumably dimmed by dust, which absorbs more blue light than red.
Soon after Branch’s work, Mark Phillips at the Cerro Tololo Interamerican Observatory in Chile showed that the type Ia brightness outliers also deviated from the template light curve—and in a very predictable way. 5 The supernovae that faded faster than the norm were fainter at their peak, and the slower ones were brighter (see figure 1). In fact, one could use the light curve’s time scale to predict peak brightness and thus slightly recalibrate each supernova. But the great majority of type Ia supernovae, as Branch’s group showed, passed the screening tests and were, in fact, excellent standard candles that needed no such recalibration. 6
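Schematically, the recalibration Phillips identified amounts to a simple brightness correction keyed to how fast the light curve declines. The Python sketch below is meant only to convey the idea; the slope and reference decline rate are placeholder values, not the published calibration.

```python
def standardized_peak_magnitude(observed_peak_mag, decline_rate,
                                reference_decline=1.1, slope=0.8):
    """Schematic light-curve-shape correction: supernovae that fade faster
    (larger decline_rate, e.g. magnitudes lost in the 15 days after peak) are
    intrinsically fainter, so their observed peak magnitude is shifted back
    toward the common standard-candle value.  The numbers 1.1 and 0.8 are
    placeholders, not the published calibration."""
    return observed_peak_mag - slope * (decline_rate - reference_decline)

# A fast-declining supernova observed to peak at magnitude 24.0, losing 1.7 mag
# in the 15 days after maximum, would be assigned a standardized peak of ~23.5.
print(standardized_peak_magnitude(24.0, 1.7))
```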
Cosmological distances
When the veteran Swiss researcher Gustav Tammann and his student Bruno Leibundgut first reported the amazing uniformity of type Ia supernovae, there was immediate interest in trying to use them to determine the Hubble constant, H0, which measures the present expansion rate of the cosmos. That could be done by finding and measuring a few type Ia supernovae just beyond the nearest clusters of galaxies, that is, explosions that occurred some 100 million years ago. An even more challenging goal lay in the tantalizing prospect that we could find such standard-candle supernovae more than ten times farther away and thus sample the expansion of the universe several billion years ago. Measurements using such remote supernovae might actually show the expected slowing of the expansion rate by gravity. Because that deceleration rate would depend on the cosmic mean mass density ρm, we would, in effect, be weighing the universe.
If mass density is, as was generally supposed a decade ago, the primary energy constituent of the universe, then the measurement of the changing expansion rate would also determine the curvature of space and tell us whether the cosmos is finite or infinite. Furthermore, the fate of the universe might be said to hang in the balance: If, for example, we measured a cosmic deceleration big enough to imply a ρm exceeding the “critical density” ρc (roughly 10⁻²⁹ g/cm³), that would indicate that the universe will someday stop expanding and collapse toward an apocalyptic “Big Crunch.”
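The critical density quoted above follows directly from the present expansion rate. As a quick back-of-the-envelope check in Python, assuming a representative Hubble constant of 70 km/s per megaparsec (a value not given in the text), ρc = 3H0²/8πG indeed comes out near 10⁻²⁹ g/cm³, just a few hydrogen atoms per cubic meter.

```python
import math

# Physical constants in SI units
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22      # one megaparsec in meters

# Assumed (representative) present expansion rate: 70 km/s per megaparsec
H0 = 70.0 * 1000.0 / MPC_IN_M          # in s^-1

# Critical density rho_c = 3 H0^2 / (8 pi G)
rho_c_si = 3.0 * H0**2 / (8.0 * math.pi * G)    # kg/m^3
rho_c_cgs = rho_c_si * 1.0e-3                   # g/cm^3
print(f"critical density ~ {rho_c_cgs:.1e} g/cm^3")   # about 9e-30 g/cm^3
```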
All this sounded enticing: fundamental measurements made with a new distance standard bright enough to be seen at cosmological distances. The problem was that type Ia supernovae are a pain in the neck, to be avoided if anything else would do. At the time, a brief catalog of reasons not to pursue cosmological measurement with type Ia supernovae might have begun like this:
▸ They are rare. A typical galaxy hosts only a couple of type Ia explosions per millennium.
▸ They are random, giving no advance warning of where to look. But the scarce observing time at the world’s largest telescopes, the only tools powerful enough to measure these most distant supernovae adequately, is allocated on the basis of research proposals written more than six months in advance. Even the few successful proposals are granted only a few nights per semester. The possible occurrence of a chance supernova doesn’t make for a compelling proposal.
▸ They are fleeting. After exploding, they must be discovered promptly and measured multiple times within a few weeks, or they will already have passed the peak brightness that is essential for calibration. It’s too late to submit the observing proposal after you’ve discovered the supernova.
This was a classic catch-22. You couldn’t preschedule telescope time to identify a supernova’s type or follow it up if you couldn’t guarantee one. But you couldn’t prove a technique for guaranteeing type Ia supernova discoveries without pre-scheduling telescope time to identify them spectroscopically.
The list of problems didn’t stop there. The increasing redshifting of supernova spectra with distance means that the brightness of a very distant supernova measured through a given filter is hard to compare with the brightness of a much closer supernova measured through the same filter. (Astronomers call this the K-correction problem.) Dust in a supernova’s host galaxy can dim the explosion’s light. And there were doubts that the spectra of faint distant supernovae could be reliably identified as type Ia.
In fact, the results from the first search for very distant type Ia supernovae were not encouraging. In the late 1980s, a Danish team led by Hans Nørgaard-Nielsen found only one type Ia supernova in two years of intensive observing, and that one was already several weeks past its peak.
A systematic solution
Daunting as these problems appeared, it seemed crazy to let the logistics stand in the way, when the tools were at hand for measuring such fundamental properties of the universe: its mass density, its large-scale curvature, and its fate. After all, we didn’t have to build anything nearly as formidable as the gargantuan accelerators and detectors needed for particle physics. In a project that Carl Pennypacker and I began in Richard Muller’s group at the University of California, Berkeley, just before the Danish team’s 1988 supernova discovery, we started by building a wide-field imager for the Anglo-Australian Observatory’s 4-meter telescope. The imager would let us study thousands of distant galaxies in a night, upping the odds of a supernova discovery. Contemporary computing and networking advances just barely made possible the next-day analysis that would let us catch supernovae as they first brightened.
Finding our first supernova in 1992, we also found a solution to the K-correction problem by measuring the supernova in a correspondingly redshifted filter. By playing this trick with two redshifted filter bands, one could also expect to recognize dust absorption by its wavelength dependence. But we still hadn’t solved the catch-22 telescope scheduling problem. We couldn’t preschedule follow-up observations of our first supernova, so we couldn’t obtain its identifying spectrum.
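The trick of observing through a correspondingly redshifted filter can be illustrated with a one-line calculation: every observed wavelength is (1 + z) times the emitted wavelength, so a filter’s passband maps back to a bluer rest-frame band. The filter ranges below are rough, hypothetical values chosen only to show how the bands line up.

```python
def restframe_band(filter_min_nm, filter_max_nm, z):
    """Rest-frame wavelength range sampled when a source at redshift z is observed
    through a filter spanning [filter_min_nm, filter_max_nm] nanometers."""
    return filter_min_nm / (1.0 + z), filter_max_nm / (1.0 + z)

# A z = 0.5 supernova observed through a red filter spanning roughly 570-720 nm is
# seen in the same rest-frame light (380-480 nm) that a blue filter would collect
# from a nearby supernova, so the two brightnesses can be compared directly.
print(restframe_band(570.0, 720.0, 0.5))   # -> (380.0, 480.0)
```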
In retrospect, the solution we found seems obvious—though much effort was needed to implement it and prove it practical. By specific timing of the requested telescope schedules (see figure 2), we could guarantee that our wide-field imager would harvest a batch of about a dozen freshly exploded supernovae, all discovered on a pre-specified observing date during the dark phase of the moon. (A bright moon is an impediment to the follow-up observation.) We first demonstrated this supernovae-on-demand methodology in 1994. From then on, proposals for time at major ground-based telescopes could specify discovery dates and roughly how many supernovae would be found and followed up. This approach also made it possible to use the Hubble Space Telescope for follow-up light-curve observations, because we could specify in advance the one-square-degree patch of sky in which our wide-field imager would find its catch of supernovae. Such specificity is a requirement for advance scheduling of the HST. By now, the Berkeley team had grown to include some dozen collaborators around the world, and was called the Supernova Cosmology Project (SCP).
A community effort
Meanwhile, the whole supernova community was making progress with the understanding of relatively nearby supernovae. Mario Hamuy and coworkers at Cerro Tololo took a major step forward by finding and studying many nearby (low-redshift) type Ia supernovae. 7 The resulting beautiful data set of 38 supernova light curves (some shown in figure 1) made it possible to check and improve on the results of Branch and Phillips, showing that type Ia peak brightness could be standardized. 6,7
The new supernovae-on-demand techniques that permitted systematic study of distant supernovae and the improved understanding of brightness variations among nearby type Ia’s spurred the community to redouble its efforts. A second collaboration, called the High-Z Supernova Search and led by Brian Schmidt of Australia’s Mount Stromlo Observatory, was formed at the end of 1994. The team included many veteran supernova experts. The two rival teams raced each other over the next few years—occasionally covering for each other with observations when one of us had bad weather—as we all worked feverishly to find and study the guaranteed on-demand batches of supernovae.
At the beginning of 1997, the SCP team presented the results for our first seven high-redshift supernovae. 8 These first results demonstrated the cosmological analysis techniques from beginning to end. They were suggestive of an expansion slowing down at about the rate expected for the simplest inflationary Big Bang models, but with error bars too large to permit definite conclusions.
By the end of the year, the error bars began to tighten, as both groups now submitted papers with a few more supernovae, showing evidence for much less than the expected slowing of the cosmic expansion. 9–11 This was beginning to be a problem for the simplest inflationary models with a universe dominated by its mass content.
What’s wrong with faint supernovae?
The faintness—or distance—of the high-redshift supernovae in figure 3 was a dramatic surprise. In the simplest cosmological models, the expansion history of the cosmos is determined entirely by its mass density. The greater the density, the more the expansion is slowed by gravity. Thus, in the past, a high-mass-density universe would have been expanding much faster than it does today. So one shouldn’t have to look far back in time to especially distant (faint) supernovae to find a given integrated expansion (redshift).
Conversely, in a low-mass-density universe one would have to look farther back. But there is a limit to how low the mean mass density could be. After all, we are here, and the stars and galaxies are here. All that mass surely puts a lower limit on how far—that is, to what level of faintness—we must look to find a given redshift. The high-redshift supernovae in figure 3 are, however, fainter than would be expected even for an empty cosmos.
If these data are correct, the obvious implication is that the simplest cosmological model must be too simple. The next simplest model might be one that Einstein entertained for a time. Believing the universe to be static, he tentatively introduced into the equations of general relativity an expansionary term he called the “cosmological constant” (Λ) that would compete against gravitational collapse. After Hubble’s discovery of the cosmic expansion, Einstein famously rejected Λ as his “greatest blunder.” In later years, Λ came to be identified with the zero-point vacuum energy of all quantum fields.
It turns out that invoking a cosmological constant allows us to fit the supernova data quite well. (Perhaps there was more insight in Einstein’s blunder than in the best efforts of ordinary mortals.) In 1995, my SCP colleague Ariel Goobar and I had found that, with a sample of type Ia supernovae spread over a sufficiently wide range of distances, it would be possible to separate out the competing effects of the mean mass density and the vacuum-energy density. 14
The best fit to the 1998 supernova data (see figures 3 and 4) implies that, in the present epoch, the vacuum energy density ρΛ is larger than the energy density attributable to mass (ρmc²). Therefore, the cosmic expansion is now accelerating. If the universe has no large-scale curvature, as the recent measurements of the cosmic microwave background strongly indicate, we can say quantitatively that about 70% of the total energy density is vacuum energy and 30% is mass. In units of the critical density ρc, one usually writes this result as ΩΛ ≡ ρΛ/ρc ≈ 0.7 and Ωm ≡ ρm/ρc ≈ 0.3.
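To see how a supernova’s apparent brightness constrains these densities, the following Python sketch computes the predicted brightness-redshift relation for a spatially flat universe containing only nonrelativistic matter and a cosmological constant. It is a simplified stand-in for the teams’ actual fitting machinery; the 70/30 split above is used as the default, and H0 = 70 km/s/Mpc is an assumed value.

```python
import math

C_KM_S = 299792.458   # speed of light in km/s

def luminosity_distance_mpc(z, omega_m=0.3, omega_lambda=0.7, h0=70.0, steps=2000):
    """Luminosity distance (in Mpc) for a flat universe with matter and a
    cosmological constant, via trapezoidal integration of dz'/E(z') where
    E(z') = sqrt(omega_m (1+z')^3 + omega_lambda)."""
    def E(zp):
        return math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_lambda)
    dz = z / steps
    integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
                   for i in range(steps))
    return (1.0 + z) * (C_KM_S / h0) * integral

def distance_modulus(z, **cosmology):
    """Apparent minus absolute magnitude for a standard candle at redshift z."""
    d_pc = luminosity_distance_mpc(z, **cosmology) * 1.0e6
    return 5.0 * math.log10(d_pc / 10.0)

# At z = 0.5, the 70% vacuum / 30% matter model predicts a standard candle roughly
# 0.4 magnitudes fainter than a matter-only (omega_m = 1) model would.
print(distance_modulus(0.5) - distance_modulus(0.5, omega_m=1.0, omega_lambda=0.0))
```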
What makes a type Ia supernova?
A plausible, though unconfirmed, scenario would explain how all type Ia supernovae come to be so much alike, given the varied range of stars they start from. A lightweight star like the Sun uses up its nuclear fuel in 5 or 10 billion years. It then shrinks to an Earth-sized ember, a white dwarf, with its mass (mostly carbon and oxygen) supported against further collapse by electron degeneracy pressure. Then it begins to quietly fade away.
But the story can have a more dramatic finale if the white dwarf is in a close binary orbit with a large star that is still actively burning its nuclear fuel. If conditions of proximity and relative mass are right, there will be a steady stream of material from the active star slowly accreting onto the white dwarf. Over millions of years, the dwarf’s mass builds up until it reaches the critical mass (near the Chandrasekhar limit, about 1.4 solar masses) that triggers a runaway thermonuclear explosion—a type Ia supernova.
This slow, relentless approach to a sudden cataclysmic conclusion at a characteristic mass erases most of the original differences among the progenitor stars. Thus the light curves (see figure 1) and spectra of all type Ia supernovae are remarkably similar. The differences we do occasionally see presumably reflect variations on the common theme—including differences, from one progenitor star to the next, of accretion and rotation rates, or different carbon-to-oxygen ratios.
Why not a cosmological constant?
The story might stop right here with a happy ending—a complete physics model of the cosmic expansion—were it not for a chorus of complaints from the particle theorists. The standard model of particle physics has no natural place for a vacuum energy density of the modest magnitude required by the astrophysical data. The simplest estimates would predict a vacuum energy 10¹²⁰ times greater. (In supersymmetric models, it’s “only” 10⁵⁵ times greater.) So enormous a Λ would have engendered an acceleration so rapid that stars and galaxies could never have formed. Therefore it has long been assumed that there must be some underlying symmetry that precisely cancels the vacuum energy. Now, however, the supernova data appear to require that such a cancellation would have to leave a remainder of about one part in 10¹²⁰. That degree of fine tuning is most unappealing.
The cosmological constant model requires yet another fine tuning. In the cosmic expansion, mass density becomes ever more dilute. Since the end of inflation, it has fallen by very many orders of magnitude. But the vacuum energy density ρΛ, a property of empty space itself, stays constant. It seems a remarkable and implausible coincidence that the mass density, just in the present epoch, is within a factor of 2 of the vacuum energy density.
Given these two fine-tuning coincidences, it seems likely that the standard model is missing some fundamental physics. Perhaps we need some new kind of accelerating energy—a “dark energy” that, unlike Λ, is not constant. Borrowing from the example of the putative “inflaton” field that is thought to have triggered inflation, theorists are proposing dynamical scalar-field models and other even more exotic alternatives to a cosmological constant, with the goal of solving the coincidence problems. (See the Reference Frame article by Michael Turner on page 10 of this issue.)
The experimental physicist’s life, however, is dominated by more prosaic questions: “Where could my measurement be wrong, and how can I tell?” Crucial questions of replicability were answered by the striking agreement between our results and those of the competing team, but there remain the all-important questions of systematic uncertainties. Most of the two groups’ efforts have been devoted to hunting down these systematics. 15,16 Could the faintness of the supernovae be due to intervening dust? The color measurements that would show color-dependent dimming for most types of dust indicate that dust is not a major factor. 12,13 Might the type Ia supernovae have been intrinsically fainter in the distant past? Spectral comparisons have, thus far, revealed no distinction between the exploding atmospheres of nearby and more distant supernovae. 9,12
Another test of systematics is to look for even more distant supernovae, from the time when the universe was so much more dense that ρm dominated over the dark energy and was thus still slowing the cosmic expansion. Supernovae from that decelerating epoch should not get as faint with increasing distance as they would if dust or intrinsic evolutionary changes caused the dimming. The first few supernovae studied at redshifts beyond z = 1 have already begun to constrain these systematic uncertainties. 17 (See Physics Today, June 2001, page 17.)
By confirming the flat geometry of the cosmos, the recent measurements of the cosmic microwave background have also contributed to confidence in the accelerating-universe results. Without the extra degree of freedom provided by possible spatial curvature, one would have to invoke improbably large systematic error to negate the supernova results. And if we include the low ρm estimates based on inventory studies of galaxy clusters, the Ωm-ΩΛ parameter plane shows a reassuring overlap for the three independent kinds of cosmological observations (see figure 5).
Pursuing the elusive dark energy
The dark energy evinced by the accelerating cosmic expansion grants us almost no clues to its identity. Its tiny density and its feeble interactions presumably preclude identification in the laboratory. By construction, of course, it does affect the expansion rate of the universe, and different dark-energy models imply different expansion rates in different epochs. So we must hunt for the fingerprints of dark energy in the fine details of the history of cosmic expansion.
The wide-ranging theories of dark energy are often characterized by their equation-of-state parameter w ≡ p/ρ, the ratio of the dark energy’s pressure to its energy density. The deceleration (or acceleration) of an expanding universe, given by the general relativistic equation

(d²R/dt²)/R = −(4πG/3) Σi ρi(1 + 3wi),
depends on this ratio. Here R, the scale factor of the expanding universe, can be thought of as the mean distance between galaxy clusters not bound to each other. Thus the expansion accelerates whenever w is more negative than −1/3, after one includes all matter, radiation, and dark-energy components of the cosmic energy budget.
Each of the components has its own w: negligible for nonrelativistic matter, +1/3 for radiation and relativistic matter, and −1 for Λ. That is, Λ exerts a peculiar negative pressure! General relativity also tells us that each component’s energy density falls like R^−3(1+w) as the cosmos expands. Therefore, radiation’s contribution falls away first, so that nonrelativistic matter and dark energy now predominate. Given that the dark-energy density is now about twice the mass density, the only constraint on dark-energy models is that w must, at present, be more negative than −1/2 to make the cosmic expansion accelerate. However, most dark-energy alternatives to a cosmological constant have a w that changes over time. If we can learn more about the history of cosmic expansion, we can hope to discriminate among theories of dark energy by better determining w and its time dependence.
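These two rules, the R^−3(1+w) dilution of each component and the acceleration condition, can be combined in a few lines of Python. The component fractions below are the approximate present-day values quoted earlier (with a token small radiation contribution), R = 1 corresponds to today, and all numbers are illustrative.

```python
# Present-day densities in units of the critical density, with equation-of-state w
# (illustrative values: ~30% matter, ~70% vacuum energy, a trace of radiation).
COMPONENTS = {          # name: (omega_today, w)
    "matter":    (0.3,  0.0),
    "radiation": (1e-4, 1.0 / 3.0),
    "vacuum":    (0.7, -1.0),
}

def density(omega_today, w, R):
    """Each component's density falls as R^(-3(1+w)) as the universe expands (R = 1 today)."""
    return omega_today * R ** (-3.0 * (1.0 + w))

def acceleration_indicator(R):
    """Quantity proportional to (d^2R/dt^2)/R, namely -sum_i rho_i (1 + 3 w_i);
    a positive value means the expansion is accelerating at scale factor R."""
    return sum(-density(om, w, R) * (1.0 + 3.0 * w) for om, w in COMPONENTS.values())

# Sweep the scale factor: at small R (early times) matter dominates and the expansion
# decelerates; once the constant vacuum density takes over, it accelerates.
for R in (0.3, 0.5, 0.6, 0.7, 1.0):
    print(R, "accelerating" if acceleration_indicator(R) > 0 else "decelerating")
```

With these illustrative numbers the changeover falls near R ≈ 0.6, that is, at a redshift of roughly 0.7, which is why the supernovae beyond z = 1 mentioned above probe the earlier, decelerating epoch.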
Unfortunately, the differences between the expansion histories predicted by the current crop of dark-energy models are extremely small. Distinguishing among them will require measurements an order of magnitude more accurate than those shown in figure 3, and extending twice as far back in time.
There is no shortage of type Ia supernovae; one explodes somewhere in the sky every few seconds. In principle, then, the job is simply to study a hundred times as many supernovae as we have so far. That’s a difficult but not prohibitive task, if we outfit dedicated large telescopes with wider-field imagers and improved spectrographs. However, it’s not just a matter of improving the quantity of measurements. The quality must also take a dramatic step forward, because the current measurement accuracy is not limited simply by statistical errors. With the number of supernovae we already have in hand, the statistical uncertainties are close to the systematic uncertainties.
A new challenge
The next generation of supernova projects has already begun. Telescope scheduling committees have dramatically increased the time allotted them on the largest telescopes. With biweekly monitoring of patches of sky for several years on end at two 4-meter telescopes, it will be possible to collect almost complete light curves for hundreds of 5-billion-year-old type Ia supernovae. Smaller telescopes will study the time-varying spectra of much closer supernovae. And imagers on the HST and the 8-m Subaru Telescope in Hawaii are now revealing handfuls of 10-billion-year-old supernovae. A number of large new telescopes are devoting extensive observing programs to follow-up measurements of this plethora of supernovae. At the most extreme distances, only the Hubble telescope can just barely follow the fading supernovae, redshifted into the infrared. With this array of efforts, we may know, before too long, whether the time-averaged behavior of the dark energy is consistent with a cosmological constant.
The still harder goal of the third generation of supernova work, which also has already begun, is to look for time variations in the dark energy. For this higher-precision work, the systematic uncertainties must be reduced dramatically. The physical details of each individual supernova explosion must be pinned down with extensive and exacting spectral and photometric monitoring. Intervening dust must be measured with wavelength coverage extending into the near-infrared. Host galaxies must be classified to control for environmental effects on the type Ia standard candle. And we will have to study enough supernovae in each redshift range to take account of possible gravitational lensing by foreground galaxies that can brighten or dim a supernova.
These very exacting requirements have pushed us to work above the atmosphere and design a new orbiting optical and near-infrared telescope called SNAP (SuperNova/Acceleration Probe). With a 2-meter mirror, a half-billion-pixel imager, and a high-throughput spectrograph, this space mission can accomplish the unprecedented suite of measurements required for measuring thousands of supernovae with adequately constrained systematic uncertainties. 18
We live in an unusual time, perhaps the first golden age of empirical cosmology. With advancing technology, we have begun to make philosophically significant measurements. These measurements have already brought surprises. Not only is the universe accelerating, but it apparently consists primarily of mysterious substances. We’ve already had to revise our simplest cosmological models. Dark energy has now been added to the already perplexing question of dark matter. One is tempted to speculate that these ingredients are add-ons, like the Ptolemaic epicycles, to preserve an incomplete theory. With the next decade’s new experiments, exploiting not only distant supernovae, but also the cosmic microwave background, gravitational lensing of galaxies, and other cosmological observations, we have the prospect of taking the next step toward that “Aha!” moment when a new theory makes sense of the current puzzles.
In references 12 and 13, I have listed in full the members of the High-Z Supernova Search and Supernova Cosmology Project teams, because each of these scientists should be recognized for important contributions to the discoveries described here. It has been both an honor and a pleasure to work closely with my SCP colleagues, who dedicated themselves to this work for years on end, providing creativity and leadership.
Saul Perlmutter is a senior scientist at the Lawrence Berkeley National Laboratory and leader of the Supernova Cosmology Project.