The earlier columns in this series recounted extraordinary triumphs of physicists using analysis and synthesis (or, to use the dreaded epithet, reductionism) to account for the behavior of matter and the structure of the universe as a whole. But I left off by admitting that we scored a bull’s-eye only by painting the target around our shot. There’s quite a lot that physicists might once have hoped to derive or explain based on fundamental principles, for which that hope now seems dubious or forlorn. In this column I explore some sources of these limitations and the role different sorts of explanations might play in filling the voids.

One important limitation, which we started to explore last time (Physics Today, October 2003, page 10, https://doi.org/10.1063/1.1628983), concerned the lack of a principle that could lead to a unique choice among different seemingly possible solutions of the fundamental equations and could select out the universe we actually observe.

To get oriented, it’s very instructive to consider the corresponding problem for atoms and matter. Like the classical equations governing planetary systems, the equations of quantum mechanics for electrons in a complex atom allow all kinds of solutions. In fact, the quantum equations allow even more freedom of choice in the initial conditions than the classical equations do. The wavefunctions for N particles live in a much larger space than the particles do: They inhabit a full-bodied 3N-dimensional configuration space, as opposed to 2N copies of three-dimensional space. (For example, the quantum description of the state of two particles requires a wavefunction ψ(x₁, x₂) that depends on six variables, whereas the classical description requires 12 numbers, namely their positions and velocities.)
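To make the counting explicit (a restatement of the example above, nothing more), here is the comparison for N particles in three dimensions:

```latex
% Classical description: a point in phase space,
% positions and velocities for each of the N particles.
(\mathbf{x}_1,\dots,\mathbf{x}_N;\,\mathbf{v}_1,\dots,\mathbf{v}_N)\;\in\;\mathbb{R}^{6N}

% Quantum description: a complex-valued function on
% 3N-dimensional configuration space.
\psi(\mathbf{x}_1,\dots,\mathbf{x}_N):\;\mathbb{R}^{3N}\to\mathbb{C}

% For N = 2: twelve classical numbers versus a wavefunction
% of six variables, an enormously larger space of possibilities.
```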

Yet the atoms we observe are always described by the same solutions—otherwise we wouldn’t be able to do stellar spectroscopy, or even chemistry. Why? A proper answer involves combining insights from quantum field theory, mathematics, and a smidgen of cosmology.

Quantum field theory tells us that electrons—unlike planets—are all rigorously the same. Then the mathematics of the Schrödinger equation, or its refinements, tells us that the low-energy spectrum is discrete, which is to say that if our atom has only a small amount of energy, then only a few solutions are available for its wavefunction.
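That discreteness is easy to exhibit numerically. Here is a minimal sketch, assuming a one-dimensional square well as a stand-in for the full atomic problem (the well depth, width, grid, and units are my illustrative choices): diagonalizing a finite-difference Schrödinger Hamiltonian yields only a handful of states at low energy.

```python
import numpy as np

# Finite-difference sketch of the 1D Schroedinger equation
# H = -(1/2) d^2/dx^2 + V(x), in units with hbar = m = 1.
# Well depth, width, and grid are illustrative choices.
n, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

V = np.where(np.abs(x) < 2.0, -5.0, 0.0)  # finite square well

# Kinetic term via the three-point Laplacian stencil
diag = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
bound = energies[energies < 0.0]
print("Discrete bound-state energies:", np.round(bound, 3))
# Only a handful of solutions lie below zero energy; above the
# well, the (discretized) continuum takes over.
```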

But because energy is conserved, this explanation raises another question: What made the energy small in the first place? Well, the atoms we study are not closed systems; they can emit and absorb radiation. So the question becomes: Why do they emit more often than they absorb, and thereby settle down into a low-energy state? That’s where cosmology comes in. The expanding universe is quite an effective heat sink. In excited atoms, energy radiated as photons eventually leaks into the vast interstellar spaces and redshifts away. By way of contrast, a planetary system has no comparably efficient way to lose energy—gravitational radiation is ridiculously feeble—and it can’t relax.
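The redshift mechanism invoked here can be stated in one line: in an expanding universe with scale factor a(t), a photon emitted with energy E_emit arrives, after free propagation, with

```latex
E(t) \;=\; E_{\mathrm{emit}}\,\frac{a(t_{\mathrm{emit}})}{a(t)}
\;\longrightarrow\; 0 \quad\text{as } a(t)\to\infty ,
```

so the radiated energy is diluted away rather than returned to the atom.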

So one selection principle that applies to many important cases is to choose solutions with low energy. In the same spirit, when the residual energy can’t be neglected, one should choose thermal equilibrium solutions. This is appropriate for systems and degrees of freedom that have relaxed, but is not appropriate in general.
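The two rules can be phrased as one. For degrees of freedom that have relaxed at temperature T, the equilibrium weight of a state of energy E_n is the Boltzmann factor; the low-energy rule is just its T → 0 limit (assuming, for simplicity, a nondegenerate ground state n = 0):

```latex
p_n \;=\; \frac{e^{-E_n/k_B T}}{\sum_m e^{-E_m/k_B T}},
\qquad
\lim_{T\to 0} p_n \;=\; \delta_{n0}.
```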

The selection procedure that dominates the literature of high-energy physics and string theory is energy based, along the lines we just discussed for atoms. It is traditional to identify the lowest-energy solution with the physical vacuum, and to model the content of the world using low-energy excitations above that state. For the solution to count as successful, the excitations should include at least the particles of the standard model.

One can’t seriously quarrel with this selection procedure as a practical, necessary criterion. The universe does appear to be a low-energy excitation around a stable state. But why? For atoms, selection of low-energy solutions was justified by their tendency to dissipate energy into radiation that ultimately finds its way into interstellar space and does not return. That mechanism can’t work for the universe as a whole, of course, but a different form of dissipation comes into play, depending on a different aspect of cosmology. The universe has been expanding for a long time—a very long time, many orders of magnitude longer than the natural time scales associated with elementary interactions.
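A rough number conveys the mismatch. Taking an electron-volt-scale energy as the reference for elementary interactions (my illustrative choice; strong-interaction scales would make the ratio larger still):

```latex
t_{\mathrm{universe}} \approx 4\times 10^{17}\,\mathrm{s},
\qquad
t_{\mathrm{atomic}} \sim \frac{\hbar}{1\,\mathrm{eV}} \approx 6.6\times 10^{-16}\,\mathrm{s},
\qquad
\frac{t_{\mathrm{universe}}}{t_{\mathrm{atomic}}} \sim 10^{33}.
```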

For present purposes, the mismatch of time scales has two profound consequences. First, as the universe expands, the matter in it cools. We can think of the cooling as a sort of radiation leak into the future. In any case, it takes the universe toward a local minimum of the energy. Second, the mismatch gives many sorts of instabilities time to play out. (Not all, however. Instabilities involving quantum tunneling can have absurdly long time scales, and as a logical possibility they might figure in our future.) Altogether, then, the vastness of cosmological time serves to justify focus on low-energy excitations around stable “vacuum” configurations that are at least local minima of the energy density.

This answer to the question of why the universe can be described as a low-energy excitation, though, frames a new question: What sets the cosmic time scale to be so different from the scale of fundamental interactions? And that question is a thinly veiled form of a question for which we have no good answer: Why is the cosmological term so small? In that same dark neck of the woods lurks the specter of the observed dark energy, which might well expose the provisional nature of our selection criteria. Indeed, a leading hypothesis for this energy, known under the rubric of “quintessence,” is that it reflects a difference between empty space as it actually exists and the ideal stable vacuum.

In string theory, as currently understood, the selection problem poses severe foundational challenges, because the criterion of stability is by no means sufficient to select a unique solution. Many stable solutions apparently exist, including flat 10-dimensional spacetime, anti-de Sitter space that supports a huge negative cosmological term, and tens of thousands of other possibilities, the vast majority of which bear no resemblance whatsoever to the world as we know it. The pragmatic response, which dominates the literature, has been to add phenomenological inputs (effective spacetime dimension, gauge group, number of families, and so on) to the selection criteria. One can then try to derive relations among the remaining parameters. Success along those lines could be very significant, reassuring us that the theory was on the right track. So far, that hasn’t happened. But even if it did, it would hardly fulfill the promise of a final analysis of nature.

So, how does the solution that actually describes the world get selected out? It remains a deep question.

The analysis and synthesis program, in the course of its pursuit, exposed other limitations. Consider these three questions, whose elucidations were once considered major goals for science:

  • Why is the Solar System as it is?

  • When will a given radioactive nucleus decay?

  • What will the weather be in Boston a year from today?

They are questions that suffered an unusual fate: instead of being answered, they were discredited. In each case, the scientific community at first believed that unique, clear-cut answers would be possible. And in each case, it was a major advance, with wide implications, to discover that there are good fundamental reasons why a clean answer is not possible.

I discussed the solar-system question in my October column, with reference to Johannes Kepler’s struggles. That question is undermined by a problem I call projection. If there were only one solar system, or if all such systems were the same, tracing that system’s properties to a unique cause would be important. Those hypotheses might once have seemed reasonable, but now we see that they are illegitimate projections that ascribe universal significance to what is actually a very limited slice of the world. There are serious premonitions that we physicists might need to relearn this lesson on a cosmic scale. Popular speculations that we live in a “multiverse” or inhabit “braneworlds” involve the idea that the world is extremely inhomogeneous on large scales—that the values of fundamental parameters, the structure of the standard model, or even the effective dimensionality of spacetime varies from place to place. If so, attempts to predict such things from fundamentals are as misguided as Kepler’s polyhedra.

The second question, of course, was rendered questionable by quantum mechanics. The precise time that any particular radioactive nucleus will decay is postulated to be inherently random—a postulate that has acquired, by now, tremendous empirical support. Erwin Schrödinger, with his macroscopic cat, thought he was reducing quantum mechanics to absurdity. But the emerging standard account traces the origin of structure in the universe back to quantum fluctuations in an inflaton field! If that account holds up, it will mean that attempting to predict the specific pattern of structures we observe from fundamentals is as misguided as trying to predict when a nucleus will decay or a Schrödinger cat expire.
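What “inherently random” means operationally is easy to exhibit in a toy simulation (the half-life below is an arbitrary illustrative value, not any particular nuclide): individual decay times carry no usable pattern, yet the ensemble statistics are sharply reproducible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential decay law: survival probability P(t) = exp(-t / tau).
# The half-life is an arbitrary illustrative value, not a real nuclide.
half_life = 10.0                 # arbitrary time units
tau = half_life / np.log(2)      # mean lifetime

decay_times = rng.exponential(tau, size=100_000)

# No usable pattern in individual decays...
print("First five decay times:", np.round(decay_times[:5], 2))
# ...but the ensemble pins down the half-life precisely.
print("Median decay time (approaches the half-life):",
      round(float(np.median(decay_times)), 2))
```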

The third question was rendered questionable by the discovery of chaos: the phenomenon that perfectly deterministic, innocuous-seeming equations can have solutions whose long-term behavior depends extremely sensitively on exquisite details of the initial conditions. Chaos raises another barrier that can separate ideal analysis from successful synthesis.
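The standard textbook illustration, which I borrow here in place of any realistic weather model, is the logistic map in its chaotic regime: two initial conditions agreeing to ten decimal places part company within a few dozen iterations.

```python
# Logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4),
# a purely illustrative stand-in for deterministic chaotic dynamics.
r = 4.0
x, y = 0.3, 0.3 + 1e-10  # initial conditions differing by 1 part in 10^10

for step in range(1, 61):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.3e}")
# The separation grows roughly exponentially until it saturates at
# order one: long-term prediction from imperfect data is hopeless.
```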

It is possible, I suppose, that the apparent limitations will prove illusory and that, in the end, the vision of a unique, deterministic Universe fully accessible to rational analysis, championed by Baruch Spinoza and Albert Einstein, will be restored. But to me it seems wise to accept what appears to be overwhelming evidence that projection, quantum uncertainty, and chaos are inherent in the nature of things, and to build on those insights. With acceptance, new constructive principles appear, supplementing pure logical deduction from fine-grained analysis as irreducible explanations of observed phenomena.

By accepting the occurrence of projection, we license anthropic explanations. How do we understand why Earth is at the distance it is from the Sun, or why our Sun is the kind of star it is? Surely important insights into these questions follow from our existence as intelligent observers. Some day, we may be able to check such arguments by testing their predictions for exobiology.

By accepting quantum uncertainty, we license, well … quantum mechanics. Specifically, in the spirit of this column, we can test the hypothesized quantum origin of primordial fluctuations by checking whether those fluctuations satisfy statistical criteria for true randomness.
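A toy version of such a test, run on simulated rather than observed fluctuations (the mock field, sample size, and choice of test are all mine, for illustration only), checks the signatures that Gaussian quantum-origin fluctuations should satisfy: vanishing skewness and excess kurtosis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Mock 'primordial fluctuations': Gaussian noise stands in for the
# amplitudes one would extract from sky maps. Illustrative only.
fluctuations = rng.normal(0.0, 1.0, size=50_000)

# Gaussian randomness predicts zero skewness and zero excess kurtosis.
print("skewness:       ", round(float(stats.skew(fluctuations)), 4))
print("excess kurtosis:", round(float(stats.kurtosis(fluctuations)), 4))

# A formal normality test (D'Agostino-Pearson):
stat, p = stats.normaltest(fluctuations)
print("normality-test p-value:", round(float(p), 3))
# A tiny p-value would signal non-Gaussianity, that is, a deviation
# from the simplest quantum-origin prediction.
```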

By accepting the implications of chaos, broadly defined, we license evolutionary explanations. Outstanding examples include the explanation of why the Moon always faces us, through the long-term action of tidal friction, and of the structure of gaps in the asteroid belt, through resonances with planetary orbital periods. Also, from considerations internal to the program of analysis and synthesis, we motivate the search for emergent properties of complex systems that Philip Anderson has advocated under the rubric “More is different.” For these emergent properties can form the elements of robust descriptions—they transcend the otherwise incapacitating sensitivity to initial conditions.

In constructing explanations based on anthropics, randomness, and dynamical evolution, we must use intermediate models incorporating many things that can’t be calculated. Such necessary concessions to reality compromise the formal purity of the ideal of understanding the world by analysis and synthesis, but in compensation, they allow its spirit much wider scope.

Frank Wilczek is the Herman Feshbach Professor of Physics at the Massachusetts Institute of Technology in Cambridge.