To a physicist, life looks like magic. Living things accomplish feats so dazzling, so enigmatic, that it’s easy to forget they are made of ordinary atoms. But if the secret of life is not the stuff of which living things are made, then what is it? What gives organisms that distinctive élan that sets them apart as remarkable and special? That was the question posed by Erwin Schrödinger in a famous series of lectures delivered in Dublin, Ireland, in 1943, and published the following year as an influential book titled What Is Life?1
Schrödinger was a giant of theoretical physics and one of the founders of quantum mechanics, the most successful scientific theory ever conceived, both in terms of applications and accuracy. For example, when applied to the electromagnetic field, it correctly predicts the anomalous magnetic moment of the electron to better than 10 significant figures. Almost at a stroke, quantum mechanics explained the nature of inanimate matter, from subatomic particles, through atoms and molecules, to stars. But, frustratingly, it didn’t explain living matter. And despite spectacular advances in biology in the intervening decades, life remains a mystery. Nobody can say for sure what it is or how it began.2
Asked whether physics can explain life, most physicists would answer yes. The more pertinent question, however, is whether known physics is up to the job, or whether something fundamentally new is required. In the 1930s many of the architects of quantum mechanics—most notably Niels Bohr, Eugene Wigner, and Werner Heisenberg—had a hunch that there is indeed something new and different in the physics of living matter. Schrödinger was undecided, but open to the possibility. “One must be prepared to find a new kind of physical law prevailing in it,” he conjectured.1 But he didn’t say what that might be.
Those questions go beyond mere academic interest. A central goal of astrobiology is to seek traces of life beyond Earth, but without a definition of life it is hard to know precisely what to look for. For example, NASA is planning a mission to fly through the plume of material spewing from fissures in the icy crust of Enceladus, a moon of Saturn known to contain organic molecules (see the article by John Spencer, Physics Today, November 2011, page 38). What would convince a skeptic that the material includes life, or the detritus of once-living organisms, as opposed to some form of pre-life? Unlike with the measurement of, say, a magnetic field, scientists lack any sort of life meter that can quantify the progress of a chemical mixture toward known life—still less toward an alien form of life.
Most astrobiologists focus on signatures of life as we know it. For example, NASA’s Viking mission to Mars in the 1970s sought signs of carbon metabolism using a broth of nutrients palatable to terrestrial organisms. Another much-discussed biosignature is homochirality—the presence of only one enantiomer. Although the laws of physics are indifferent to left–right inversion, known life uses left-handed amino acids and right-handed sugars. But inorganic soil chemistry can mimic metabolism, and homochirality can be generated by iterated chemical cycles without life being involved, so those putative biosignatures are not definitive.
Farther afield, the problem of identifying life is doubly hard. Astrobiologists have pinned their hopes on detecting oxygen in the atmospheres of extrasolar planets, but again, atmospheric oxygen is not an unambiguous signature of photosynthesis, because nonbiological processes can also create oxygenated atmospheres. What we lack is any general definition of “living” independent of the biochemical substrate in which life is instantiated. Are there any deep, universal principles that would manifest identifiable biosignatures, even of life as we don’t know it?
The two cultures
The gulf between physics and biology is more than a matter of complexity; a fundamental difference in conceptual framework exists. Physicists study life using concepts such as energy, entropy, molecular forces, and reaction rates. Biologists offer a very different narrative, with terms such as signals, codes, transcription, and translation—the language of information. A striking illustration of that view is the amazing new CRISPR technology that allows scientists to edit the codebook of life (see the article by Giulia Palermo, Clarisse G. Ricci, and J. Andrew McCammon, Physics Today, April 2019, page 30). The burgeoning field of biophysics seeks to bridge the conceptual gulf by, for example, modeling patterns of information flow and storage in various biological control networks.
Life is invested in information storage and processing at all levels, not just in DNA. Genes—DNA sequences that serve as encoded instruction sets—can switch other genes on or off using chemical messengers, and they often form complex networks. Those chemical circuits resemble electronic or computing components, sometimes constituting modules or gates that enact logical operations.3
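To make the logic-gate analogy concrete, here is a minimal sketch, in Python, of a toy Boolean gene circuit. The three genes A, B, and C and their wiring are invented for illustration and do not correspond to any real regulatory network: gene C behaves like an AND-NOT gate, switching on only when its activator A is expressed and its repressor B is not, and a feedback loop from C to B then makes the circuit cycle.

```python
# A toy illustration (not any specific organism's circuit) of genes acting as
# logic gates: gene C turns on only when activator A is expressed AND
# repressor B is not, an AND-NOT gate built from chemical signals.

def update(state):
    """Synchronously update a hypothetical three-gene Boolean network."""
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": a,            # A is driven by an external signal; hold it fixed
        "B": c,            # C's product induces the repressor B (feedback)
        "C": a and not b,  # C reads A and B like a logic gate
    }

state = {"A": True, "B": False, "C": False}
for step in range(6):
    print(step, state)
    state = update(state)
```

Running the loop shows C switching on, inducing B, being repressed, and switching on again: a crude caricature of the cyclic dynamics that real regulatory feedback can produce.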
At the cellular level, a variety of physical mechanisms permit signaling and can lead to cooperative behavior. Slime molds, like the one shown in figure 1, provide a striking example. They are aggregations of single cells that can self-organize into elaborate shapes and sometimes behave coherently as if they were a single organism. Likewise, social insects such as ants and bees exchange complex information and engage in collective decision making (see the Quick Study by Orit Peleg, Physics Today, April 2019, page 66). And human brains are information processing systems of staggering complexity.
Slime mold. Sometimes collections of free, single cells form cooperatives and behave like a single organism with a common agenda. (Courtesy of Audrey Dussutour, CNRS.)
The informational basis of life has led some scientists to pronounce the informal dictum, Life = Matter + Information. For that linking equation to acquire real explanatory and predictive power, however, a formal theoretical framework is necessary that couples information to matter. The first hint of such a link came in 1867. In a letter to a friend, Scottish physicist James Clerk Maxwell imagined a tiny being that could perceive individual molecules in a box of gas as they rushed around. By manipulating a screen and shutter, the demon, as the diminutive being soon came to be known, could direct all the fast molecules to the left of the box and the slow ones to the right, as illustrated in the accompanying box.
The figure here shows a box of gas divided into two chambers by a screen with a small aperture through which molecules (green) may pass one by one. The aperture is blocked by a shutter controlled by the 1867 brainchild of James Clerk Maxwell: a tiny demon who observes the randomly moving molecules and can open and close the shutter to allow fast molecules to travel from the right-hand chamber to the left, and slow molecules to travel in the opposite direction. The mechanism could then be used to convert disorganized molecular motion into directed mechanical motion. The demon lay like an inconvenient truth at the heart of physics for decades, mostly dismissed as a mere theoretical puzzle. Nearly a century and a half after Maxwell envisaged the thought experiment, a real demon was made in a laboratory in Edinburgh, the city of Maxwell’s birth. The experiment consisted of a molecular ring that could slide back and forth on a rod with stoppers at the ends. In the middle of the rod sat another molecule that could exist in two conformations—one that blocks the ring and one that allows it to pass. The molecule thus serves as a gate, akin to Maxwell’s original conception of a movable shutter.17
Following that lead, a cottage industry in demonic devices emerged, including an information-powered refrigerator built by Jukka Pekola’s nanoscience group at Finland’s Aalto University and Dmitri Averin of Stony Brook University.18 In the refrigerator, the role of the gas molecule is played by a single electron confined to a two-sided nanoscale box that is coupled to a heat bath. The cooling cycle exploits the existence of two degenerate box states for a certain electron energy. The cycle begins with the electron in a definite, nondegenerate state. An external electric field raises the electron energy to the degenerate level, where the electron can reside with equal probability in either of the two states.
That introduction of uncertainty represents an increase in the entropy of the electron and a corresponding decrease in the entropy, and thus the temperature, of the bath. At this point the demon—played by another single-electron box coupled to the first—detects which of the two states the electron is in and autonomously feeds the information to the driving field, which uses it to rapidly return the electron to its initial nondegenerate state and complete the cooling cycle. The researchers found that the creation of one bit of information per cycle—which state the electron is in—could extract heat from the bath with an average efficiency of about 75%. Maxwell was right: Information really can serve as a type of fuel.
Because molecular speed is a measure of temperature, the demon would, in effect, use information about molecules to create a heat gradient inside the box. An engineer could then tap that gradient to extract energy and perform useful work. On the face of it, Maxwell had designed a perpetual motion machine, powered by pure information, in defiance of the second law of thermodynamics (see the article by Eric Lutz and Sergio Ciliberto, Physics Today, September 2015, page 30).
To resolve the paradox, information must be quantified and formally incorporated into the laws of thermodynamics. The basis for modern information theory4 was laid down by Claude Shannon in the late 1940s. Shannon defined information as a reduction in uncertainty—for example, by inspecting the outcome of a coin toss. The familiar binary digit, or bit, is the information gained by learning the outcome, heads or tails, of a fair coin toss. The synthesis of Shannon’s information theory and thermodynamics led to the identification of information as negative entropy. Any information acquired by the demon to gain a thermodynamic advantage must therefore be paid for by a rise in entropy at some stage—for example, when the demon’s memory store is erased and reset so the demon can repeat the cycle.
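As a rough numerical illustration of those two ideas, the sketch below computes Shannon's information measure for a coin toss and the minimum thermodynamic cost, set by Landauer's bound, of erasing one bit. The temperature of 300 K is assumed purely for illustration.

```python
import math

def shannon_bits(probs):
    """Shannon information of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_bits([0.5, 0.5]))  # fair coin: learning the outcome yields 1.0 bit
print(shannon_bits([0.9, 0.1]))  # biased coin: ~0.47 bit, less uncertainty to remove

# Landauer's bound: erasing one bit must dissipate at least kT*ln(2) of heat.
k_B, T = 1.380649e-23, 300.0     # Boltzmann constant (J/K), assumed temperature (K)
print(k_B * T * math.log(2))     # ~2.9e-21 J per erased bit at 300 K
```

The tiny value of kT ln 2 suggests why the demon's bookkeeping cost is negligible in everyday engineering yet decisive for molecular machines operating near the thermodynamic limit.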
Maxwell conceived of his demon as a thought experiment, but advances in nanotechnology now permit experimental realizations of the basic idea (see the box). Yet life has been making and using varieties of demons for billions of years. Our bodies are replete with them.5 Molecular machines that copy DNA, transport cargo along fibers, or pump protons through cell membranes operate very close to the ideal thermodynamic limit. They play the margins of the second law to gain an energy advantage.6,7 The human brain uses a type of demon—voltage-gated ion channels—in its wiring to propagate electrical signals. Those ion channels enable the brain to run on the energy equivalent of a dim light bulb even though it has the processing power of a megawatt supercomputer.8
The contextual nature of biological information
Demonics is merely the tip of life’s informational iceberg. Biological information goes far beyond optimizing the energy budget; it often acts as a type of manager. Consider the way an embryo (figure 2) develops from a fertilized egg. It’s supervised at every stage by information networks finely tuned to a multitude of physical and chemical processes, all arranged so that the complex final form emerges with the right architecture and morphology.
A human embryo, 38 mm long, 8–9 weeks. (Adapted from photo by Anatomist90, Wikimedia Commons, CC BY-SA 3.0.)
Attempts to model embryogenesis using information flow in gene regulatory networks have been remarkably successful. Eric Davidson and his coworkers at Caltech worked out the entire wiring diagram, chemically speaking, for the gene network that regulates the sea urchin’s early-stage development. By tracking the information flow, the group programmed a computer to simulate the network dynamics step by step. At each stage they compared the computer model of the state of the circuit with the observed stage of the sea urchin’s development and obtained an impressive match. The researchers also considered the effects of chemically silencing specific genes in the computer model to predict what would happen to the mutant embryo; again, their modeling matched the experimental observations.9
A group led by Thomas Gregor and William Bialek at Princeton University has been investigating the early stages of fruit fly development—in particular, how distinctive morphological features first appear. During development, cells need to know their location relative to other cells in three-dimensional space. How do they obtain that positional information? It has long been known that cells exhibit a type of GPS based on chemical gradients that are, in turn, regulated by the expression levels of specific genes. The Princeton group recently zeroed in on four so-called gap genes that lay the foundations for patterning the embryo by creating gaps, or bands, in the body plan. They found that cells were extracting optimal positional information from the gene expression levels by exploiting Bayesian probabilities, and thereby achieving an astonishing 1% accuracy. The researchers were able to apply their Bayesian decoding model to mutant strains and correctly predict their modified morphology too.10
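The decoding idea can be conveyed with a toy calculation. The sketch below is not the Princeton group's model; it assumes four hypothetical, Gaussian-shaped expression profiles and Gaussian readout noise, and it applies Bayes' rule, P(x | g) ∝ P(g | x) P(x), to infer a cell's position x along the body axis from a noisy expression readout g.

```python
import numpy as np

# Hedged illustration of Bayesian positional decoding; the profiles, noise
# level, and grid are invented, not taken from fruit fly data.
rng = np.random.default_rng(0)
x_grid = np.linspace(0.0, 1.0, 1000)   # candidate positions along the embryo

# Hypothetical mean expression profiles: smooth bumps centered at four positions.
profiles = np.stack([np.exp(-((x_grid - c) ** 2) / 0.02) for c in (0.2, 0.4, 0.6, 0.8)])
sigma = 0.05                            # assumed readout noise

true_x = 0.37
true_idx = int(np.argmin(np.abs(x_grid - true_x)))
g = profiles[:, true_idx] + sigma * rng.normal(size=4)  # noisy readout at the true position

# Gaussian likelihood for every candidate position, flat prior.
log_like = -np.sum((profiles - g[:, None]) ** 2, axis=0) / (2 * sigma ** 2)
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

estimate = x_grid[np.argmax(posterior)]
print(f"true position {true_x:.3f}, decoded position {estimate:.3f}")
```

Even this crude version typically recovers the position to within roughly a percent of the axis length, which conveys why optimal Bayesian readout of a handful of gradients can be so precise.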
Those analyses raise a crucial philosophical question that goes to the heart of the conceptual mismatch between physics and biology. Studies of gene regulatory networks and the application of Bayesian algorithms are currently treated as phenomenological models in which “information” is a convenient surrogate or label for generating a lifelike simulation of a real organism. But the lesson of Maxwell’s demon is that information is actually a physical quantity that can profoundly affect the way that matter behaves. Information, as defined by Shannon, is more than an informal parameter; it is a fundamental physical variable that has a defined place in the laws of thermodynamics.
Shannon stressed that his information theory dealt purely with the efficiency and capacity of information flow; it said nothing about the meaning of the information communicated. But in biology, meaning or context is critical. How might one capture mathematically that property of instructional or supervisory or contextual information? Here’s one approach: Molecular biology’s so-called central dogma—a term coined by Francis Crick a few years after he and James Watson deduced the double helix structure of DNA—is that information flows in one direction, from DNA to the machinery that makes proteins and thence to the organism. One might term that a “bottom-up” flow.
Today, information transfer in biology is known to be a two-way process, involving feedback loops and top-down information flow. (See the article by George Ellis, Physics Today, July 2005, page 49.) For example, if cells cultured in a Petri dish become too crowded, they stop dividing, a phenomenon known as contact inhibition. And experiments with microbes on the International Space Station have shown that bacteria may express different genes in microgravity than they do on Earth. Evidently, system-level physical forces affect gene expression operating at the molecular level.
The work of Michael Levin and his colleagues at Tufts University’s Allen Discovery Center provides an arresting example of top-down information flow. Levin’s group is exploring how system-wide electrical patterning can be as important as mechanical forces or chemical patterning in controlling the growth and morphology of some organisms. Healthy cells are electrically polarized: They maintain a potential difference of a few tens or hundreds of millivolts across their cell membranes by pumping out ions. Cancer cells, by contrast, tend to be depolarized.
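For a sense of the voltages involved, the Nernst equation relates an ion concentration gradient across a membrane to the equilibrium potential it can sustain. The snippet below uses textbook-style potassium concentrations for a generic animal cell (they are illustrative assumptions, not values from Levin's experiments) and yields several tens of millivolts, inside negative.

```python
import math

# Nernst potential for K+ across a membrane, with illustrative concentrations.
R, F, T = 8.314, 96485.0, 310.0   # gas constant (J/(mol K)), Faraday constant (C/mol), body temperature (K)
z = +1                             # charge number of the potassium ion
K_out, K_in = 5.0, 140.0           # assumed concentrations outside and inside the cell, in mM
E = (R * T / (z * F)) * math.log(K_out / K_in)
print(f"{E * 1e3:.0f} mV")         # about -89 mV: the inside of the cell is negative
```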
Levin’s group has found that in multicellular organisms, cell polarization patterns across tissues play a key role in growth and development, wound healing, and organ regeneration. By disrupting those electrical patterns chemically, the group can produce novel morphologies to order.11 A species of planaria flatworm provides a convenient experimental subject. If a normal worm is chopped in two, the head grows a new tail and the tail grows a new head, making two complete worms. But by modifying the electrical polarization state near the wound, one can make two-headed or two-tailed worms, as shown in figure 3. (See Physics Today, March 2013, page 16.)
This two-headed worm was created by manipulating electrical polarity. The worm reproduces other two-headed worms when bisected, as if it is a different species, even though it has the same DNA as normal one-headed worms. Somehow the information about the global body plan is passed on to the progeny epigenetically. (Adapted from T. Nogi et al., PLOS Negl. Trop. Dis. 3, e464, 2009, doi:10.1371/journal.pntd.0000464.)
Amazingly, if those monsters are in turn chopped in two, they do not revert to the normal phenotype. Rather, the two-headed worms make more two-headed worms, and likewise with two-tailed worms. Despite all having identical DNA, the worms look like different species. The system’s morphological information must be getting stored in a distributed way in the truncated tissue and guiding the appropriate regeneration at the gene level. But how does that happen? Does an encrypted electrical code operate alongside the genetic code?
The term epigenetics refers to the phenotype-determining factors, such as gross physical forces, that lie beyond the genes. Very little is known about the mechanisms of epigenetic information storage, processing, and propagation, but their role in biology is critical. To make progress, we need to discover how different types of informational patterns—electrical, chemical, and genetic—interact to produce a regulatory framework that manages the organization of living matter and translates it into specific phenotypes.
Thinking about the physics of living matter in informational terms rather than purely molecular terms is analogous to the difference between software and hardware in computing. Just as a full understanding of a particular computer application—PowerPoint, for example—requires a grasp of the principles of software engineering as much as the physics of computer circuitry, so life can only be understood when the principles of biological information dynamics are fully elucidated.
A new concept of dynamics
Since the time of Isaac Newton, a fundamental dualism has pervaded physics. Although physical states evolve with time, the underlying laws of physics are normally regarded as immutable. That assumption underlies Hamiltonian dynamics, trajectory integrability, and ergodicity. But immutable laws are a poor fit for biological systems, in which dynamical patterns of information couple to time-dependent chemical networks and where expressed information—for example, the switching on of genes—can depend on global or systemic physical forces as well as local chemical signaling.
Biological evolution, with its open-ended variety, novelty, and lack of predictability, also stands in stark contrast to the way that nonliving systems change over time. Yet biology is not chaos: Many examples of rules at work can be found. Take the universal genetic code. The nucleotide triplet CGT, for example, codes for the amino acid arginine. Although no known exceptions to that rule exist, it would be wrong to think of it as a law of nature—like the law of gravity. Almost certainly, the CGT-to-arginine assignment emerged, billions of years ago, from some earlier and simpler rule. Biology is full of cases like that.
A more realistic description of change in biosystems would be the variation in the dynamical rules as a function of the state of a system.2,12 State-dependent dynamics opens up a rich landscape of novel behavior, but it is far from a formal mathematical theory. To appreciate what it might entail, consider the analogy to a game of chess. In standard chess, the system is closed and the rules are fixed. From the conventional initial state, chess players are free to explore a state space that, while vast, is nevertheless constrained by immutable rules to be but a tiny subset of all possible configurations of pieces on the board. Although an enormous number of patterns are possible, an even greater number of patterns are not permitted—for example, having all bishops occupy squares of the same color.
Now imagine a modified game of chess in which the rules can change according to the overall state of play—a system-level, or top-down, criterion. To take a somewhat silly example, if white is winning, then black might be permitted to move pawns backward as well as forward. In that extended version of chess, the system is open, and states of play will arise that are simply impossible using the fixed rules of standard chess. That imaginary game is reminiscent of biology, in which organisms are also open systems, able to accomplish things that are seemingly impossible for nonliving systems.
To explore the consequences of state-dependent dynamics in a simple model that captures top-down information flow, my research group at Arizona State University has used a modification of a 1D cellular automaton (CA). A standard CA is a row of cells—squares or pixels—that are either empty or filled (white and black, respectively, for example); a fixed rule is then used to update the state of each cell according to its existing state and those of its two nearest neighbors. The system has 256 possible update rules.13
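For readers who want to experiment, here is a minimal sketch of a standard elementary CA. The rule number encodes, in its eight binary digits, the new value of a cell for each of the eight possible neighborhood patterns; rule 110, used below, is one of the handful of rules that generate persistent complexity from a simple seed.

```python
# Minimal elementary cellular automaton: 2 cell states, nearest-neighbor rule,
# hence 2**(2**3) = 256 possible update rules.

def step(cells, rule_number):
    """Apply one synchronous update with periodic boundary conditions."""
    rule = [(rule_number >> i) & 1 for i in range(8)]  # lookup table, one entry per neighborhood
    n = len(cells)
    return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                      # a single filled cell in the middle of the row
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 110)       # rule 110: simple to state, complex in behavior
```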
To play the CA game, one picks an initial cell pattern—conveniently represented as a sequence of bits, either 0 or 1—and then applies the chosen update rule repeatedly to evolve the system. Many update rules lead to dull outcomes, but a few produce elaborate patterns of evolving complexity. To implement a modified, state-dependent CA, my colleagues Alyssa Adams and Sara Walker computationally coupled two standard CAs. One represented the organism; the other, the environment.14
Then the two researchers allowed the update rule for the organism to change at each iteration. To determine which of the 256 rules to apply at any given step, they bundled the organism CA cells into adjacent triplets—000, 010, 110, and so forth—and compared the relative frequencies of each triplet with the same patterns in the environment CA. Such an arrangement changes the update rule as a function of both the state of the organism, making it self-referential, and the state of the environment, making it an open system.
Adams and Walker ran thousands of case studies on a computer to look for interesting patterns. They wanted to identify evolutionary behavior that is both open-ended—the organism does not soon cycle back to its starting state—and innovative. In this context, innovation means that the observed sequence of organism states could never occur in any of the 256 possible fixed-rule CAs from any starting state. It’s analogous to having four bishops end up on the same color squares in the modified game of chess. Although such open, innovative behavior turned out to be rare, some clear-cut examples emerged. It took a lot of computing time, but Adams and Walker discovered enough to be convinced that even in their simple model, state-dependent dynamics provide novel pathways to complexity and variety. Their work illustrates that merely processing the bits of information isn’t sufficient. To capture the full richness of biology, the information-processing rules themselves must evolve.
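The sketch below conveys the flavor of such a coupled, state-dependent CA, though the rule-selection recipe is an invented stand-in rather than Adams and Walker's published algorithm: at each step the organism's update rule is chosen from the 256 possibilities by comparing the triplet statistics of the organism with those of a separately evolving environment CA, so the dynamical law itself, not just the state, changes over time.

```python
from collections import Counter

def step(cells, rule_number):
    """One update of an elementary CA with periodic boundaries."""
    rule = [(rule_number >> i) & 1 for i in range(8)]
    n = len(cells)
    return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

def triplet_counts(cells):
    """Count the eight possible 3-cell patterns around the ring."""
    n = len(cells)
    return Counter((cells[i] << 2) | (cells[(i + 1) % n] << 1) | cells[(i + 2) % n]
                   for i in range(n))

def choose_rule(organism, environment):
    """Toy state-dependent criterion: map triplet-statistics overlap to a rule number."""
    org, env = triplet_counts(organism), triplet_counts(environment)
    overlap = sum(min(org[t], env[t]) for t in range(8))
    return (37 * overlap + 11) % 256            # arbitrary but deterministic stand-in mapping

organism = [0, 1] * 8
environment = [1, 0, 0, 1] * 4
for _ in range(10):
    rule = choose_rule(organism, environment)   # the law changes with the state
    organism = step(organism, rule)
    environment = step(environment, 30)         # environment evolves under a fixed rule
    print(rule, "".join(map(str, organism)))
```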
Life on the quantum edge
If biology deploys new physics, such as state-dependent dynamical rules, then at what point between simple molecules and living cells does it emerge? CA models may be instructive, but they are cartoons, not physics; they tell us nothing about where to look for new emergent phenomena. As it happens, standard physics already contains a familiar example of state-dependent dynamics: quantum mechanics.
Left in isolation, a pure quantum state described by a coherent wavefunction evolves predictably according to a well-understood mathematical prescription known as unitary evolution. But when a measurement is made, the state changes abruptly—a phenomenon often called the collapse of the wavefunction. In an ideal measurement, the jump projects the system into one of the possible eigenstates of the observable being measured. For that step, the unitary evolution rule is replaced by the Born rule, which predicts the relative probabilities of the measurement outcomes and introduces into quantum mechanics the element of indeterminism or uncertainty. That marks the transition from the quantum to the classical domain. Could quantum mechanics therefore point us to what makes life tick?
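The two dynamical rules can be placed side by side in a few lines of code. The sketch below is illustrative only, with an arbitrarily chosen two-level Hamiltonian: the state first evolves deterministically under the unitary rule, and it is then measured, with the Born rule assigning each outcome a probability equal to the squared magnitude of its amplitude before the state jumps to the corresponding eigenstate.

```python
import numpy as np

rng = np.random.default_rng(1)

psi = np.array([1.0, 0.0], dtype=complex)   # start in the first basis state
H = np.array([[0.0, 1.0], [1.0, 0.0]])      # toy two-level Hamiltonian (hbar = 1)

# Rule 1, unitary evolution: psi -> exp(-iHt) psi, deterministic and reversible.
t = 0.6
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
psi = U @ psi

# Rule 2, measurement: Born-rule probabilities, then collapse onto the observed state.
probs = np.abs(psi) ** 2
outcome = rng.choice(len(psi), p=probs / probs.sum())
psi = np.zeros_like(psi)
psi[outcome] = 1.0
print("outcome probabilities:", probs, "-> collapsed to basis state", outcome)
```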
In his famous Dublin lectures, Schrödinger appealed to quantum mechanics to explain the stability of genetic-information storage. Before Crick and Watson had elucidated the structure of DNA, Schrödinger deduced that the information must be stored at the molecular level in what he termed “an aperiodic crystal,” a perceptive description of what nucleic acid polymers turned out to be. Left open, though, was the possibility that quantum phenomena might play a more pervasive role in living organisms.
In the intervening decades, a general assumption prevailed that in the warm, noisy environment of living matter, quantum phenomena would be smothered and classical ball-and-stick chemistry would suffice to explain life. In the past decade or so, however, interest has grown in the possibility that non-trivial quantum phenomena, such as superposition, entanglement, and tunneling, might be important for life after all. Although considerable skepticism remains, the new field of quantum biology is now under intensive investigation.15 Research has focused on topics as diverse as coherent energy transport in photosynthesis, the avian magnetic compass, and the olfactory response of flies.
Investigating the quantum properties of living matter on the nanoscale presents significant challenges. Systems that are critical to the operation of life may involve only a few degrees of freedom, be far from thermodynamic equilibrium, and be strongly coupled to their thermal environment. But it is here, in the field of nonequilibrium quantum statistical mechanics, that the emergence of new physics might be expected.
One set of experiments of possible relevance is the measurement of electron conductance through organic molecules. Recently, Gábor Vattay and colleagues have claimed that many biologically important molecules, such as sucrose and vitamin D3, have unique electron-conductance properties associated with the critical transition point between an insulator and a disordered metal conductor. Vattay and colleagues wrote, “The findings point to the existence of a universal mechanism of charge transport in living matter.”16 While their findings fall short of showing that quantum weirdness explains life, they do hint that the realm of quantum-tuned large molecules is where one might spot the emergence of the new physics that Schrödinger and his contemporaries suspected.
Clash of ideas
Theoretical physicist John Archibald Wheeler used to say that major progress in science stems more from the clash of ideas than from the steady accumulation of facts. Biophysics lies at the intersection of two great domains of science: the physical sciences and the life sciences. Each domain comes with its own vocabulary, but also with its own distinctive conceptual framework, the former being rooted in mechanical concepts, the latter in informational concepts. The ensuing clash presages a new frontier of science in which information, now understood formally as a physical quantity—or rather a set of quantities—occupies a central role and thereby serves to unify physics and biology.2
The huge advances in molecular biology of the past few decades may be largely attributed to the application of mechanical concepts to biosystems—that is, to physics infiltrating biology. Curiously, the reverse is now happening. Many physicists, particularly those working on foundational questions in quantum mechanics, advocate placing information at the heart of physics, while others conjecture that new physics lurks in the remarkable and baffling world of biological organisms. Biology is shaping up to be the next great frontier of physics.
References
Paul Davies is a Regents’ Professor in the physics department at Arizona State University in Tempe and the director of the university’s Beyond Center for Fundamental Concepts in Science.