Leaders in artificial neural network development share 2024 Nobel Prize in Physics

8 October 2024

With inspiration from neuroscience and the physics of atomic spins, John Hopfield and Geoffrey Hinton created the tools that underpin AI technology.

John Hopfield (left) and Geoffrey Hinton. Credits: Princeton University, Lewis-Sigler Institute, courtesy of the AIP Emilio Segrè Visual Archives, Gallery of Member Society Presidents; Johnny Guatto, University of Toronto

Updated at 6:53 p.m. EDT

John Hopfield of Princeton University and Geoffrey Hinton of the University of Toronto are the recipients of the 2024 Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks,” the Royal Swedish Academy of Sciences announced on Tuesday. Inspired by the structure of the brain, the laureates drew on statistical mechanics to develop the pivotal tools that underpin the AI technology in widespread use today.

Psychologist Donald Hebb proposed in the 1940s that the brain learns through the reinforcement of connections between neurons. In making the award, the Nobel committee cited three successive research developments led by the laureates that transformed Hebb’s initial description into the groundwork for computer-based deep learning.

In a 1982 paper, Hopfield described a network, now known as the Hopfield network, that is composed of nodes joined by connections with different strengths. The strengths can be adjusted, or “trained,” to recognize patterns, so that when an image or other input with noisy or missing data is fed into the network, it can update its nodes step by step until it settles on the stored pattern that best matches the input. The mathematics that governs the network is the same as what’s used to describe spin systems in magnetic materials.

“These are energy-based models,” says Graham Taylor, a computer scientist at the University of Guelph and a former PhD student of Hinton’s. “They settle to a low-energy state, much like physical systems do. That forms a link between machine learning and physics.”
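
That link can be made concrete in code. The following is a minimal sketch of a Hopfield network in Python; the class, variable names, and toy eight-node pattern are illustrative assumptions, not code from the laureates’ work. Hebbian training sets the connection strengths, and each asynchronous update moves the network toward a lower-energy state.

```python
import numpy as np

rng = np.random.default_rng(0)

class HopfieldNetwork:
    def __init__(self, n):
        self.w = np.zeros((n, n))  # symmetric connection strengths

    def train(self, patterns):
        # Hebbian rule: strengthen connections between nodes that
        # are active together in the stored patterns
        for p in patterns:
            self.w += np.outer(p, p)
        np.fill_diagonal(self.w, 0)  # no self-connections

    def energy(self, s):
        # Ising-like energy; each asynchronous update below never raises it
        return -0.5 * s @ self.w @ s

    def recall(self, s, steps=100):
        s = s.copy()
        for _ in range(steps):
            i = rng.integers(len(s))                # pick one node at random
            s[i] = 1 if self.w[i] @ s >= 0 else -1  # move toward lower energy
        return s

# Store one +/-1 pattern, corrupt two entries, and recover it
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
net = HopfieldNetwork(len(pattern))
net.train([pattern])
noisy = pattern.copy()
noisy[:2] *= -1
print(net.recall(noisy))  # settles back to the stored pattern
```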

Between 1982 and 1985, Hinton and colleagues extended the functionality of the Hopfield network architecture by incorporating concepts from statistical physics. Similar to the way that individual molecules in a gas cloud can change while the system retains its collective properties, individual components in those networks evolve while the network maintains its stored patterns. Instead of recognizing memorized patterns as the Hopfield network does, their “Boltzmann machine” models probability distributions over patterns. That was accomplished with the addition of hidden nodes—ones that do not directly represent the information fed into the system—which enabled the Boltzmann machine to learn more complex relationships and complete classification tasks that had not been possible without them.
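
A minimal sketch of the difference, again with illustrative names and random placeholder weights rather than trained ones: where the Hopfield update is deterministic, each Boltzmann-machine node switches on with a probability set by its inputs, so repeated sweeps sample from a probability distribution over the joint states of visible and hidden nodes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# A toy Boltzmann machine: 4 visible and 2 hidden binary (0/1) nodes,
# all symmetrically connected; these weights are random placeholders.
n_vis, n_hid = 4, 2
n = n_vis + n_hid
W = rng.normal(scale=0.1, size=(n, n))
W = (W + W.T) / 2          # symmetric connections
np.fill_diagonal(W, 0)     # no self-connections

def gibbs_sweep(s):
    # Unlike the deterministic Hopfield update, each node turns on with a
    # probability set by its inputs, so the network samples from a
    # Boltzmann distribution over states instead of settling into a
    # single stored pattern.
    for i in rng.permutation(n):
        s[i] = 1.0 if rng.random() < sigmoid(W[i] @ s) else 0.0
    return s

state = rng.integers(0, 2, size=n).astype(float)
for _ in range(1000):
    state = gibbs_sweep(state)
print("visible:", state[:n_vis], "hidden:", state[n_vis:])
```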

The final advance that would facilitate the immense growth of machine learning over the past two decades was codeveloped by Hinton in 2002: an algorithm, known as contrastive divergence, that efficiently trains a restricted version of the Boltzmann machine and made probabilistic training of deep, multilayered networks practical. “He basically revived the field under the name deep learning,” says Christian Igel, a computer scientist at the University of Copenhagen.
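
A rough sketch of the idea, under the standard textbook formulation of one-step contrastive divergence (CD-1) for a restricted Boltzmann machine; the layer sizes, learning rate, and data vector below are illustrative. A single up-down-up sampling pass stands in for the long equilibration the original Boltzmann machine required, and the weights move toward the data’s correlations and away from the model’s own reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Restricted Boltzmann machine: connections run only between the visible
# and hidden layers, so each whole layer can be sampled in parallel.
n_vis, n_hid = 6, 3
W = rng.normal(scale=0.01, size=(n_vis, n_hid))

def cd1_update(v0, lr=0.1):
    # One step of contrastive divergence (CD-1): a single up-down-up
    # pass replaces the long equilibrium sampling that training the
    # original Boltzmann machine required.
    h0 = (rng.random(n_hid) < sigmoid(v0 @ W)).astype(float)  # up: sample hidden
    v1 = (rng.random(n_vis) < sigmoid(W @ h0)).astype(float)  # down: reconstruct
    h1 = sigmoid(v1 @ W)                                      # up: probabilities
    # Nudge weights toward the data correlations and away from the
    # model's reconstruction correlations.
    return lr * (np.outer(v0, h0) - np.outer(v1, h1))

data = np.array([1, 1, 0, 0, 1, 0], dtype=float)
for _ in range(100):
    W += cd1_update(data)
```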

Taylor recalls that Hopfield’s and Hinton’s perseverance through the so-called AI winters, such as the 1990s, when interest in and funding for research were sparse, was crucial for the advancement of AI technology from those early conceptual developments to the large language models in popular use today. “In the last few years, everybody has been touched by AI in some way,” says Taylor.

The Nobel committee cited the many ways that AI has enhanced scientific research, such as data processing that enabled the discovery of the Higgs particle, better detection of gravitational waves, and calculations of protein molecular structure.

Clara Wanjura at the Max Planck Institute for the Science of Light points out that physical models, which inspired the laureates’ work, are also the basis for the next frontier in AI: reducing energy consumption (see Physics Today, April 2024, page 28). “Physical neural networks based on optics or electronics will be needed in the future to make machine learning more sustainable,” says Wanjura (see Physics Today, October 2024, page 12).

Hopfield, 91, is known for his work in both physics and neuroscience. He completed a PhD in physics at Cornell University in 1958 and has held faculty positions at the University of California, Berkeley, and at Caltech.

Hinton, 76, completed a PhD in artificial intelligence at the University of Edinburgh in 1978 and has served as a faculty member at Carnegie Mellon University and the University of Toronto. From 2004 until 2013, he was the director of the Neural Computation and Adaptive Perception research program funded by the Canadian Institute for Advanced Research. He worked part-time at Google, where he became a vice president and engineering fellow, for 10 years starting in 2013.

Recently, Hinton has made public statements about his concerns for the future of AI, and he added another during the Nobel announcement press conference. “We have no experience of what it’s like to have things smarter than us,” he said.

Hinton called into the announcement from what he described as a cheap hotel in California. “I’m flabbergasted,” he said, noting that he would probably have to cancel an MRI he had scheduled for later that day.
