Not long after I started work at Physics Today, some of my new colleagues were having a conversation about tennis. They asked me if I ever played; I did not. “Oh, but you’re tall,” one of them said, “so that would give you a leg up.”
“I’m also completely uncoordinated,” I replied. “And that gives me a leg down.”
Although I’ve never been one for athletic pursuits of any kind, I enjoyed reading the Quick Study in our February issue, by Jonah Botvinick-Greenhouse and Troy Shinbrot, on the dynamics of juggling. Back in my student days, I spent enough time around hobbyist jugglers to marvel at their abilities, and the Quick Study helps to elucidate why they’re so impressive. Humans can’t normally throw accurately enough to lob a ball high in the air and have it land on a hand-sized target. And our reaction time, at 200–250 ms, is far too slow for us to track each descending ball in real time.
How do jugglers defy those apparent physiological limits? Although the answer isn’t completely understood, Botvinick-Greenhouse and Shinbrot suggest that it has something to do with our (that is, all humans’) ability to implicitly compute and predict classical trajectories. Watching a ball traverse an earlier part of its parabolic arc gives a juggler enough information to tell where it’s going to land, in plenty of time to catch it.
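The geometry behind that kind of prediction is easy to demonstrate. In the sketch below (all numbers are hypothetical, chosen only for illustration), a parabola fitted to observations from just the first third of a ball's flight pinpoints where the ball will land:

```python
import numpy as np

g = 9.81                     # gravitational acceleration, m/s^2
v0x, v0y = 0.5, 3.0          # hypothetical launch velocities, m/s
t_flight = 2 * v0y / g       # time aloft if launched and caught at the same height

# "Observe" the ball only during the first third of its flight.
t_obs = np.linspace(0.0, t_flight / 3, 10)
x_obs = v0x * t_obs
y_obs = v0y * t_obs - 0.5 * g * t_obs**2

# Fit a parabola y(x) to the early-arc observations, then solve y = 0
# to find where the ball returns to launch height.
a, b, c = np.polyfit(x_obs, y_obs, 2)
roots = np.roots([a, b, c]).real
x_land = max(roots)          # the nonzero root is the landing point

x_true = v0x * t_flight      # where the ball actually comes down
# x_land agrees with x_true to numerical precision: the early arc
# contains all the information needed to predict the landing spot.
print(f"predicted landing: {x_land:.3f} m, actual: {x_true:.3f} m")
```

Of course, a juggler does this implicitly and approximately, not by least-squares fitting; the point is only that the early arc fully determines the rest of the trajectory.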
On the other hand, the authors also note that some jugglers can juggle while blindfolded. So there’s clearly more to juggling skill than interpreting visual information.
It’s not just practiced athletes who can interact with the world on time scales shorter than 200–250 ms, and the physics of how we do so is not limited to simple classical mechanics. In 2018 I wrote a Search & Discovery story about an experiment on how we learn to manipulate complex, nonrigid objects. The researchers were motivated by a task that even klutzes like me can manage (usually) without difficulty: carrying a nearly full cup of coffee without spilling it. Like juggled balls, the sloshing coffee moves too fast for us to react to it in real time. But unlike with juggling, implicit prediction is of no use. The liquid effectively has infinitely many degrees of freedom, and not even a supercomputer can model its dynamics quickly and accurately enough.
The researchers—Dagmar Sternad and colleagues at Northeastern University—hypothesized that we can grapple with such unpredictable objects by unconsciously guiding them toward stable regions of their phase space, where trajectories with similar starting points draw closer together with time instead of drifting apart. To borrow the language of the butterfly effect: In an unstable region of phase space, a flap of a butterfly’s wings in Brazil might alter the course of a tornado in Texas; but in a stable region, it can’t.
It’s easy to see how regions of stability—if they exist, and if we can find them—can make a complex system easier to control. Perturbations such as stray currents of air have no lasting effect on the dynamics, so there’s no need to worry about them. But could coffee carriers the world over really be doing that kind of stability analysis whenever they pick up a cup, without ever having heard of dynamical stability or phase space?
The answer, it seems, is yes: The more Sternad’s test subjects interacted with a toy model of a coffee-filled cup, the more reliably they found the stable part of its phase space, and the more easily they kept it from spilling.
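The distinction between stable and unstable regions can be illustrated with a toy system. The sketch below is my own stand-in, not the cup-and-ball model Sternad's group actually used: it compares a frictionless pendulum swinging near its stable hanging position with one balanced near the unstable inverted position. The same tiny "butterfly" perturbation barely matters in the first case and is amplified enormously in the second:

```python
import numpy as np

def simulate(theta0, omega0, dt=0.001, steps=3000):
    """Integrate the pendulum equation theta'' = -sin(theta)
    (units chosen so g/L = 1) with semi-implicit Euler steps."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega -= np.sin(theta) * dt
        theta += omega * dt
    return theta

eps = 1e-4  # a tiny perturbation to the starting angle

# Stable region: small swings about the hanging position (theta = 0).
# Nearby trajectories stay nearby.
stable_sep = abs(simulate(0.1, 0.0) - simulate(0.1 + eps, 0.0))

# Unstable region: balanced near the inverted position (theta = pi).
# Nearby trajectories fly apart roughly exponentially.
unstable_sep = abs(simulate(np.pi - 0.1, 0.0) - simulate(np.pi - 0.1 + eps, 0.0))

print(f"separation after 3 s: stable {stable_sep:.2e}, unstable {unstable_sep:.2e}")
```

In the stable case the final separation remains on the order of the original perturbation; in the unstable case it grows by an order of magnitude or more over the same interval. A controller—human or otherwise—that keeps the system in the stable region never has to fight that amplification.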
Just last month, at the University of Chicago, I learned about yet another physics-related aspect of how we interact with the world. I attended the American Physical Society’s Conference for Undergraduate Women in Physics, whose organizers had invited me to talk to the students about my career; as a bonus, I got to hear from some of the talented women researchers from the Chicago area and beyond. One of the speakers was Stephanie Palmer, a condensed-matter physicist turned theoretical neuroscientist, who works on an idea that I found astonishing: Not only is your brain smart enough to predict the future, but so are your eyes. (If you have an hour to spare and want to learn about her work in more detail, here is a video of a talk similar to the one she gave us.)
It takes a good 50–100 ms—a significant fraction of our 200–250 ms reaction time—just for an optical signal to register on the retina and reach the brain. If the retina faithfully reported a pixel-by-pixel account of the light it received, the brain would see the world as it existed 50–100 ms ago. Fast-moving objects would appear several feet behind where they really are.
For us, that lag might mean the difference between catching a ball and flailing in futility at thin air. For our distant ancestors—and for other animal species—it’s the difference between eating and being eaten.
So the retinal neurons do something extremely clever: They pull together all the information they have about an object’s trajectory, figure out where it’s going to be 50–100 ms in the future, and send that information to the brain as the visual signal. There are dynamic optical illusions that let you see the effect of that process on your own vision; Palmer showed one in her talk, and you can play with it yourself here.
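For straight-line motion, that compensation amounts to simple extrapolation. Here's a toy calculation—not Palmer's model; the 75 ms latency and 10 m/s speed are made-up numbers within the ranges quoted above—showing that a velocity estimate built from two delayed samples cancels the lag exactly:

```python
# All numbers are hypothetical, chosen to sit inside the ranges in the text.
latency = 0.075   # signal delay, s (mid-range of the 50-100 ms figure)
dt = 0.005        # interval between successive position samples, s
speed = 10.0      # object speed, m/s
t = 0.5           # the present moment, s

delayed_pos = speed * (t - latency)       # what an uncorrected signal reports
prev_pos = speed * (t - latency - dt)     # the sample just before that
v_est = (delayed_pos - prev_pos) / dt     # finite-difference velocity estimate
predicted_pos = delayed_pos + v_est * latency  # extrapolate across the lag

true_pos = speed * t
print(f"uncorrected lag: {true_pos - delayed_pos:.2f} m")     # 0.75 m behind
print(f"prediction error: {true_pos - predicted_pos:.2f} m")  # 0.00 m
```

At 10 m/s, the uncorrected signal trails reality by 0.75 m—the "several feet" of the earlier estimate—while the extrapolated signal lands exactly on target. For curved or erratic paths the cancellation is only approximate, which is where the harder prediction problems begin.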
Prediction is easy for an object moving in a straight line or a smooth, circular path. But real-world objects, especially potential predators or prey, aren’t always so accommodating. Still, the retinal cells do the best they can with the information they have—and they do a pretty good job. Palmer’s research involves making sense of the different prediction problems that the retina needs to solve and how they differ for different species. Some animals, for example, are more concerned with predicting positions, while others are biased toward predicting velocities.
Among humans, predictive ability is mostly independent of athletic ability; the difference between elite athletes and the rest of us is what they do with the visual information once they get it. Coordinated and uncoordinated alike, we’re all better at solving physics problems in our heads than we realize.