Rhythmic gymnasts train for years to create intricate swirls like the ones in figure 1 without tangling their long ribbons. But most able-bodied adults handle objects of similar complexity every day, often without even thinking. Walking with a cup of coffee without spilling it may not seem like such a daunting task, but just how we do it is far from understood.
With average reaction times to visual stimuli of around a quarter of a second, our nervous systems are far too slow to notice and compensate for each individual slosh of the hot liquid. Nor is it possible to perceive the coffee’s internal dynamics to anticipate the sloshes before they happen: A cup of liquid has so many degrees of freedom that not even a supercomputer can model them all in real time.
The science of human movement spans several disciplines—including psychology, biology, neuroscience, computer science, and engineering—and its practitioners take many different approaches to understanding our extraordinary dexterity in handling objects. One idea that’s been pursued by a few groups is that we overcome our nervous systems’ limitations by learning tricks that exploit the dynamical properties of our bodies and the world around us. In the phase space of a dynamical system—the multidimensional space that describes the position and velocity of each degree of freedom—regions can be classified as either dynamically stable or dynamically unstable, according to whether perturbations decay away or are amplified over time. As we gain life experience, the theory goes, we unconsciously learn to guide objects into regions of stability to make them easier to control.
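The distinction between stable and unstable regions can be made concrete with a toy calculation. The sketch below (a minimal illustration, not the researchers' analysis) evolves two nearby states under the simplest possible dynamics, dx/dt = λx, and compares their separation after a fixed time: for λ < 0 the perturbation decays, for λ > 0 it is amplified.

```python
import numpy as np

def gap_growth(lam, dt=0.01, steps=500):
    """Evolve two nearby states under dx/dt = lam * x and
    return the ratio of final to initial separation."""
    x_a, x_b = 1.0, 1.001          # two nearby initial conditions
    for _ in range(steps):
        x_a += dt * lam * x_a      # simple Euler step
        x_b += dt * lam * x_b
    return abs(x_b - x_a) / 0.001

# In a stable region perturbations decay; in an unstable one they grow.
print(gap_growth(-1.0))  # well below 1: the gap shrinks
print(gap_growth(+1.0))  # well above 1: the gap is amplified
```

A real limb-plus-object system has many coupled degrees of freedom, but the classification works the same way: one asks whether small deviations in the full phase space shrink or grow.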
That idea has met with some experimental confirmation, but so far only for periodic motions such as walking or bouncing a ball. Now Dagmar Sternad of Northeastern University in Boston, her postdoc Salah Bazzi, and their colleagues Neville Hogan and Julia Ebert have applied a different mathematical framework that’s applicable to transient motions such as moving an object from one point to another. In an experiment designed to simulate a simplified cup of sloshing coffee, they found that volunteer test subjects did indeed learn to deal with perturbations by maneuvering the system into a stable region of phase space.1
Practice makes perfect
The human body itself, of course, is a physical object, and the laws of physics influence all our movements. When you swing your arm at your side, its natural oscillation frequency is a function of its length and mass distribution, just as for any other pendulum. Physics also constrains our active movements: When you extend an arm while standing up, you need to subtly adjust your whole posture to keep from falling over.
That adjustment probably isn’t something you think about or even notice. The higher brain has better things to do, and it frees up resources for conscious thought by relegating certain motor processes to run in the background. Those processes can be quite complicated: Walking, for instance, involves a highly intricate sequence of coordinated movements, but most adults have no trouble walking while carrying on a conversation. “In general,” says Sternad, “it’s a challenging open problem to understand the interaction of the computational and physical levels of the brain and body.”
As anyone who plays a sport or musical instrument has likely experienced, one gets better at a physical task by practicing it. That improvement is often attributed to muscle memory: the recording of a sequence of movements to recall and rerun later. Clearly, it’s not really that simple, because the exact movements can vary, for example, based on sensory input. A significant component of skill development seems to be learning to take advantage of a system’s physical properties.
One of the many tasks Sternad’s group has previously studied is a version of the game of bar skittles, in which players try to knock over a target pin by throwing a ball that’s tethered to a post.2 The ball’s trajectory, and thus the player’s success, is determined by the position and velocity at the instant the ball is released; a continuum of positions and velocities correspond to successful throws. As test subjects practice, they get better at hitting the target, but not because they can precisely replicate a single position–velocity combination. Rather, the experiments revealed that they develop a throwing technique that gives them a tolerance for error: If the ball is released slightly early or late, it still hits the target.
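The tethered-ball dynamics of skittles are more involved, but the underlying point, that a whole continuum of release conditions succeeds, already shows up for a free projectile. In this hypothetical sketch (standard range formula, not the skittles model), many distinct speed–angle pairs land within tolerance of the target, so a thrower needs only to reach that solution set somewhere, not reproduce one exact state.

```python
import numpy as np

def hits_target(speed, angle, target=10.0, tol=0.25, g=9.81):
    """Does a projectile launched from the ground land near the target distance?"""
    rng = speed**2 * np.sin(2 * angle) / g   # standard range formula
    return abs(rng - target) < tol

# Scan release speeds (m/s) and angles (rad): a whole curve of
# combinations succeeds, not a single point in the release space.
solutions = [(v, a) for v in np.linspace(8, 14, 61)
                    for a in np.linspace(0.2, 1.3, 56)
                    if hits_target(v, a)]
print(len(solutions))   # many distinct successful (speed, angle) pairs
```

Skilled throwers, the experiments suggest, steer their release variability along such a solution set, where small timing errors don't matter.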
In another prior study, subjects were instructed to bounce a ball rhythmically on a racket and keep the bounces as close as possible to a constant height.3 In the steady-state solution, the racket moves up and down in an approximately sinusoidal trajectory with the same period as the ball’s flight. From the perspective of dynamic stability, it’s ideal to hit the ball on the upward-decelerating portion of the racket trajectory: If one swing of the racket is inadvertently a little too strong, the ball flies a little too high and takes longer than usual to return to the racket. On the next swing, therefore, the racket strikes the ball slightly later—and with less speed—than usual, and the bounces gradually return to the desired height. Conversely, hitting the ball on the upward-accelerating portion of the swing means that perturbations are amplified over time.
Subjects practicing ball bouncing do tend to home in on the stable decelerating regime, the researchers found, but it’s not at all obvious that they would. An alternative theory of human movement, which postulates that we seek to minimize the energy we expend, predicts a different behavior: striking the ball at the moment of greatest upward racket velocity. Indeed, novice ball bouncers often take that approach. But because the solution is not dynamically stable, they need to constantly monitor the bounce height and readjust their swing at every step. For more practiced ball bouncers, reducing mental effort by exploiting dynamic stability wins out over reducing physical effort by striking the ball efficiently.
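The self-correcting behavior of the stable regime can be seen in a textbook simplification of ball bouncing, the high-bounce "Holmes map" for a ball on a vibrating surface. This is not the authors' racket model, and the parameters below are chosen (by the stability condition of the map itself) to sit inside the stable window; from there, a deliberately perturbed bounce relaxes back to the periodic solution without any feedback control.

```python
import numpy as np

# High-bounce approximation of a ball on a vibrating racket (Holmes map):
#   phase_{n+1} = phase_n + v_n
#   v_{n+1}     = alpha * v_n + gamma * cos(phase_{n+1})
# alpha: restitution coefficient; gamma: scaled racket forcing.
alpha = 0.5
phase_star = np.pi / 6                     # chosen fixed-point impact phase
v_star = 2 * np.pi                         # one forcing period per bounce
gamma = (1 - alpha) * v_star / np.cos(phase_star)  # makes (phase_star, v_star) a fixed point

phase, v = phase_star + 0.1, v_star + 0.1  # perturbed start
for _ in range(60):
    phase = phase + v
    v = alpha * v + gamma * np.cos(phase)

# At this impact phase the map is contracting, so the perturbation dies out
# on its own -- no monitoring or readjustment required.
print(abs(v - v_star))                     # far smaller than the initial 0.1
```

Shifting the impact phase outside the stable window makes the same perturbation grow from bounce to bounce, which is the situation the novice, energy-minimizing strategy leaves the player in.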
Stability analysis in the ball-bouncing experiment involves first finding a periodic trajectory—the ball bouncing repeatedly to the desired height—and then looking at the behavior of perturbations about that trajectory. Several methods exist for evaluating stability with respect to a periodic orbit or fixed point in phase space. Most human movements, however, are not periodic.
To extend stability analysis to transient movements, Sternad and colleagues turned to contraction theory, laid out 20 years ago by Winfried Lohmiller and Jean-Jacques Slotine at MIT.4 Rather than looking at perturbations about a reference trajectory, contraction theory considers pairs of closely spaced trajectories in phase space and how the distance between them evolves in time. A region of phase space is deemed stable if the trajectories in it converge exponentially. Although the theory is similar in essence to the widely used Lyapunov theory, it differs in one important respect. Lyapunov exponents are defined in the limit of infinitely long time series. Contraction theory, on the other hand, involves only calculations on finite time scales—a considerable advantage in the analysis of human movements.
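The finite-time flavor of contraction analysis can be illustrated with a toy system (again a minimal sketch, not the study's model). For dx/dt = −x + cos t, the separation between any two trajectories obeys d′ = −d exactly, so it contracts at unit rate; integrating over a short window recovers that rate with no infinite-time limit involved.

```python
import numpy as np

# Two nearby trajectories of dx/dt = -x + cos(t).  Their separation d(t)
# obeys d' = -d exactly, so it should contract at rate 1.
dt, steps = 0.001, 2000          # a 2-second window: finite time only
x_a, x_b = 0.0, 0.1
for i in range(steps):
    t = i * dt
    x_a += dt * (-x_a + np.cos(t))   # Euler steps for both trajectories
    x_b += dt * (-x_b + np.cos(t))

d0, d = 0.1, abs(x_b - x_a)
rate = -np.log(d / d0) / (steps * dt)    # finite-window contraction rate
print(rate)                               # close to 1
```

In a nonlinear system the contraction rate generally varies across phase space, which is exactly what lets the analysis carve out stable and unstable regions for a transient movement.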
As an experimental test system, Sternad and colleagues chose a shallow bowl containing a ball, whose motion resembles (but is much simpler than) the sloshing of coffee in a cup. Human volunteers were tasked with moving the bowl along a straight-line track as quickly as possible without losing the ball. Rather than a physical bowl and ball, the researchers used a virtual environment, shown in figure 2, so they could better record and manipulate the system’s dynamics. A robotic arm controlled the motion of the bowl, whose position was projected on a screen. The arm was programmed to provide haptic feedback: Subjects felt the same forces as they’d experience if they were actually moving the bowl with the rolling ball inside. In some trials, the researchers added a perturbative force: Halfway along the track was a bump, visible on the screen, that jerked the bowl either forward or backward.
The ball-in-bowl system is fully characterized by just a few variables—the position and velocity for each of the two degrees of freedom—so its phase space is relatively easy to grapple with. Contraction analysis revealed a major region of stability, in which the ball is positioned on the back half of the bowl and rolling forward. With practice, subjects improved both their speed and success rate at completing the task. And they did it by learning to manipulate the system into the stable region of phase space right before the bowl hit the bump.
Although more experiments are needed to see just how broadly applicable the idea is to other transient movements, the results could have significant implications for robotics. With the right design and programming, robots can mimic many of the movements of humans and other animals (see the article by Simon Sponberg, Physics Today, September 2017, page 34). But they still fall far short of humans in their ability to pick up and use tools, open doors, and otherwise meaningfully interact with the physical world—even though their reaction times are orders of magnitude faster than ours. Understanding the secrets of human movements could be the key to more dexterous robots.