Ever wonder how animated films such as The Incredibles get hair, clothing, water, plants, and other details to look so realistic? Or how, like the lion in The Chronicles of Narnia, animated characters are worked into live-action films? If not, the animators would most likely be pleased, since they don't want special effects to distract from the story. Behind the scenes, though, is a lot of artistry, computation, and physics.

Traditionally, animation was hand drawn. But among other skills, that requires “some of the same magical eye that the Renaissance painters had, to give the impression that it's realistically illuminated,” says Paul Debevec, a computer graphics researcher at the University of Southern California. Over the past decade or so, physically based simulations have been used increasingly to achieve more realistic lighting and motion. In films, though, physics is a slave to expediency and art: Simplifications and shortcuts make the simulations faster and cheaper, and what the director wants trumps physical accuracy.

Other applications in which physically based animation plays a role include video games, which have the added challenge of requiring algorithms to run in real time; engineering tests of bridges, aircraft, cars, and the like; videos for training surgeons; and courtroom evidence. “Attorneys might mock up computer simulations showing what happened in an accident,” says James O'Brien, who simulates such things as explosions, fractures, and cloth in motion at the University of California, Berkeley. But relying on computer simulations could be dangerous, he adds. “They can be tweaked however you like. And when you see computer graphics, you believe it.”

In the movie 300, which came out earlier this year, several ships collide. Hulls splinter, masts break, sails tear, and the ships sink. The scene was simulated, although most of the film was not. Stephan Trojansky, who worked on 300 as visual effects supervisor for the Munich-based company ScanlineVFX, says the fluid simulation encompassed about “90 000 square meters of ocean with a resolution of approximately 8000 by 8000 by 2000 voxels—128 billion simulation elements. We probably created the highest fluid simulation detail ever used in visual effects.”

“For the fracturing and splintering of the ships,” he adds, “we developed splintering technology. You would usually use rigid-body systems, but wood doesn't break like a stone tower. It bends. To get realistic behavior, you have to take into account how the ship is nailed together. The physics involved is mainly equations that define where the material will break.”

Animations of both fluids and solids—and of facial expressions, clothing, and deformable objects, among other things—use various computational methods derived from discretizing continuous equations, Navier–Stokes in the case of fluids. The commonly used methods break the object being simulated into discrete elements (finite element method), fixed cells in space (finite difference method), or sample points (particle method). “The computational cost goes up with the number of grid cells or particles, but so does the realism,” says O'Brien. “The tradeoff between how good something looks versus cost starts to favor the particle method when you reduce the number to make it affordable, whereas the finite element and finite difference methods are favored where you can afford a more expensive computation.”
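The grid-based approach O'Brien mentions can be illustrated with a toy problem. The sketch below, a minimal and purely illustrative example, applies the finite difference idea (a continuous field sampled on fixed cells in space) to one-dimensional diffusion; production fluid solvers apply the same discretization strategy to the full Navier–Stokes equations in three dimensions.

```python
import numpy as np

# Minimal 1D diffusion solved with finite differences: the continuous
# field u(x) is sampled on fixed grid cells, and the spatial derivative
# is approximated from neighboring cell values.
def diffuse(u, nu=0.1, dt=0.1, dx=1.0, steps=100):
    u = u.copy()
    for _ in range(steps):
        # discrete second spatial derivative on interior cells
        lap = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
        u[1:-1] += nu * dt * lap  # explicit Euler time step
    return u

u0 = np.zeros(50)
u0[25] = 1.0          # a spike of "dye" in the middle of the grid
u = diffuse(u0)
# the spike spreads out and its peak drops, as diffusion demands
```

Halving `dx` (more cells) sharpens the result but raises the cost, which is exactly the resolution-versus-expense tradeoff O'Brien describes.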

Mark Sagar of WETA Digital, a visual effects company in Wellington, New Zealand, specializes in simulating faces. One technique is motion capture, in which markers are placed on an actor's face, their positions are noted for different expressions, and the positions are then mapped onto an animated character. For example, says Sagar, “for King Kong we mapped the actor's expressions onto a gorilla.”

Simulating the face “can be treated as a kinematics or a dynamics problem,” Sagar says. “You interpret movement in terms of muscle—we approximate the detailed mechanical properties of live tissue and its layers and layers. You have motion data and start working out what the driving forces are. The equations are essentially F = ma.” Modeling realistic stretching of the skin requires a lot of finite elements—each a small patch of tissue—or else nodes connected by springs, he adds. “You compute and solve for forces at each point and then sum until you get a balanced equation. It's not sophisticated from an engineering standpoint but produces high-quality results.”
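The nodes-connected-by-springs model Sagar describes can be sketched in a few lines. This is a toy one-dimensional chain under assumed spring constants, not a tissue model: each node obeys F = ma, with Hooke's-law forces from its neighbors plus damping, integrated until the forces balance.

```python
import numpy as np

# Toy mass-spring chain: each node feels spring forces from its
# neighbors and a damping force, and is advanced with F = m a.
def step(x, v, rest=1.0, k=50.0, damping=0.5, m=1.0, dt=0.01):
    f = np.zeros_like(x)
    stretch = np.diff(x) - rest        # Hooke's law per spring
    f[:-1] += k * stretch
    f[1:] -= k * stretch
    f -= damping * v                   # simple velocity damping
    f[0] = 0.0                         # first node pinned in place
    v = v + dt * f / m                 # semi-implicit Euler
    x = x + dt * v
    return x, v

x = np.array([0.0, 1.5, 3.0])          # chain stretched past rest length
v = np.zeros(3)
for _ in range(2000):
    x, v = step(x, v)
# the chain relaxes toward its rest spacing, roughly x = [0, 1, 2]
```

Summing forces at each node and iterating until they balance is, in miniature, the "compute and solve for forces at each point and then sum" procedure Sagar outlines.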

Realistic motion is often too complicated for animators to do by hand, says Michael Kass, a researcher at Pixar Animation Studios. “The results can be awful and very expensive.” He points to the original 1995 Toy Story and notes that “if you see a wrinkle in clothing, it's because an animator decided to put in a wrinkle at that point in time. After that we [at Pixar] decided to do a short film to try out a physically based clothing simulation.”

The movement of clothing is computed as a solution to partial differential equations, says Kass. “You start with individual threads. What are their basic properties? Then you consider the bulk properties when [they're] woven. The main physical effects are stretching, shearing, and bending. To a certain degree, you can take real cloth and get actual measurements.” Clothing isn't completely solved, he adds, “but it's now part of a standard bag of tricks. Our simulations have become accurate enough that we can design garments with commercially available pattern-making software and then have them move largely as a tailor would expect in our virtual simulations.”
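A standard way to capture the three effects Kass names in a mass-spring cloth model (one common technique, not necessarily Pixar's) is to connect a grid of particles with three kinds of springs: stretch springs between adjacent neighbors, shear springs across cell diagonals, and bend springs spanning every other particle. A sketch of that connectivity:

```python
# Build the three spring families for an n x n particle grid:
# stretch (adjacent neighbors), shear (diagonals), bend (skip-one).
def cloth_springs(n):
    idx = lambda i, j: i * n + j
    stretch, shear, bend = [], [], []
    for i in range(n):
        for j in range(n):
            if j + 1 < n: stretch.append((idx(i, j), idx(i, j + 1)))
            if i + 1 < n: stretch.append((idx(i, j), idx(i + 1, j)))
            if i + 1 < n and j + 1 < n:
                shear.append((idx(i, j), idx(i + 1, j + 1)))
                shear.append((idx(i + 1, j), idx(i, j + 1)))
            if j + 2 < n: bend.append((idx(i, j), idx(i, j + 2)))
            if i + 2 < n: bend.append((idx(i, j), idx(i + 2, j)))
    return stretch, shear, bend

s, sh, b = cloth_springs(4)
# a 4x4 grid yields 24 stretch, 18 shear, and 16 bend springs
```

Each family gets its own stiffness, which is where the measurements of real cloth that Kass mentions enter: stiff stretch springs, softer shear, and softest bend reproduce how most woven fabrics drape.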

Hair, Kass adds, “is in many ways easier than clothing because it's like individual threads. The difference is that clothing doesn't move like clothing unless the threads interact. In a real head of hair, the threads do interact, but you can get convincing motion without taking that into account.”

Illumination is another area in which physics plays a key role in animation. For a long time, says Cornell University's Steve Marschner, “rendering skin was hard. It would look waxy or too smooth.” The fix, he says, was to take into account that skin is translucent, which he and colleagues “figured out from looking at a different problem—rendering marble.”

As with simulations of fluids, cloth, rigid bodies, and so on, incorporating translucency to model skin involves old physics. “In some cases we have to create the models from the ground up. But sometimes we find somebody in another branch of physics who has solved a similar problem and we can leverage what they've done.” For skin translucency, “we were able to adapt a solution from medical physics, from a calculation of radiation distributions inside the skin that was used for laser therapy in skin diseases.”

Physically based audio simulation is an area that is heating up but so far is used more in video games than in the movie industry, says Nicolas Tsingos of INRIA, France's national institute for computer science and control near Nice. The sounds of objects vibrating or solids coming into contact with each other are easier to simulate than those of fluids, he adds. “If it's a fluid, you solve the Navier–Stokes equations and use the result to modulate input noise signals to get the final acoustic response. Sound and visuals are simulated hand in hand so you get a compelling cross-modal experience with synchronization—you get the boom at the same time as you see the explosion,” says Tsingos. “Computing physically based simulations of sound is a really good alternative to using prerecorded sound, but, especially for fluids, there is a long way to go to get the same degree of realism that people in computer graphics get for visuals.”
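The modulation step Tsingos mentions can be demonstrated in miniature. In this toy sketch a made-up Gaussian envelope stands in for an energy signal extracted from a fluid simulation; scaling a noise signal by it turns static into a "whoosh" whose loudness tracks the simulated motion.

```python
import numpy as np

# Amplitude-modulate noise with a stand-in "fluid energy" envelope.
rate = 8000                                  # samples per second
t = np.linspace(0.0, 1.0, rate)
energy = np.exp(-((t - 0.5) ** 2) / 0.02)    # peaks mid-clip, fades at ends
rng = np.random.default_rng(0)
noise = rng.standard_normal(rate)
sound = energy * noise                       # loud where the "fluid" is energetic
```

In a real pipeline the envelope would come from the solver itself (for example, from turbulence energy near the free surface), which is what keeps the sound synchronized with the visuals.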

“One of the coolest things you see in a movie is when there is some sort of otherworldly beast or digital character that is sitting in the scene, roaming around, and it looks like it was really there,” says Debevec. “The only way you can do that is by understanding the physics of light transport, respecting how light works in the real world, and then using computers to try to make up the difference from what was really shot.”

For example, he says, in Narnia “they filmed a lot of it with the children dressed up in their knight costumes and left an empty space for the lion.” Then, to get the digital lion just right, “Rhythm and Hues Studios used radiometrically calibrated cameras to measure the color and intensity of illumination from every direction in the scene.” The measurements, he adds, “are fed into algorithms that were originally developed in the physics community and have been adapted by the computer graphics community as a realistic way to simulate the way light bounces around in the scene. They also use the measurements to change the illumination in the scene that was really shot, so that shadows will appear where the character is blocking light.”

Similar methods are used for creating digital doubles—virtual stunt characters that fill in for live actors. For that, says Debevec, “film studios sometimes bring actors here to our institute, where we've built devices to measure how a person or object, or whatever you stick in [the device], reflects light coming from every possible direction” (see cover photo). The resulting data set, he says, can be used to simulate a virtual version of the person. “There are about 40 shots of a digital Alfred Molina playing Dr. Otto Octavius in Spider-Man 2. It looks like him, but it's an animated character. The reflection from the skin looks realistic, with its texture, translucency, and shine, since it's all based on measurements of the real actor.”

“We rarely simulate more than two indirect bounces of illumination, whereas in reality light just keeps bouncing around,” continues Debevec. “With no bounces, things look way too spartan and the shadows are too sharp. One bounce fills in perhaps three-quarters of the missing light, and with two bounces you're usually past 95%. That's good enough.” Another shortcut, he adds, is to focus just on the light rays that will end up at the eye. “We try to figure out the cheats you can make that give you images that look right.”
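A back-of-envelope check shows why truncating after two bounces works. If each surface reflects a fraction rho of the light hitting it (rho = 0.25 here is an assumed, illustrative albedo, not a figure from the article), the indirect light forms a geometric series rho + rho² + rho³ + …, and keeping k bounces captures 1 − rho^k of that total:

```python
# Fraction of the total indirect light captured by the first few
# bounces, for surfaces of reflectance (albedo) rho.
def indirect_fraction(rho, bounces):
    total = rho / (1 - rho)                          # infinite series sum
    partial = sum(rho ** k for k in range(1, bounces + 1))
    return partial / total                           # equals 1 - rho**bounces

rho = 0.25
one = indirect_fraction(rho, 1)    # about 0.75: "three-quarters"
two = indirect_fraction(rho, 2)    # about 0.94: approaching the quoted 95%
```

For darker scenes (smaller rho) the series converges even faster, which is why two bounces are so often "good enough."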

“There is a long tradition of cheating as much as possible,” says Marschner, “because setting up an exact simulation is either not possible or too expensive.” For example, he adds, to get illumination to look right, a light source might be placed in some nonphysical position, like inside a character's head. Adds Kass, “You can run a simulation backwards if you know how it should end up. Or you can add semi-invisible forces—pins or virtual glue to change the coefficient of friction locally. Animated characters don't object when you stick pins in them.”

“We use physics to get realism,” says Trojansky. “But I am a physics cheater. I use it as a base, but I am interested in the visual effect.” For fluid simulations, cheating might mean ignoring the compressibility or surface tension of the fluid, computing only surface behavior, or setting unrealistic boundary conditions to get the desired visual effect. Trojansky adds, “The Navier–Stokes equations are basic. They describe motion in our world, and there is no way to get around them. The question is how to solve and convert them into code that can create photorealistic results. If BMW does a crash-test simulation, they want an accurate simulation that gives real behavior, for safety. In films, we want to satisfy the director. So we write code that only fulfills the visual aspects and looks believable.”

Physically based animation is increasingly used in both live-action and animated films. The ocean and boats in the scene at left were simulated and then combined with a live-action shoot of people standing on a rock. Based on Frank Miller's graphic novel 300, the scene from the Warner Brothers film of the same name depicts part of a Persian fleet sinking in a storm off the coast of Greece. The realistic appearance of fur, water, and many other details in the animated film Ratatouille (top), released by Pixar Animation Studios earlier this year, were likewise achieved with physically based simulations.

Warner Brothers

Digital doubles: This simulated image was created from measurements of the light reflected from the actual woman as she stood on a rotating stage with light shining on her from 6666 LEDs dotting a geodesic dome (see cover). The subject's position and lighting can be matched to a new digital environment. The technique is used to create digital doubles for stunt scenes in live-action films.

Paul Debevec, USC ICT