Wanda Díaz-Merced listens. “When I get the data, I convert it to sounds. I can listen for harmonics, melodies, relative high- and low-frequency ranges,” she says. With NASA’s Swift satellite, for example, Díaz-Merced “was able to hear [previously overlooked] very low frequencies from gamma-ray bursts. I had been listening to the time series and said to the physicists in charge, ‘Let’s listen to the power spectra.’ If I hear descending tones, it could tell me that the spectral index [a measure of the dependence of radiative flux density on frequency] is changing—if it gets more negative, the plasma is getting more turbulent.”
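
Her recipe is easy to sketch in code. The Python below is an illustration only, not xSonify itself: it steps through a power spectrum from low to high frequency, letting each bin's log-power set the pitch of a short tone, so a power law P(f) ∝ f^α with a more negative index α descends through more octaves.

```python
# Illustrative sketch only, not xSonify itself: play a power spectrum
# as a sequence of short tones, one per frequency bin, with pitch set
# by the bin's log-power relative to the first bin.
import numpy as np
from scipy.io import wavfile

RATE = 44100          # audio samples per second
TONE = 0.05           # seconds of audio per spectral bin

def spectrum_to_audio(power, ftop=2000.0):
    """Pitch falls one octave per decade of power below the first bin."""
    rel = np.log10(np.maximum(power, 1e-30) / power[0])
    pitches = ftop * 2.0 ** rel
    t = np.arange(int(RATE * TONE)) / RATE
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in pitches])

# Two power-law spectra P(f) ~ f**alpha: the steeper index descends
# through more octaves, which is what a listener would notice.
freqs = np.linspace(1.0, 100.0, 60)
for alpha, fname in [(-1.0, "shallow.wav"), (-2.5, "steep.wav")]:
    audio = spectrum_to_audio(freqs ** alpha)
    wavfile.write(fname, RATE, (0.5 * 32767 * audio).astype(np.int16))
```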

Díaz-Merced lost her sight to complications of juvenile diabetes. At the time, she was an undergraduate studying astronomy in her native Puerto Rico. Eventually she discovered Radio JOVE, a NASA outreach program through which people build or remotely use radio telescopes and share observations via the internet. “I began listening to their teleconferences,” she says. “It was the only thing I could do to keep in contact with science. I immediately bought a wee radio telescope. You have no idea how much hope it brought me.”

Today, Díaz-Merced is wrapping up her PhD at the University of Glasgow. She works on xSonify, software that plots one-dimensional data and converts it to sound, and she studies perceptions of audio displays to test how sound can be useful to sighted scientists. Katrien Kolenberg of the Harvard–Smithsonian Center for Astrophysics has used Díaz-Merced’s sonification algorithms with her own asteroseismic data. “Sound really helped me to decide whether a signal was real or just noise when I was lost with the visual data,” says Kolenberg. With databases of millions of stars or other objects, she adds, “it may be a time saver to use sound in addition to visual data to classify objects.”

Using sound for data analysis is catching on, and among those working in the field—scientists, psychologists, musicians, innovators for the visually impaired, and others—there’s a feeling of untapped potential. (Links with more information and examples of sonification are available with the online version of this story.) Data parameters can be mapped to pitch, timbre, tempo, loudness, stereo panning, reverberation, and recorded sounds of musical instruments, among other things. Mark Ballora, a professor of music technology at the Pennsylvania State University, says, “Ears are really good at hearing dynamic changes. They can detect small changes in pitch. No one has designed that killer app that makes people say, ‘I must have sound as part of my tool kit.’ But I feel it must be out there.”
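
As a hedged illustration of how such a mapping might be wired up (the sine tones, pitch range, and pan rule below are arbitrary choices, not anyone's published method), one data series can drive pitch while a second drives loudness and the sample index sweeps the sound across the stereo field:

```python
# Minimal parameter-mapping sketch: values -> pitch, weights -> loudness,
# sample index -> stereo pan. All mapping choices here are illustrative.
import numpy as np
from scipy.io import wavfile

RATE = 44100          # audio samples per second
NOTE = 0.1            # seconds per data point

def sonify(values, weights, fmin=220.0, fmax=880.0):
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    v = (values - values.min()) / (np.ptp(values) + 1e-12)     # 0..1
    w = (weights - weights.min()) / (np.ptp(weights) + 1e-12)  # 0..1
    t = np.arange(int(RATE * NOTE)) / RATE
    left, right, n = [], [], len(values)
    for i in range(n):
        tone = np.sin(2*np.pi*(fmin + v[i]*(fmax - fmin))*t)*(0.2 + 0.8*w[i])
        pan = i / max(n - 1, 1)          # 0 = hard left, 1 = hard right
        left.append(tone * (1 - pan))
        right.append(tone * pan)
    stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
    return (0.5 * 32767 * stereo).astype(np.int16)

wavfile.write("mapped.wav", RATE,
              sonify(np.sin(np.linspace(0, 6, 80)), np.linspace(0, 1, 80)))
```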

“The best pattern recognition system that we know of is our auditory system,” says Bruce Walker, a professor in the schools of psychology and interactive computing whose group at Georgia Tech is called the Sonification Lab. “We know that with music, we can convey melody, tension, expectancy. There is a lot of similarity between trends and melodies. We extract a huge amount of information, so there are lots of reasons to expect sound to be a good medium.” Moreover, he says, “There is visual data that is too noisy, but when you listen, something emerges.”

Donald Gurnett of the University of Iowa has sonified data from spacecraft for decades. “When Voyager 1 flew by [Jupiter’s moon] Io in 1979,” he says, “we detected whistlers”—low-frequency radio waves. “That was first detected by hearing. Your ears are amazing at picking out fine signals. In frequency–time spectra, you can choose the resolution when you process the data. If you choose the wrong resolution, you may not detect anything. You have to match what you are processing to the time resolution. Your ear does that automatically.”
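
Gurnett's point about resolution is easy to demonstrate. In this sketch, which uses synthetic data rather than Voyager measurements, the spectrogram window length sets the frequency and time resolutions in opposition; a chirp that stands out at one setting can smear away at another:

```python
# The window length of a spectrogram trades frequency resolution (df)
# against time resolution (dt). Synthetic data, not Voyager's.
import numpy as np
from scipy.signal import spectrogram

fs = 8000.0                                   # samples per second
t = np.arange(0, 2.0, 1 / fs)
# A whistler-like falling chirp (2000 Hz down to about 800 Hz) in noise
x = np.sin(2 * np.pi * (2000 * t - 300 * t**2)) + 3 * np.random.randn(t.size)

for nperseg in (64, 512, 4096):               # short, medium, long windows
    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    print(f"window = {nperseg:4d} samples:  "
          f"df = {f[1] - f[0]:6.2f} Hz,  dt = {tt[1] - tt[0]:.4f} s")
```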

Sound can be useful for keeping tabs on multiple processes. For example, Walker says his group has designed an auditory process monitor that allows a pilot to remotely control several drones at once. “If you hear anything go wrong, you switch that one to the front” of your computer screen. Surgery is another example. Multiple sensors may simultaneously measure blood pressure, blood oxygen, temperature, heartbeat, and other parameters. When something needs attention, an alarm sounds. The doctor may not be able to tell what needs attention, but he or she would know to check immediately for problems. Indeed, medicine was an early adopter—consider the stethoscope, invented in 1819.
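
A toy version of such a monitor might look like the following; the channel names, pitches, and alarm ranges are invented for illustration and are not Walker's actual design:

```python
# Toy auditory monitor, invented for illustration (not Walker's system):
# each channel holds a steady pitch; an out-of-range reading makes that
# channel's tone warble at 8 Hz so the ear can tell which one to check.
import numpy as np
from scipy.io import wavfile

RATE, STEP = 44100, 0.5                  # seconds of audio per reading
PITCHES = {"pressure": 330.0, "oxygen": 440.0, "temperature": 550.0}

def render(readings, limits):
    t = np.arange(int(RATE * STEP)) / RATE
    out = np.zeros_like(t)
    for name, value in readings.items():
        tone = np.sin(2 * np.pi * PITCHES[name] * t)
        lo, hi = limits[name]
        if not lo <= value <= hi:        # alarm: gate the tone on and off
            tone *= 0.5 * (1 + np.sign(np.sin(2 * np.pi * 8 * t)))
        out += tone / len(readings)
    return (0.8 * 32767 * out).astype(np.int16)

audio = render({"pressure": 95, "oxygen": 0.97, "temperature": 41.5},
               {"pressure": (80, 120), "oxygen": (0.9, 1.0),
                "temperature": (36.0, 38.0)})
wavfile.write("monitor.wav", RATE, audio)
```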

An area in which sound has made strong inroads is the long-duration monitoring of incoming data, which is already commonplace for such things as stock prices. Ballora has been hired by companies to use sound to monitor web traffic for internet-security purposes and to look for correlations between stock prices and the appearance of certain words on Twitter.

A pure science example is the Laser Interferometer Gravitational-Wave Observatory (LIGO). Says Syracuse University’s Peter Saulson, “We need fancy instruments to turn ultra-small events into something noticeable. But once we have done that, it’s trivial to sonify. We hook the data stream to headphones.” Gravitational waves have yet to be detected, but LIGO scientists have learned from simulations to diagnose instrument problems and to recognize the sounds of various astrophysical events. A couple of years ago, part of the collaboration “salted our data with a fake signal to test that the collaboration hadn’t lost its edge in data analysis,” says Saulson. After the detector sent an alert, listening to the signal “convinced us it had the right properties before all of the detailed checks were done.” Adds LIGO scientist Neil Cornish of Montana State University, “Certainly, when I go on my shift in the LIGO control room, I will be listening. It would be really cool to hear a gravitational wave in real time. You would be the first person to hear it. You would get it before the processing software will have found it.”
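
“Trivial to sonify” is nearly literal, since strain channels are sampled at audio-like rates: normalize a segment and write it out as sound. The sketch below substitutes a synthetic chirp for real LIGO data and tooling:

```python
# Toy stand-in for "hook the data stream to headphones": an inspiral-like
# chirp, synthesized here rather than taken from real strain data, written
# out at a LIGO-like sample rate so the band lands in human hearing.
import numpy as np
from scipy.io import wavfile

fs = 16384                                  # samples per second
t = np.arange(0, 1.0, 1 / fs)
f_inst = 50 + 350 * t**3                    # frequency rises toward "merger"
strain = t**2 * np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
strain /= np.abs(strain).max()              # normalize to full scale
wavfile.write("chirp.wav", fs, (0.8 * 32767 * strain).astype(np.int16))
```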

Some seismologists use sound to aid in the recognition and analysis of events. Zhigang Peng of Georgia Tech speeds up seismographic data to bring it into the human auditory range, 20–20 000 Hz. Through listening to data from the Great East Japan Earthquake of March 2011, he says, “we realized there are more aftershocks than reported. They can be tiny and buried.” That discovery, he says, “opens the door to more research.” In related work, by analyzing sounds far from an earthquake epicenter, Peng says he can distinguish between local seismic events and aftershocks from the main temblor. “You can do it with sight,” he says. “But it’s easier with your ear. The filters are built into your head.”
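
The speed-up itself costs almost nothing: keep the samples and relabel the playback rate. A sketch with made-up data (the factor of 441 is merely one convenient choice):

```python
# Time-compression sketch with synthetic data: record at a seismometer
# rate, play back at an audio rate. The factor of 441 is one choice;
# here a 2 Hz wave is heard near 882 Hz and an hour lasts about 8 s.
import numpy as np
from scipy.io import wavfile

seis_rate = 100                              # seismometer samples per second
t = np.arange(0, 3600, 1 / seis_rate)        # one hour of "data"
trace = 0.05 * np.random.randn(t.size)       # background noise
events = [(600, 1.0), (1800, 0.3), (1805, 0.2), (3000, 0.25)]
for t0, amp in events:                       # main shock, then aftershocks
    s = (t > t0) & (t < t0 + 30)
    trace[s] += amp * np.sin(2*np.pi*2.0*(t[s] - t0)) * np.exp(-(t[s] - t0)/5)

trace /= np.abs(trace).max()
wavfile.write("quake.wav", 44100, (0.8 * 32767 * trace).astype(np.int16))
```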

Thomas Hermann, a physicist at Bielefeld University in Germany, specializes in developing methods for data sonification. He has applied sonification to neural networks, traffic patterns, and other things. It can be useful for nontemporal patterns, he notes. “We sonified fluorescence microscopy data, which enabled us to perceive multivariate properties of biological cells very quickly.” Applications range “from analysis to monitoring, from rapid scanning of large data sets to subtle direction of attention when coupled with visual interactive inspection,” says Hermann. “Rapid scanning draws attention to outliers in a distribution. And the high temporal resolution makes [sonification] ideal for real-time and online applications.” But sound is still a “neglected modality, and we simply waste bandwidth if we do not use it.”
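
One generic way to scan nontemporal records by ear, in the spirit of though not reproducing Hermann's methods, is to render each record as a brief sound event so that an outlier stands out as an odd note:

```python
# Sketch of an "auditory scan" of nontemporal records (in the spirit of,
# but not reproducing, Hermann's methods): each fake cell becomes a brief
# note, with size mapped to pitch and brightness to duration, so the one
# outlier is heard as a high, long note among uniform short ones.
import numpy as np
from scipy.io import wavfile

RATE = 44100
rng = np.random.default_rng(0)
cells = rng.normal([10.0, 1.0], [1.0, 0.1], size=(50, 2))  # size, brightness
cells[17] = [25.0, 3.0]                                    # plant an outlier

def event(size, brightness):
    dur = 0.03 + 0.02 * brightness          # brighter cell, longer note
    pitch = 200.0 + 40.0 * size             # bigger cell, higher pitch
    t = np.arange(int(RATE * dur)) / RATE
    return np.sin(2 * np.pi * pitch * t) * np.hanning(t.size)

audio = np.concatenate([event(*c) for c in cells])
wavfile.write("cells.wav", RATE, (0.8 * 32767 * audio).astype(np.int16))
```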

Kelly Snook got interested in using sound for data analysis while working as a planetary scientist at NASA. “I noticed that a lot of discoveries were made by graduate students sitting there staring at screens of numbers as Mars data came down. I thought there must be a better way to represent data for exploration of large amounts of data when you don’t know exactly what you are looking for—comparing things, noticing patterns, making correlations.” Sifting through large data sets—for example, from planets and stars, particle physics events, ocean samples, DNA—requires making sense of a “huge amount of information all at once,” she says. “With Mars data, time of day, place in orbit, height in the atmosphere, aerosol loading of the atmosphere, how cloudy or dusty it is are just a few of the changing variables that come into play,” says Snook, who recently became studio manager and engineer for singer-songwriter Imogen Heap in London. “I think you can saturate your visual space.”

For centuries of science publications, “all we had was print,” notes Snook. “We should take advantage of what has opened up to us through the internet and our ability to compute things quickly.” A step in that direction is the use of Scalable Vector Graphics (SVG) in online publishing. The American Physical Society, for example, is exploring SVG thanks largely to John Gardner, a physicist who retired from Oregon State University after becoming blind. Physicists want SVG “because it makes the original data accessible,” says Gardner. The format also means that data would be available for sonification. Gardner says he became frustrated when he couldn’t see the graphs his students made. He went on to found ViewPlus, a company that makes technologies to aid the visually impaired (see PHYSICS TODAY, February 2012, page 24).
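
As a toy illustration of that point, the snippet below recovers the plotted numbers from a minimal, made-up SVG polyline; real journal figures are more elaborate, but the principle is the same:

```python
# Toy illustration: the data behind an SVG plot can be read back out as
# numbers and handed to any sonifier. The polyline here is made up.
import xml.etree.ElementTree as ET

svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <polyline points="0,10 1,12 2,9 3,15 4,14"/>
</svg>"""

root = ET.fromstring(svg)
line = root.find("svg:polyline", {"svg": "http://www.w3.org/2000/svg"})
pairs = [p.split(",") for p in line.attrib["points"].split()]
xs, ys = zip(*((float(x), float(y)) for x, y in pairs))
print(ys)   # (10.0, 12.0, 9.0, 15.0, 14.0), ready for sonification
```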

“We are just beginning to develop sonic paradigms equivalent to the visual ones that we have developed for hundreds of years. The sonic representations have not yet been exploited even at the basic level,” says Snook. “If you hit on the right manipulations to take advantage of the different senses, you have a broader space for optimization of data analysis. In real life we use both our eyes and ears. It’s not one versus the other.”

Sound does have pitfalls. Among them, says Snook, is that sound can be “annoying. People are forced to impose their own musical taste and ideas subjectively on data.” Sound is also fleeting and hard to describe, and with no printout it can be difficult to show results to others. Listening can require training, and it can take longer than looking, although that depends on the data. The type of data also needs to be taken into account. Says Walker, “It is hard to represent a spatial arrangement with sound. You need to use the appropriate display.”

In a TED talk in March 2011, Columbia University theoretical astrophysicist Janna Levin said, “I’d like to convince you that the universe has a soundtrack and that soundtrack is played on space itself, because space can wobble like a drum. . . . And while we’ve never heard the sounds from space, we really should.” In an interview, Levin pointed out that those sounds, like images from the Hubble Space Telescope, are “communications tools.”

As a communications tool, sound is already proving itself. Ballora, for example, is working with physics Nobel laureate George Smoot and former Grateful Dead drummer Mickey Hart on Rhythms of the Universe, a multimedia presentation that opens later this year in science museums. The idea, says Ballora, “is to get kids tapping their feet to the cosmic microwave background, helioseismology, the spectra of supernovae, solar winds. . . . I have been making sounds out of astronomical data sets and it’s been a blast.” An audience’s eyes “will glaze over if you talk about Fourier transforms,” says Cornish. “Rather than explain what a spectrogram is, if you play sound, they get a sense of the information encoded in the signals. Our brains have evolved to understand. It’s very powerful to explain things to the public, and even to other scientists.”

While working on her PhD in experimental particle physics at University College London, Lily Asquith was inspired by conversations with musician friends to sonify data from the Large Hadron Collider. The result, LHCsound (http://lhcsound.hep.ucl.ac.uk), launched in 2010, represents real and simulated data using synthesizers and compositional software. “Nothing we have done so far could be used in analysis,” says Asquith, now a postdoc at Argonne National Laboratory. “I do think it could bring insight, but that’s a long way off.”

“I do physics because I want to know what the universe is made of,” says Asquith. The LHCsound project “is putting me in touch with people who share this enthusiasm. Anything that shows what is happening at the LHC and is enjoyable and feeds back into art is also good for science. And everyone enjoys hearing and looking at beautiful things.”

The Sonification Handbook

Binary black holes with extreme mass ratios

Janna Levin: The sound the universe makes

TEDxPSU - Mark Ballora - Opening Your Ears to Data

LHCsound Sounds Library

Space Audio

Listen to solar storm activity in new sonification video

Listening to the sun on a loop

Sonification Sandbox

Helping the Blind to See Science and Math

Mark Whittle: Big Bang Acoustics - The CMB Sound Spectrum

Perimeter Institute for Theoretical Physics: Public Lectures - Songs of the Stars: the Real Music of the Spheres

Note: the above lecture, scheduled for 2 May, will be posted here: Perimeter Institute for Theoretical Physics: View Past Public Lectures

Wanda Díaz-Merced works on xSonify, a computer program that converts data into sound.

A small black hole spirals into a massive black hole in this simulation. At left, the frequencies have transiently become harmonics of each other. Such resonances are also obvious in the sonification of the gravitational waves created by the inspiraling; a movie simulation including sonification is available at http://www.tapir.caltech.edu/~sdrasco/animations.