
With the popularity of music-synthesis programs such as Audacity, GarageBand, and Logic, it’s never been easier for people to compose their own music. As a result, many students are now coming into college with a fundamental understanding of acoustic signal processing. Yet few of them realize what a powerful tool they are using, particularly for studying physics. One of my goals as an instructor of both physics and music synthesis has been to show students that the methods they use to produce songs can also help them understand atoms and a variety of natural phenomena.
Through a series of learning exercises I developed called Atom Music, college students can explore atomic structure through music synthesis. Using relatively simple software, they create their own “atom songs” by correlating the colors that appear in atomic spectra with audible tones. By looking at the connections between atomic spectra and musical structure, students can identify notes from a specific atom’s spectral scale and use those notes to create songs unique to that element. I’ve found that even nonscience majors respond well to the incorporation of music into physics and become immersed in various projects, some of which have ended up as professional recordings.
Spectral symphony
Atoms of a particular element vibrate at specific frequencies unique to that element. The color we see is the combination of all the emitted visible wavelengths entering our eyes at the same time. The color can then be expressed as a Fourier transform, in which the brightness of each spectral line indicates the amplitude of the corresponding component wavelength. We can think of the lines as a sort of recipe for the color: The wavelengths are the ingredients, and the amplitudes are the amounts.
Just as any atom can be identified by its spectrum, any sound can be identified by its Fourier transform. The variations between transforms are what determine our perception that a note was produced by, say, a French horn or a bassoon. Although the harmonics produced are similar, the relative amplitudes of each harmonic differ from instrument to instrument.
There is a close correlation between the way atoms produce their signature colors and the way complex audible tones are formed. The octave, for example, arises from combining two audible sinusoids with a frequency ratio of 2:1. That was first observed by Pythagoras, who organized a musical-scale system based on those harmonics (see the article by George Gibson and Ian Johnston, Physics Today, January 2002, page 42). The same harmonics appear in the classic particle-in-a-box description of hydrogen.
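To see the parallel, compare a string of length L fixed at both ends with a particle confined to a one-dimensional box of the same width; both support the same family of standing waves. This is a textbook relation rather than anything specific to the course:

    λ_n = 2L/n,    n = 1, 2, 3, …

For the string, the corresponding frequencies form the harmonic series f_n = nv/(2L), where v is the wave speed, so the second harmonic sits a 2:1 ratio, exactly one octave, above the fundamental.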
Because audible tones are built up from the same mathematical relations that describe atomic spectra, we can create a musical scale for atoms. Then we can exploit a music-synthesis program, such as GarageBand, to generate sounds that correspond to particular component frequencies.
To start, we need to identify the range of frequencies of the emitted photons. For example, we can choose to limit the range to that of visible photons. The corresponding frequency scale becomes 4.3 × 10¹⁴ Hz (red) to 7.5 × 10¹⁴ Hz (blue).
Standard written music is actually a semilogarithmic graph of frequency versus time. The standard musical staff is based on the most commonly used notes, up to a frequency of about 700 Hz. The scale below superimposes the visible spectrum and the most commonly used audible tones. The audible scale is extended slightly below and above the standard musical staff, from 20 Hz (the minimum threshold of frequencies audible to humans) to 1046.5 Hz (the first C note above standard written music).
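One way to make that superposition concrete is to place each visible frequency at the same fractional position, on a logarithmic axis, between the audible endpoints. The short Python sketch below does exactly that; the endpoints come from the scales quoted above, but the logarithmic mapping itself and the hydrogen-alpha example are illustrative choices rather than the precise recipe used in class.

    import numpy as np

    # Endpoints of the two scales quoted in the text; the log mapping itself
    # is one reasonable choice, not necessarily the exact one used in class.
    VIS_LO, VIS_HI = 4.3e14, 7.5e14      # visible band, red to blue, in Hz
    AUD_LO, AUD_HI = 20.0, 1046.5        # audible scale used above, in Hz

    def visible_to_audible(f_vis):
        """Map a visible emission frequency onto the audible band."""
        # Fractional position of f_vis on a log axis across the visible band
        t = np.log(f_vis / VIS_LO) / np.log(VIS_HI / VIS_LO)
        # The same fractional position on a log axis across the audible band
        return AUD_LO * (AUD_HI / AUD_LO) ** t

    # Example: the red hydrogen-alpha line at 656 nm
    f_halpha = 3.0e8 / 656e-9            # c / wavelength, about 4.6e14 Hz
    print(round(visible_to_audible(f_halpha), 1), "Hz")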

Each of the spectral lines represents a single sinusoid that, when combined with others, can form complex tones. Just as with a piano, the keys we hit and the tempo at which we hit them can produce myriad songs. One does not need to know a B4 from an A6 to create atom songs. The programs require only an input frequency to produce a tone. The traditional rules of music are gone. The brighter lines will be the dominant notes of the scale, while the lesser lines add complexity. It is easy to adjust the amplitude of each input frequency in any of the available programs.
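In code, that additive step amounts to summing sinusoids, one per spectral line, with amplitudes set by the relative brightnesses. Here is a minimal Python sketch; the frequencies and brightnesses are placeholders standing in for a real atom scale, and the result is written out as an ordinary WAV file rather than built inside a synthesis program.

    import wave
    import numpy as np

    RATE = 44100  # samples per second

    def atom_tone(lines, duration=2.0):
        """Sum one sinusoid per spectral line, weighted by its brightness."""
        t = np.arange(int(RATE * duration)) / RATE
        tone = sum(amp * np.sin(2 * np.pi * freq * t) for freq, amp in lines)
        return tone / np.max(np.abs(tone))          # normalize to avoid clipping

    # Hypothetical atom scale: (audible frequency in Hz, relative brightness).
    # Real values would come from mapping an element's spectral lines as above.
    lines = [(31.0, 1.0), (220.0, 0.6), (330.0, 0.3), (392.0, 0.2)]

    samples = (32767 * atom_tone(lines)).astype(np.int16)
    with wave.open("atom_tone.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)                            # 16-bit audio
        f.setframerate(RATE)
        f.writeframes(samples.tobytes())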
Bringing atom songs to the classroom
I began using music synthesis in teaching as a way to demonstrate to college students the universality of physics. My introductory musical acoustics class consists primarily of nonscience majors who are fulfilling their science requirement and are also interested in music. The lab activities are set up such that students build up the knowledge and skills necessary to create their own atom scales and, toward the end of the course, songs.

In one lab activity, students record specific waveforms produced by a speaker connected to a function generator. They analyze the recordings first by ear and then visually, from the waveforms and their Fourier transforms. We then look at various gas discharge tubes and observe the spectral lines produced by each element. Finally, we compare those spectral lines with what we see and measure in the recordings.
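For readers who want to try the visual analysis outside the lab, the following Python sketch stands in for the exercise: it synthesizes a two-component waveform in place of a real recording (the 220 Hz and 440 Hz components are arbitrary examples) and then reads the strongest peaks off its Fourier transform.

    import numpy as np

    RATE = 44100

    # A stand-in for the recording: two sinusoids, as a function generator
    # driving a speaker might produce (220 Hz and 440 Hz are arbitrary examples).
    t = np.arange(RATE) / RATE                      # one second of samples
    signal = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

    # Fourier-transform the "recording" and read off the strongest components.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / RATE)
    peaks = freqs[np.argsort(spectrum)[-2:]]        # the two largest peaks
    print(sorted(peaks))                            # approximately [220.0, 440.0]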
For another lab, students map out musical scales on a sonometer, an acoustical resonator that uses a piano string to generate sounds. A movable bridge lets students produce any frequency they want: the pitch is set by the distance from the end of the string to the bridge. To familiarize themselves with the instrument, students first explore a standard scale so that they can identify where to place the bridge to produce the desired frequencies. With that knowledge, they can map out atomic scales by marking the sonometer with both frets and a meterstick.
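Because a string at fixed tension sounds a frequency inversely proportional to its vibrating length, the bridge positions for any target scale follow from a one-line calculation. The sketch below assumes an illustrative open-string length and pitch, not the actual settings of our sonometer.

    # For a string at fixed tension, frequency varies inversely with the
    # vibrating length, so each target pitch fixes where the bridge must go.
    # The open-string length and pitch below are illustrative values only.
    OPEN_LENGTH_M = 0.60      # full vibrating length of the string, in meters
    OPEN_FREQ_HZ = 110.0      # pitch of the open string, in Hz

    def bridge_length(target_hz):
        """Vibrating length, in meters, that sounds the target frequency."""
        return OPEN_LENGTH_M * OPEN_FREQ_HZ / target_hz

    # Mark the fret positions for a hypothetical three-note atom scale
    for f in (220.0, 330.0, 392.0):
        print(f, "Hz ->", round(bridge_length(f), 3), "m")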
Other popular activities include looking for audible similarities and differences between groups of elements on the periodic table and exploring how molecules are built up by combining individual atom songs to create a molecular “band.”
Student and teacher compositions
My upper-level course for interdisciplinary majors in music synthesis requires students to create minute-long compositions that are produced purely from mathematical and physical considerations. The final assignment requires that they use only granular synthesis. At its most basic, granular synthesis creates sound by stringing together tiny bits, or grains, of sound—essentially, quanta of sound. Each grain exists on a scale of milliseconds, so brief that the human ear barely perceives it.
Though there are different ways to create the sound grains, this particular assignment requires students to create an aural visualization of atoms. Each part of the atom is created on a separate audio track so that the atomic properties can be controlled independently. The students start with a core sound that represents the nucleus and then surround it with a “cloud” of low-amplitude white noise. Within the cloud they embed microsounds, which represent the photon emissions caused by transitions between energy levels. The sound emissions are short-lived, often giving the overall sound a crackling tone that can be likened to the tiny flashes of photons.
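A bare-bones sketch of the idea, not any student’s actual patch, looks something like this in Python: a quiet bed of white noise plays the role of the electron cloud, and millisecond-long sine grains scattered through it at random times stand in for the photon emissions.

    import numpy as np

    RATE = 44100

    def grain(freq, dur=0.01):
        """A few milliseconds of sine wave under a smooth (Hann) envelope."""
        t = np.arange(int(RATE * dur)) / RATE
        return np.hanning(t.size) * np.sin(2 * np.pi * freq * t)

    rng = np.random.default_rng(0)
    track = 0.02 * rng.standard_normal(3 * RATE)    # 3 s of quiet "cloud" noise

    # Scatter short grains, the "photon emissions," through the cloud at random.
    for _ in range(200):
        g = grain(rng.uniform(200.0, 1000.0))       # hypothetical emission pitch
        start = rng.integers(0, track.size - g.size)
        track[start:start + g.size] += 0.3 * g

    track /= np.max(np.abs(track))                  # normalize before playback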
Several student projects have resulted in professional pieces. For example, “It Saves You” began as a student project and blossomed into a song that appeared on a commercial album.
“Xenakis,” also produced using only granular synthesis methods, was created as part of a final thesis in music technology and signal processing. The project resulted in five recordings, each representing a different synthesis technique. The compilation was choreographed and performed publicly on several occasions.
Finally, I have used atom music as part of a project called Adventures in Atomville. My collaborators and I wanted to inspire younger children to learn about the atomic world the way they learned about wizardry and magic from Harry Potter: by creating an imaginary world through storytelling. As part of that world, atom characters create their own music and have traits specific to their atomic identity. Their coloring mimics their spectra, as does their choice of music and the instruments they play.

The atoms can play any type of music they want, but their placement on the periodic table determines their musical preferences. Typically, metals prefer rock and roll. Their partiality varies from soft rock to hard rock to heavy metal, depending on the type of metal. Heavy elements tend to like electronic dance because of all the heavy bass. Reactive nonmetals prefer jazz: They vibrate to their own beats, and though they do interact and bond with others, they also like to shine on their own. Noble gases are loners and only play classical.
“Salt Water,” which has been performed live during several of my public talks, is a jazzy tune that encompasses the contributions of various members of an atom band. On drums we have two Hydrogens, because one is never enough. Oxygen is playing flute. Her friend Sodium wants to play piano, but she can play only two notes, and they are only a couple of tones apart. So Sodium invites her friend Chlorine, who plays the melody line while Sodium keeps the beat. It turns out that all Borons play bass. And everybody knows, bass players will join in with almost any band they find.
Jill Linz is a senior physics instructor at Skidmore College in Saratoga Springs, New York.