At winter solstice, crowds usually gather at Stonehenge to watch the Sun set between the uprights of the tallest trilithon. That practice has taken place since our ancient ancestors erected the sarsen stones about 2500 BC. But there is more to Stonehenge than observing its alignments to the sunrise and sunset at solstice. When people gather for rituals, they speak and make music—sounds that are amplified and altered by reflections from the stones. To fully understand Stonehenge, visitors need to look beyond its appearance and the archaeological artifacts dug up at the site, and to quantify how the monument’s acoustics altered those sounds and how the stones’ prehistoric geometry might have influenced what went on there.
Sunrise and sunset at solstice can still be experienced at the site. And although visitors can get a sense of scale and be awed by the staggering feat of construction, listening to the current structure gives a misleading impression of what our ancestors heard in the late Neolithic period and early Bronze Age. The current thinking is that around 2200 BC the monument had 157 stones. That’s roughly double the number of stones and fragments left at the modern ruin, and many of those are now displaced or have fallen over.
I got interested in ancient sites such as Stonehenge when I wrote about sounds of the past for my 2014 book Sonic Wonderland. While researching the topic, I realized that no one had investigated prehistoric stone circles by using acoustic scale models. That awareness prompted me to construct such a model on a 1:12 scale, as seen in the photo. My collaborators—acoustician Bruno Fazenda (University of Salford) and archaeologist Susan Greaney (the nonprofit English Heritage)—and I wanted to address two research questions: How is sound altered by the stones? And what does that reveal about where rituals might have taken place in the structure?
Making the model
Constructing a scale model is a major challenge, but the method simulates diffraction more accurately than current computer models can. For large spaces, computer-modeling techniques are commonly based on ray tracing, which is physically accurate only at high frequencies, where the wavelength is smaller than the dimensions of the reflecting surfaces. The frequency range relevant to speech and music spans 100 Hz (3.4 m wavelength) to 5000 Hz (7 cm wavelength). With the narrowest stone 40 cm wide and the tallest 6.3 m high, geometric computer models are problematic for much of that bandwidth. It is possible to solve the wave equation to model diffraction and get more accurate results than ray tracing provides, but the calculations would take far too long.
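As a rough illustration of that limit, here is a minimal Python sketch of my own, not part of the study, that compares acoustic wavelengths across the speech-and-music band with the narrowest stone width quoted above; the 343 m/s speed of sound is an assumed round value.

```python
# A rough check (not from the study): where does the ray-tracing assumption hold?
# Geometric acoustics is trustworthy roughly when the wavelength is smaller than
# the reflecting surface, so compare wavelengths with the narrowest stone width.

SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C (assumed value)
NARROWEST_STONE = 0.40   # m, the narrowest stone width quoted in the text

def wavelength(frequency_hz: float) -> float:
    """Acoustic wavelength in meters at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

for f in (100, 250, 500, 1000, 2000, 5000):
    lam = wavelength(f)
    verdict = "ray tracing plausible" if lam < NARROWEST_STONE else "diffraction matters"
    print(f"{f:5d} Hz -> wavelength {lam:5.2f} m : {verdict}")
```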
Acoustic scale modeling has been used in architectural acoustics since the 1930s. And even today, acoustic consultants make physical models when they are designing the most prestigious auditoriums. The technique is appealing because it can capture wave effects, such as interference and complex reflections from the stones. But for the approach to work, it is necessary to use a smaller wavelength. In our 1:12 scale model of Stonehenge, we used sound waves at 12 times their normal frequency because that preserves the relative size of the sound wavelength and stone dimensions.
People often ask about the materials in our model. Why aren’t the stones on grass, for example? We needed to match the materials’ reflection properties and take into account that measurements take place at ultrasonic frequencies. Were the model on grass, ground absorption would have been far too high. (The absorption coefficient of ground at 12 000 Hz in the model must match that of the real site at 1000 Hz.) We found that medium-density fiberboard provides a close proxy at 12 000 Hz.
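To make the scaling rule concrete, the following sketch, again mine and with placeholder absorption coefficients rather than measured values, maps the full-scale frequency band to its 1:12 model equivalent and checks candidate ground materials against the requirement that absorption at the scaled-up frequency match the real surface’s absorption at the original frequency.

```python
# A minimal sketch of the 1:12 scale-model bookkeeping. The absorption
# coefficients below are illustrative placeholders, not measured values.

SCALE = 12

# The full-scale band relevant to speech and music maps to 12x higher frequencies.
full_band_hz = (100, 5000)
model_band_hz = tuple(SCALE * f for f in full_band_hz)
print(f"Full scale {full_band_hz} Hz -> model {model_band_hz} Hz")   # (1200, 60000)

# Material rule: the model surface measured at SCALE*f should absorb about as
# much sound as the real surface does at f.
TARGET_ALPHA = 0.35        # hypothetical grassy ground at 1000 Hz, full scale
CANDIDATES = {
    "grass in the model (12 kHz)":        0.80,  # hypothetical: far too absorbent
    "medium-density fiberboard (12 kHz)": 0.33,  # hypothetical: a close proxy
}

def acceptable(alpha: float, target: float = TARGET_ALPHA, tol: float = 0.1) -> bool:
    """A candidate surface passes if its absorption is close to the target."""
    return abs(alpha - target) <= tol

for name, alpha in CANDIDATES.items():
    print(f"{name:38s} alpha={alpha:.2f}  {'OK' if acceptable(alpha) else 'mismatch'}")
```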
The same reasoning explains why the stones need not be made of rock. Some of the model stones were three-dimensional printed plastic hollows, backfilled with concrete to make them heavy enough to reflect sound efficiently. Others were molded from a plaster–polymer mix. All were sealed with an automotive cellulose spray paint to prevent sounds from penetrating surface pores. The mix of fabrication methods was more than mere convenience: 3D printing all 157 stones would have taken an estimated nine months.
We had to accurately reproduce the model’s key features—the size, shape, and location of the stones—because sound primarily escapes the henge through the gaps between the outer stones and upward into the sky. We drew on the latest archaeological evidence for the stone arrangements. Historic England, a public organization that helps protect the country’s historic environment, provided a computer model of the reconstructed geometry of Stonehenge as it appeared around 2200 BC, when its usage likely peaked. Those sources were the starting points for our physical model.
Flutes, horns, and drums
Getting recording equipment to work at broadband frequencies in the ultrasonic region is no easy task. In the absence of a compact omnidirectional source, we arranged four tweeters inside the model in a square, each pointed outward. The speakers emitted frequencies of up to 70 000 Hz, which we could record. To characterize the space, we used a single microphone and incrementally moved it to 24 positions inside the henge and just outside its border. At each position we recorded the short, sharp impulses the speakers produced elsewhere in the model.
Those recordings captured the sound directly from source to microphone, followed by the thousands of reflections that came from the stones. From the impulse responses, we calculated a series of parameters that relate to human perception. The first was reverberation time: how long it takes the sound to decay by 60 dB after the source is switched off. In our scale model of Stonehenge, the average midfrequency reverberation time was 0.64 ± 0.03 seconds. A large movie theater exhibits similar decay times.
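For readers curious how such a number is extracted, here is a minimal sketch, not the team’s analysis code, of the standard Schroeder backward-integration method applied to a single measured impulse response. A real analysis would typically filter into octave bands and compensate for the extra air absorption at ultrasonic frequencies; the factor of 12 converts model time to its full-scale equivalent.

```python
import numpy as np

def reverberation_time(impulse_response: np.ndarray, fs: float,
                       scale: float = 12.0) -> float:
    """Estimate a reverberation time from one impulse response.

    Uses Schroeder backward integration and a straight-line fit between
    -5 dB and -35 dB of the decay curve, extrapolated to a 60 dB decay.
    Times measured in a 1:12 model run 12 times faster, so the result is
    multiplied by `scale` to give the full-scale equivalent.
    """
    energy = np.asarray(impulse_response, dtype=float) ** 2
    # Schroeder curve: energy still to arrive after each instant, in dB
    decay = np.cumsum(energy[::-1])[::-1]
    decay_db = 10.0 * np.log10(np.maximum(decay, 1e-12 * decay[0]) / decay[0])

    t = np.arange(len(decay_db)) / fs
    region = (decay_db <= -5.0) & (decay_db >= -35.0)
    slope, _ = np.polyfit(t[region], decay_db[region], 1)   # dB per second

    rt60_model = -60.0 / slope       # seconds of model time
    return rt60_model * scale        # full-scale equivalent
```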
For a structure with no roof and with gaps between the stones through which sound can escape, 0.64 seconds is a remarkably long reverberation time. Reverberation occurs because horizontally propagating sound reflects repeatedly between the many stones. And although the time is significantly shorter than would be recommended for listening to today’s music, even a small amount of reverberation improves the perception of music across genres. Indeed, sound engineers describe reverberation as “aural ketchup” because it improves anything to which it’s added.
It is impossible to know what sounds our ancestors were making at Stonehenge, but musical instruments certainly existed when it was built. Archaeologists have evidence of ancient bone flutes, wooden pipes, animal horns, and drums from Neolithic Britain and Europe. And singing, almost certainly, would have been pervasive at the time—although that leaves no archaeological trace.
Another key parameter we analyzed was the amplification provided by the stones’ reflections. Across all the measurement positions, they amplified the sounds of speech by 4.3 dB on average. The smallest difference in level we can hear is about 1 dB, whereas a 10 dB increase is heard as a doubling in loudness. Thus the amplification in Stonehenge would have made communication noticeably easier, especially if a speaker was facing away from the audience.
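The decibel arithmetic here is simple enough to sketch. The functions below are my illustration, with hypothetical signal names rather than the study’s data, of how a level gain is computed from two recordings and how a given gain translates to perceived loudness under the common 10-dB-per-doubling rule of thumb.

```python
import numpy as np

def rms(p: np.ndarray) -> float:
    """Root-mean-square value of a recorded pressure signal."""
    return float(np.sqrt(np.mean(np.asarray(p, dtype=float) ** 2)))

def gain_db(p_with_stones: np.ndarray, p_reference: np.ndarray) -> float:
    """Level difference in decibels between a signal recorded with reflections
    present and a reference without them (hypothetical inputs, not study data)."""
    return 20.0 * np.log10(rms(p_with_stones) / rms(p_reference))

def perceived_loudness_ratio(delta_db: float) -> float:
    """Rule of thumb: a 10 dB increase is heard as roughly a doubling of loudness."""
    return 2.0 ** (delta_db / 10.0)

print(round(perceived_loudness_ratio(4.3), 2))   # about 1.35x: clearly audible,
                                                 # though well short of a doubling
```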
What’s more, the acoustic enhancement of amplification and reverberation occurred only when speakers, music makers, and listeners were all inside the stone circle. Any sounds they created were best heard by others inside the structure rather than by a larger audience outside, whose view of the interior would have been obscured. A large number of people were required to transport the stones and construct the monument, but apparently only a small number of people—possibly fewer than 50 within the central horseshoe of bluestones—were able or allowed to fully participate in and witness rituals in the stone circle.
I thank Bruno Fazenda and Susan Greaney for their collaboration on the project.
Trevor Cox (t.j.cox@salford.ac.uk) is a professor of acoustic engineering at the University of Salford in the UK.