The world is full of sounds that carry information. A rushing stream, like the one shown in figure 1, can improve your mood and lower stress.1 A clap of thunder alerts you to take cover. Unique sounds can remind you of home. Understanding how sounds influence behaviors and interactions with the environment is central to the field of acoustic ecology. The field originated in the 1970s, when researchers began exploring people's awareness of sound in response to listening environments that were deteriorating from noise pollution.2 It now also has important applications in urban planning, musical composition, landscape architecture, animal behavior, and wildlife conservation.
Almost all animals possess an auditory sensory system that can detect and respond to sounds in nature. Such information provides them with an assessment of the environment and alerts them to the presence of threats, potential mates, and food.3 In some animals, the auditory sensory system has spectacular specialization. For example, a fox can hear the footsteps of a mouse—a potential meal—under three feet of snow (see figure 2), and a deer can detect the nearby rustle of leaf litter and run from a possible predator.
But sensitive hearing in a noisy environment can have weighty consequences. In modern times, human activities have introduced various novel acoustic stimuli. Temporary and permanent hearing damage can result from prolonged exposure to especially loud sounds. Even without damage, chronic exposure to low levels of noise pollution can degrade hearing abilities for people and wildlife. Sounds that otherwise would be heard are not.
Acoustic ecology is part of sensory ecology—a field that examines how animals, including humans, use information obtained from the environment in different aspects of their lives. Over the past few decades, acousticians, psychologists, neuroscientists, ecologists, and conservation biologists have expanded the understanding of how natural sounds mediate behavior, modify ecological interactions, and drive evolutionary patterns. Much of the acoustic-ecology literature continues to focus on human experiences: This article focuses more broadly on how animals interact with sound in their environment and the vital role that sound plays in ecosystems.
Like many fields that involve a diversity of disciplines and applications, acoustic ecology has terminology that can take on ambiguous meanings across specialties. Although the terms and definitions are valuable in the contexts in which they were developed, collectively they present challenges. For example, sound and noise are often used interchangeably to describe an acoustic source. A common definition of noise is unwanted sound that interferes with a signal of interest. Noise, however, is not a purely subjective designation: Any sound that serves no function is noise. Most sounds produced by human transportation and other machinery are unintended and serve no function; they are therefore noise regardless of the listener's attitude. Unintended sounds do exist in nature, like the footfalls of an animal, but such sounds provide vital cues for some animals: They are sound to the receiver and noise to the producer.
Numerous terms have been used to categorize different sources of sound. Importantly, the unique characteristics of each acoustic source can influence how it is perceived and what responses it receives. I prefer common ecological terminology to distinguish the types of acoustic sources. Abiotic sounds refer to those generated from the physical environment; biotic sounds, to ones made either intentionally or unintentionally from living organisms; and anthropogenic noise, to the unintended and functionless sounds from humans. All the sounds of a given place and time, and the factors that influence their transmission, make up an acoustic environment. How animals filter or perceive the sounds creates a soundscape.
The four broad categories of acoustic-ecology research and their important links are illustrated in figure 3. The first, sensory systems, investigates how acoustic cues are obtained by anatomical structures, processed by neurological pathways, and ultimately perceived by the listener. The second, acoustic environments, quantifies the condition and characteristics of sounds in an environment and the acoustic cues available to a listener. Understanding why and how sounds are advantageous to an animal falls into the third category, functions of sound. The fourth, the effects of noise, aims to understand how noise from human activities affects individual- and population-level responses and the inherent consequences to the ecosystem.
Specialized sensory systems
The sensory structures and neurological pathways associated with hearing are vast. The examples that follow serve as a simple primer to the rich literature that explores diverse and fascinating sensory systems. Many invertebrates and all classes of vertebrates can detect and process acoustic stimuli in their environments. The ability to hear typically refers to the detection of pressure waves, or the oscillating compressions and rarefactions of the medium, usually air or water. For example, humans, other vertebrates, and many invertebrates—including such conspicuously acoustic insects as crickets, katydids, grasshoppers, and cicadas—detect those waves with tympanal ears, thin membranes coupled to mechanosensory cells that transduce the membrane's vibration into electrical impulses. Hearing systems in animals perform auditory tasks such as frequency analysis, sound-source localization, and auditory-scene analysis. Those capabilities, some acoustic ecologists argue, evolved early in vertebrates and have been modified by selection in different species.4
Given the wide range of frequency sensitivities across taxa, significant morphological variation exists in the mechanosensory machinery of vertebrates.3 For example, geckos have the most sensitive and frequency-selective hearing of all lizards.5 They are the only nocturnal lizards that produce sounds for communication. Unlike other lizards, geckos have a basilar papilla—the membrane of mechanosensory cells—with unique modifications that maximize the number of oscillating frequencies, or potential channels of information.6
The barn owl has one of the best-studied sound-localization systems. To home in on the exact location of a sound—think of a scurrying mouse in forest leaf litter at night—a barn owl determines the horizontal location from the difference in the sound's arrival time at each ear and the vertical location from the difference in sound levels between its asymmetrically placed ears.3 The horizontal and vertical locations are invaluable information to a nocturnal, aerial predator. For most other birds and mammals, the elevation of a sound source is nearly impossible to determine because the differences in arrival time and intensity are confounded. The barn owl's multifaceted auditory capabilities are possible because of the morphology of its ears and because of neurological features, including auditory nerve encoding, that it shares with other avian species.
Many invertebrates can detect the particle-velocity component of a sound wave. They use flagellar mechanosensory structures, such as hairs or antennae, that project into the oscillatory flow. Such an acoustic sensory capability is lacking in humans. Because the oscillatory motion attenuates close to the source, some species of insects, including certain mosquitoes (Toxorhynchites brevipalpis), actively use the mechanosensory cells to increase detection of more distant sounds.6
Complex acoustic environments
The diversity of acoustic sensory systems is not surprising given the complexity and variety of acoustic stimuli in the environment. Those sounds shape how animals interact with their surroundings. Behavioral modifications driven by the acoustic environment can operate from short time scales, such as the few seconds it takes for a predator to capture its prey, to evolutionary time scales, in which deviations in behavior lead to speciation. Scientists characterize the acoustic environment, and the related soundscape, by capturing various acoustic features and the ambient conditions to understand how they may be interpreted by a receiver. Researchers addressing human hearing are developing standard analyses based on the psychoacoustic parameters of sound—loudness, roughness, sharpness, and tonality—to advance the field toward measuring and assessing the acoustic environment in relation to human perception.
Despite that progress, researchers lack a universal method or metric for quantifying acoustic environments.7 Accurately characterizing them is vital for interpreting acoustic cues available to people and animals and deciding acceptable sound levels for conservation efforts, urban planning, and product safety. The most common way to characterize sound is to measure the variation in pressure in a defined time period and frequency range and convert the values to the decibel scale. How the pressure in the given time period and frequency range is described depends on the specific metrics used, and there are many. That diversity of metrics has led to some confusion when comparing multiple studies. Furthermore, although the decibel, the most common unit for sound level, is useful for quantities that span several orders of magnitude, a value is not always directly comparable to another because it is a ratio of a measured pressure quantity to a reference. Because sound levels may include both the signal and the ambient conditions, scientists have more difficulty interpreting the meaning of sound-level measurements to a listener.
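To make the reference-pressure caveat concrete, here is a minimal sketch (an illustration, not code from the studies cited above) of the decibel conversion. The constants are the standard reference pressures of 20 µPa in air and 1 µPa in water; the same physical pressure maps to very different decibel values depending on which reference is used, which is one reason values are not always directly comparable across studies or media.

```python
import math

def spl_db(p_rms, p_ref):
    """Sound pressure level in decibels re p_ref, for an RMS pressure
    p_rms in the same units as p_ref."""
    return 20 * math.log10(p_rms / p_ref)

P_REF_AIR = 20e-6    # 20 micropascals, the standard reference in air
P_REF_WATER = 1e-6   # 1 micropascal, the standard reference underwater

p = 0.1  # an example RMS pressure of 0.1 Pa
print(spl_db(p, P_REF_AIR))    # ~74 dB re 20 uPa
print(spl_db(p, P_REF_WATER))  # 100 dB re 1 uPa
```

The 26 dB gap between the two printed values reflects nothing physical; it is purely the ratio of the two reference pressures.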
Understanding sensory systems can inform how best to characterize the acoustic environment. Most animals, including humans, have varying sensitivity to sounds at different frequencies. To quantify those differences and discriminate what sound is heard, researchers apply a frequency-weighting function to a measured sound. If the hearing thresholds for a species are known, the function adjusts sound levels based on specific hearing sensitivities. To assess the effects of anthropogenic noise, scientists have worked extensively on marine-mammal hearing in the field and in the lab to develop frequency-weighting functions and threshold levels.8
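For human listeners, the best-known frequency-weighting function is the A-weighting curve standardized in IEC 61672. The sketch below implements the standard closed-form approximation of that curve; it illustrates the general idea of adjusting measured levels for hearing sensitivity, not the species-specific weightings developed for marine mammals.

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting correction, in dB, at frequency f (Hz),
    following the IEC 61672 analytic form of the curve."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # The +2.00 dB offset normalizes the curve to ~0 dB at 1 kHz,
    # near the region of greatest human sensitivity.
    return 20 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000), 2))  # ~0.0 dB at 1 kHz
print(round(a_weighting_db(100), 1))   # ~ -19 dB: low frequencies are discounted
```

A low-frequency rumble thus contributes far less to an A-weighted level than its raw sound pressure would suggest, mirroring how faintly humans perceive it.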
To determine when an acoustic sensory environment becomes degraded, researchers quantify how far it departs from optimal acoustic conditions. Under natural, ambient acoustic conditions, an individual is at the center of a listening area—a circular region whose radius is the distance at which the individual can first detect a sound. Researchers can, therefore, use deviations from natural sound levels to estimate reductions in a listening area and quantify the loss of hearing opportunities for humans and wildlife. According to one study,9 which assumed that the detection range is limited primarily by spreading loss, the listening area is reduced by 50% for each 3 dB increase in noise above ambient conditions. Although the results provide a useful estimate of how the sensory environment is changing for organisms, signal detection by animals in complex acoustic environments is still an active area of research, and investigations continue into how noise degrades auditory capabilities.
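The 50%-per-3 dB figure follows from simple geometry: under spherical spreading, level falls off as 20 log10 of distance, so a noise increase of N decibels shrinks the detection radius by a factor of 10^(-N/20) and the listening area by that factor squared. A short sketch of the arithmetic (my illustration of the study's stated assumption, not its code):

```python
def listening_area_fraction(delta_noise_db):
    """Fraction of the original listening area that remains after the
    noise floor rises by delta_noise_db above ambient, assuming detection
    is limited only by spherical spreading loss (20*log10 of distance)."""
    # The detection radius shrinks until the extra spreading loss
    # offsets the added noise: 20*log10(r0/r1) = delta_noise_db.
    radius_fraction = 10 ** (-delta_noise_db / 20)
    return radius_fraction ** 2  # area scales with radius squared

print(listening_area_fraction(3))   # ~0.50: the 50% loss cited in the text
print(listening_area_fraction(10))  # ~0.10: a 10 dB rise leaves 10% of the area
```

The second line shows how quickly the loss compounds: a 10 dB increase, which people perceive as roughly a doubling of loudness, erases 90% of the listening area under this assumption.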
Biotic and abiotic sounds
Biotic sounds are typically thought of as vocalizations specifically produced to attract mates, find food, alert others to nearby predators, and defend territory. One example is the calls male amphibians emit to attract mates. The intended listener is nearby, and a chorus of calls from multiple individuals provides a cue about habitat quality. In fact, in some amphibian species, an individual may prefer to move to a new mating habitat if others are already present. Those powerful choruses of biotic sound offer a potential method for amphibian-habitat restoration.
Like the amphibian chorus, biotic sounds may unintentionally provide cues for other species. One well-studied example is the coral reef, in which the biotic sounds emitted from fish, urchins, shrimps, and other animals are important settlement cues for planktonic larvae of fish and invertebrates (see figure 4a). The sounds indicate that the area is a suitable place to live.10 Larval fishes likely perceive acoustic cues through particle motion.11 To fully understand the function of a reef’s complex biotic sounds, researchers will need to improve measurements of such particle motion.
Abiotic sources of sound elevate background levels and create spatial and temporal variations that make it difficult for listeners to perceive acoustic signals. In terrestrial environments, wind, bodies of water, and rain are the dominant abiotic sounds; in marine environments, wind, its associated surface agitation, and rain dominate. The evolution of hearing sensitivities and communication-signal characteristics is likely driven by slowly varying abiotic sources, such as flowing water, rather than by more episodic sources, like thunderclaps. For example, fish species with enhanced auditory sensitivity across broad frequency ranges are adapted to quieter environments, such as lakes; fish species with less-specialized hearing are found more commonly in fast-flowing aquatic habitats.
Animals can detect and perceive sound signals better if the signals are produced in a frequency range with less abiotic noise, a band that acoustic ecologists call the silent window.12 When a quiet frequency band isn't available, animals can briefly modify an acoustic signal. In windy conditions, king penguins increase the number of calls they emit and the number of syllables per call, presumably to increase the probability of detection. Another signal modification—known as the Lombard effect, first observed in humans and subsequently documented in various bird and mammal species—increases a call's amplitude in noisier conditions.13
In response to increased abiotic sounds, animals sometimes switch to or add another sensory modality to communicate. In conditions with high wind and waves, humpback whales breach the water's surface and slap their pectoral fins rather than vocalize. The switch to primarily surface-generated visual and acoustic signals potentially improves the perception of important social cues in those noisier conditions. Over evolutionary time scales, frog populations inhabiting areas near fast-flowing streams and waterfalls have come to perform more foot flagging—a visual mating behavior—than populations in naturally quieter habitats, which mainly use vocalizations to attract mates, as illustrated in figure 4b. Those short- and long-term communication modifications provide important insights into how more recent human-induced noises likely affect different animals.
Noise alters animal behavior
The increasing human population has dramatically raised ambient sound levels.14 Noise from human activity is a recent evolutionary pressure that is becoming widespread, and it continually changes as people develop technology, for example. Wildlife responses to noise are well documented across various species, and our understanding of the consequences continues to grow.15
One way to isolate noise from the visual, chemical, and structural changes induced by human activity is to conduct playback experiments, in which the noise is turned on and off to determine whether it alone alters animal behaviors or ecological interactions. In one study, traffic noise hindered the hunting success of a bat species that relies on the incidental sounds from large, ground-running arthropods, as depicted in figure 4c. In response, an individual bat's health may suffer, or the bat may abandon its prime habitat; less-successful hunting could reduce the population's survival rate or force it to find a new habitat.
Furthermore, such individual- and population-level changes cascade through other biological communities. Birds pollinate plants and disperse seeds, but when noise disrupts their behavior, the plant community can shift across a landscape, and the consequences last long after the source of noise goes away.16
Acoustic ecology in conservation
The condition of acoustic environments is critical to ecological systems and shapes the quality of people's visits to natural areas.9 A bird song alerts a visitor to a rare species. The footsteps of a bear signal hikers to take precautions. Preserving opportunities to hear natural sounds is an important component of protecting wildlife, ecosystem functioning, and visitor experiences.
Developing effective acoustic conservation strategies requires an understanding of when and where particular sounds are most vital to wildlife and visitors. Additionally, knowing what species are predicted to be most vulnerable to noise and therefore will benefit most from preserving certain sounds may be key to developing sound-mitigation strategies. For example, weekend road closures in Rock Creek Park in Washington, DC, benefit both visitors who want to escape the buzz of the city and breeding songbirds that are producing calls to attract mates.17 Installing noise barriers around oil- and gas-extraction sites returns rural landscapes to nearly natural acoustic conditions, and, unsurprisingly, the biological community benefits.18
Another aspect of acoustic ecology is worthy of mention: our cultural heritage. Many human sounds are intrinsic to a given place and are thus protected for their historical and cultural value. Such sounds, including the clang of mission bells, Native American drumming, and the crack of musket fire, can immerse listeners in a cultural experience and connect them to a time or place they otherwise would never encounter. Preserving those unique sounds and creating opportunities to hear them without modern-day noise intrusions are invaluable to our rapidly changing society.
Area protection provides one opportunity to implement acoustic conservation strategies, and urban planners are also applying acoustic-ecology concepts to create natural listening experiences. Perhaps the greatest benefit will be achieved through human connection to, and awareness of, the sounds in our own backyards. Typically, city planners and policymakers do not engage with residents about their acoustic sensory experiences, but the perception of urban landscapes is shifting from that of barren ecological settings to places of wonder.
The hoot of an owl, the howl of a coyote, and the song of a bird connect people to the natural world and are signals of a thriving ecosystem. Multidisciplinary approaches, including understanding the interactions between many sensory systems, and cross-disciplinary collaborations are needed to fully recognize and mitigate the damaging effects of dramatically changing acoustic experiences. By integrating landscape architecture, ecology, acoustics, psychology, and innovative design, future city planners will design more sustainable cities for healthier citizens—both people and wildlife.
Megan McKenna is an acoustic biologist at the natural sounds and night skies division of the National Park Service in Fort Collins, Colorado.