In a musical ensemble, performers synchronize to a governing tempo by resolving differences in sound-onset timing across individual players, even without a conductor's cue. Listeners and players alike must construct an internalized sense of when the beat occurs and adapt that sense dynamically as the performance unfolds. Here, we examined this process by simulating individual sound-onset timings with an ensemble of 40 virtual “metronomes” around 90 bpm, with which we asked listeners to tap along for approximately 10 beats. We manipulated coupling strength at five levels (very weak, weak, medium, strong, perfect), where stronger coupling corresponds to a more definitively periodic beat. Inter-tap intervals (ITIs) from 8 subjects were analyzed in three segments of the trial [early (taps 1–3), middle (taps 4–6), and late (taps 7–9)]. We also compared the phase coherence of taps across listeners with the stimulus density. Stronger coupling yielded more stable ITIs, while ITIs shortened in later segments for the medium and weak conditions. Interestingly, taps coincided with the greatest stimulus density for weaker coupling, whereas taps led the stimulus for stronger coupling. The results suggest that listeners could maintain a collective beat percept, but less anticipatorily, when the sounds were less synchronized.
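The abstract does not specify how the coupled metronome ensemble was generated. A minimal sketch, assuming a Kuramoto-style mean-field phase model (one common way to produce "loosely coupled" concurrent periodic onsets), could look like the following; all function and parameter names here are illustrative assumptions, not the authors' implementation. The returned order-parameter magnitude is the standard measure of phase coherence across oscillators.

```python
import numpy as np

def simulate_onsets(n_osc=40, bpm=90.0, coupling=0.5, n_beats=10,
                    freq_jitter=0.02, dt=0.005, seed=0):
    """Generate sound-onset times from n_osc coupled phase oscillators.

    Each oscillator emits an onset whenever its phase wraps past 2*pi.
    Larger `coupling` pulls oscillators toward the ensemble mean phase,
    yielding a more definitively periodic collective beat.
    """
    rng = np.random.default_rng(seed)
    base_freq = 2 * np.pi * bpm / 60.0                  # rad/s at 90 bpm
    freqs = base_freq * (1 + freq_jitter * rng.standard_normal(n_osc))
    phases = rng.uniform(0, 2 * np.pi, n_osc)
    onsets = [[] for _ in range(n_osc)]
    t, t_end = 0.0, n_beats * 60.0 / bpm
    while t < t_end:
        # Kuramoto mean-field coupling toward the ensemble phase;
        # |order| in [0, 1] measures phase coherence of the ensemble.
        order = np.mean(np.exp(1j * phases))
        mean_phase = np.angle(order)
        phases += dt * (freqs + coupling * np.sin(mean_phase - phases))
        for i in np.flatnonzero(phases >= 2 * np.pi):
            onsets[i].append(t)                         # record onset time
        phases %= 2 * np.pi
        t += dt
    return onsets, abs(order)
```

Under this sketch, a strong-coupling run drives the coherence toward 1 (a nearly periodic collective beat), while zero coupling leaves the randomly initialized phases incoherent, mirroring the five coupling-strength conditions described above.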
Meeting abstract. No PDF available.
October 01 2019
Extracting beat from a crowd of loosely coupled, concurrent periodic stimuli
Nolan V. Lem
Ctr. for Comput. Res. in Music and Acoust. (CCRMA), Stanford Univ., 44 Olmsted Rd. Apt. 244, Stanford, CA 94305, [email protected]
Takako Fujioka
Ctr. for Comput. Res. in Music and Acoust. (CCRMA), Stanford Univ., Stanford, CA
J. Acoust. Soc. Am. 146, 3049 (2019)
Citation
Nolan V. Lem, Takako Fujioka; Extracting beat from a crowd of loosely coupled, concurrent periodic stimuli. J. Acoust. Soc. Am. 1 October 2019; 146 (4_Supplement): 3049. https://doi.org/10.1121/1.5137568