Speech and song are universal forms of vocal expression that reflect distinct channels of communication. While these two forms of expression share a common means of sound production, differences in the acoustic properties of speech and song have received little systematic attention. Here, we present evidence of acoustic differences between the speaking and singing voice. Twenty-four actors were recorded while speaking and singing statements with five emotions, at two emotional intensities, with two repetitions. Acoustic differences between speech and song were found in several parameters, including vocal loudness, spectral properties, and vocal quality. Interestingly, emotion was conveyed similarly in many acoustic features across speech and song. These results demonstrate the entwined nature of speech and song, and provide evidence in support of the shared emergence of speech and song as a form of early proto-language. These recordings form part of our new Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), which will be freely released in 2013.
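As a rough illustration of how acoustic parameters of this kind can be measured, the sketch below extracts frame-wise proxies for vocal loudness (RMS energy) and spectral properties (spectral centroid) from a matched pair of recordings. This is not the authors' analysis pipeline; the library choice (librosa) and the file names are assumptions for illustration only.

```python
# Illustrative sketch only: the abstract does not specify the analysis pipeline.
# This extracts rough proxies for two of the reported parameter families
# (vocal loudness and spectral properties) from a speech/song pair using librosa.
import numpy as np
import librosa

def summarize(path):
    """Return mean RMS energy (loudness proxy) and mean spectral centroid (Hz)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    rms = librosa.feature.rms(y=y)[0]                            # frame-wise RMS energy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # frame-wise spectral centroid
    return float(np.mean(rms)), float(np.mean(centroid))

# Hypothetical file names for one actor producing the same statement in both modes.
for label, path in [("speech", "actor01_speech_neutral.wav"),
                    ("song", "actor01_song_neutral.wav")]:
    rms_mean, centroid_mean = summarize(path)
    print(f"{label}: mean RMS = {rms_mean:.4f}, mean spectral centroid = {centroid_mean:.1f} Hz")
```

Comparing such summary statistics across speaking and singing productions of the same statements is one simple way to quantify the kinds of loudness and spectral differences the abstract reports; vocal-quality measures (e.g., jitter, shimmer) would require additional tooling.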
Meeting abstract. No PDF available.
Acoustic differences in the speaking and singing voice
Steven R. Livingstone
Psychology, Ryerson Univ., 350 Victoria St., Toronto, ON M5B 2K3, [email protected]
Katlyn Peck
Psychology, Ryerson Univ., 350 Victoria St., Toronto, ON M5B 2K3, [email protected]
Frank A. Russo
Psychology, Ryerson Univ., 350 Victoria St., Toronto, ON M5B 2K3, [email protected]
J. Acoust. Soc. Am. 133, 3591 (2013)
Citation
Steven R. Livingstone, Katlyn Peck, Frank A. Russo; Acoustic differences in the speaking and singing voice. J. Acoust. Soc. Am. 1 May 2013; 133 (5_Supplement): 3591. https://doi.org/10.1121/1.4806630