Meeting abstract, 1 March 2019. No PDF available.
The acoustics of feeling: Emotional prosody in the StoryCorps corpus
Rachel M. Olsen
Linguist, Univ. of Georgia, 142 Gilbert Hall, Athens, GA 30602-6205, rmm75992@uga.edu
J. Acoust. Soc. Am. 145 (3_Supplement), 1930 (2019). https://doi.org/10.1121/1.5102025

Effective communication depends not only on lexical content but also on how language is produced and perceived. Prosodic elements such as intensity, pitch accent (i.e., the pattern of low and high tones used in a stressed word), and intonation have been suggested to aid in conveying emotional affect (e.g., happiness) in acted speech. For example, the word “yes” produced with a falling tone, high intensity, and large pitch range may indicate a high-arousal emotion like anger, whereas the same word produced with flatter intonation and lower intensity may indicate a low-arousal emotion like gloominess. Understanding the emotion with which words are produced facilitates appropriate communication because it tells the interlocutor how best to respond. Although this is intuitively true, the role of prosody in conveying emotional meaning is understudied, particularly in natural speech. This project thus utilizes StoryCorps, an extensive corpus of naturalistic interviews (publicly available at www.storycorps.org), to acoustically analyze pitch-accent usage in speech conveying different emotional affects. Portions of the StoryCorps interviews that convey overt emotional affect are selected and transcribed, and f0 trajectory, pitch range, duration, and intensity are tracked across stressed vowels to explore whether distinctive pitch-accent patterns are used to convey different emotions.
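As a meeting abstract, the text above includes no implementation details. The following is a minimal sketch of how the measures it names (f0 trajectory, pitch range, duration, and intensity over a stressed vowel) could be extracted with the parselmouth Python bindings to Praat; the file name, interval times, and pitch floor/ceiling are illustrative assumptions, not details from the study.

import numpy as np
import parselmouth  # Praat bindings: pip install praat-parselmouth

def measure_stressed_vowel(wav_path, start, end,
                           pitch_floor=75.0, pitch_ceiling=500.0):
    """f0 trajectory (Hz), pitch range (Hz), duration (s), and mean
    intensity (dB) for one hand-labeled stressed-vowel interval."""
    sound = parselmouth.Sound(wav_path)
    vowel = sound.extract_part(from_time=start, to_time=end)

    # f0 track over the vowel; unvoiced frames come back as 0 Hz and are dropped
    pitch = vowel.to_pitch(pitch_floor=pitch_floor, pitch_ceiling=pitch_ceiling)
    f0 = pitch.selected_array['frequency']
    f0 = f0[f0 > 0]

    intensity = vowel.to_intensity(minimum_pitch=pitch_floor)

    return {
        'f0_trajectory_hz': f0,
        'pitch_range_hz': float(f0.max() - f0.min()) if f0.size else np.nan,
        'duration_s': end - start,
        # arithmetic mean of the dB contour; a simplification of Praat's energy mean
        'mean_intensity_db': float(intensity.values.mean()),
    }

# Hypothetical usage: a stressed vowel spanning 1.23-1.41 s in one interview clip
# print(measure_stressed_vowel("interview_clip.wav", 1.23, 1.41))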