Emotional prosody is an important social cue that conveys the speaker's intention, and proper recognition of emotional prosody facilitates the interpretation of spoken language. Previous event-related potential (ERP) studies documented early negative (mismatch negativity, MMN) and later positive (P3a) involuntary neural responses to detecting a change in emotional speech prosody (Schirmer et al., 2005; Zora et al., 2019). Nonetheless, these ERP components were elicited with the linguistic content of the speech stimuli held constant. It remains unclear whether natural affective prosody carried by varying linguistic content would elicit similar activation patterns. The current study adopted a multi-feature oddball paradigm to investigate ERP responses to three basic emotional prosodies (happy, angry, and sad) embedded in varying monosyllabic English words. Thirty-three adult listeners (23 female) completed the experiment. Unlike previous reports, there was no clear MMN response to the detection of emotional prosody patterns in the stimuli. Instead, a late positive response (LPR) component was observed fronto-centrally in response to changes in affective prosody. Linear mixed-effects models further confirmed significantly larger LPRs to happy prosody than to angry or sad prosody, suggesting that the LPR is a more reliable neural marker for emotional prosody recognition.
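As a hedged illustration only (not the authors' actual analysis code or data), the reported linear mixed-effects comparison of LPR amplitude across emotion conditions could be sketched with `statsmodels`. The data layout, amplitude values, and random-effects structure below are assumptions for demonstration; only the design (33 subjects, three emotion conditions, larger LPR to happy prosody) follows the abstract:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in data: 33 subjects x 3 emotion conditions,
# one mean LPR amplitude (in microvolts) per subject per condition.
# The condition means are invented; the abstract only states that
# happy prosody elicited larger LPRs than angry or sad prosody.
subjects = np.repeat(np.arange(33), 3)
emotion = np.tile(["happy", "angry", "sad"], 33)
assumed_mean = {"happy": 3.0, "angry": 1.5, "sad": 1.2}
lpr = np.array([assumed_mean[e] for e in emotion]) + rng.normal(0.0, 1.0, len(emotion))

df = pd.DataFrame({"subject": subjects, "emotion": emotion, "lpr": lpr})

# Random intercept per subject; fixed effect of emotion with
# "happy" as the reference level, so the two fitted contrasts
# directly test angry-vs-happy and sad-vs-happy.
model = smf.mixedlm("lpr ~ C(emotion, Treatment('happy'))", df,
                    groups=df["subject"])
result = model.fit()
print(result.summary())
```

With happy as the treatment baseline, negative coefficients for the angry and sad contrasts would correspond to the abstract's finding of larger LPRs to happy prosody.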
October 01 2020
Meeting abstract. No PDF available.
Late positive response indexes neural sensitivity to emotional prosody differences in spoken words
Chieh Kao
Speech-Language-Hearing Sci., Univ. of Minnesota, 164 Pillsbury Dr. SE, Minneapolis, MN 55455, kaoxx096@umn.edu
Yang Zhang
Speech-Language-Hearing Sci., Univ. of Minnesota, Minneapolis, MN
J. Acoust. Soc. Am. 148, 2506–2507 (2020)
Citation
Chieh Kao, Yang Zhang; Late positive response indexes neural sensitivity to emotional prosody differences in spoken words. J. Acoust. Soc. Am. 1 October 2020; 148 (4_Supplement): 2506–2507. https://doi.org/10.1121/1.5146964