Decoding syntactic class from EEG during spoken word recognition
McCall E. Sarrett, Alexa S. Gonzalez, Olivia Montañez, Joseph C. Toscano
Psychol. and Brain Sci., Villanova Univ., Tolentine Hall 334, 800 E Lancaster Ave., Villanova, PA 19085, [email protected]
J. Acoust. Soc. Am. 152 (4_Supplement): A59–A60 (1 October 2022). https://doi.org/10.1121/10.0015543
Meeting abstract. No PDF available.

A fundamental issue in spoken language comprehension is understanding how linguistic representations at different levels of organization (e.g., phonological, lexical, syntactic, and semantic) interact. In particular, there is debate about when different levels are accessed during spoken word recognition. Under serial processing models, comprehension proceeds sequentially, with earlier levels of analysis completed before later ones begin; under parallel processing models, representations at multiple levels can be activated simultaneously. The current study investigates this issue by isolating neural responses to syntactic class distinctions from acoustic and phonological responses. EEG data were collected in an event-related potential (ERP) experiment in which participants (N = 26) listened to words varying in syntactic class (nouns versus adjectives) that were controlled for low-level acoustic differences via cross-splicing. Machine learning techniques were used to decode syntactic class from ERP responses over time. Results showed that syntactic class is decodable approximately 160–190 ms after the average syntactic point of disambiguation in the words, a latency at which listeners are still processing acoustic information. This supports the prediction that different levels of representation have overlapping time courses. Overall, these results are consistent with a parallel, interactive processing model of spoken word recognition, in which higher-level information, such as syntactic class, is accessed while acoustic analysis is still ongoing.
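For readers unfamiliar with time-resolved decoding, the sketch below illustrates the general approach rather than the authors' specific pipeline: a classifier is trained and cross-validated separately at each time sample of the epoched EEG, and above-chance accuracy at a given latency indicates that syntactic class is decodable from the scalp data at that point. The data shapes, the logistic-regression classifier, and the cross-validation scheme are assumptions for illustration; the abstract does not specify them.

# Minimal sketch of time-resolved decoding of syntactic class from epoched
# EEG. Hypothetical example only: the classifier, cross-validation scheme,
# and data shapes are assumptions, not the pipeline reported in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(epochs, labels, n_splits=5):
    """Cross-validated decoding accuracy at each time sample.

    epochs : ndarray, shape (n_trials, n_channels, n_times)
        Baseline-corrected ERP epochs, e.g., time-locked to the point
        of disambiguation in each word.
    labels : ndarray, shape (n_trials,)
        Syntactic class per trial (0 = noun, 1 = adjective).
    """
    _, _, n_times = epochs.shape
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = np.empty(n_times)
    for t in range(n_times):
        # Features at sample t: the scalp topography across all channels.
        scores[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
    return scores

# Simulated stand-in data: 200 trials, 64 channels, 100 time samples.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 64, 100))
labels = rng.integers(0, 2, size=200)
accuracy = decode_over_time(epochs, labels)  # one score per time sample

In practice, analyses of this kind are typically run per participant, with the resulting accuracy time courses then tested against chance across participants to identify latencies (such as the 160–190 ms window reported here) at which decoding is reliable.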