Studies of language learning in adulthood show that learners' native-language phonologies shape their non-native perception and production abilities. Nonetheless, listeners are able to learn to perceive new speech sound contrasts given training. The present study attempts to model how foreign consonants are perceived prior to training or second-language study. Brain responses of twelve monolingual native English listeners were recorded with electroencephalography (EEG) during passive listening to isolated consonant-vowel (CV) syllables read by speakers of five languages (Dutch, English, Hungarian, Hindi, Swahili). To model native-language phonology, EEG responses to native (English) syllables were used to train classifiers based on phonological feature labels for the consonants in each syllable. The trained classifiers were then applied to the EEG responses to foreign syllables, and the classifier outputs were used to generate confusion probabilities between each pair of foreign and native consonants for each of the four foreign languages. Confusion matrices based on the EEG classifiers compare favorably to confusion matrices based solely on the number of phonological feature mismatches. Ongoing work investigates whether optimal phonological features can be learned from the EEG data using clustering algorithms.
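The classification pipeline the abstract describes — per-feature classifiers trained on native-syllable EEG, then applied to foreign-syllable EEG to yield native-consonant confusion probabilities — can be sketched as follows. This is a minimal illustration, not the authors' actual code: the EEG feature vectors are simulated, the three-feature/four-consonant inventory is a toy stand-in for a full phonological feature system, and logistic regression and the independence assumption (multiplying per-feature probabilities) are assumptions for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical binary phonological feature values for a few consonants;
# the real study would use a full feature system over all English consonants.
features = ["voiced", "continuant", "labial"]
native_feats = {          # voiced, continuant, labial
    "p": [0, 0, 1],
    "b": [1, 0, 1],
    "s": [0, 1, 0],
    "z": [1, 1, 0],
}

# Simulated EEG responses: one vector per native-syllable trial. Each
# consonant gets a random "neural signature" plus trial-to-trial noise.
n_trials, n_dims = 40, 16
X, labels = [], []
for cons in native_feats:
    center = rng.normal(size=n_dims)
    X.append(center + 0.5 * rng.normal(size=(n_trials, n_dims)))
    labels += [cons] * n_trials
X = np.vstack(X)

# One binary classifier per phonological feature, trained on native data.
clfs = {}
for j, feat in enumerate(features):
    y = np.array([native_feats[c][j] for c in labels])
    clfs[feat] = LogisticRegression(max_iter=1000).fit(X, y)

def confusion_probs(eeg_response):
    """P(native consonant | EEG response to a foreign syllable), treating
    the feature classifiers as independent and multiplying probabilities."""
    p_feat = {f: clf.predict_proba(eeg_response[None, :])[0, 1]
              for f, clf in clfs.items()}
    scores = {}
    for cons, feat_vals in native_feats.items():
        p = 1.0
        for f, v in zip(features, feat_vals):
            p *= p_feat[f] if v else (1.0 - p_feat[f])
        scores[cons] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

# Apply to a (simulated) response to a foreign syllable.
probs = confusion_probs(rng.normal(size=n_dims))
```

Collecting `confusion_probs` over many foreign-syllable trials and averaging per foreign consonant would yield one row of a foreign-by-native confusion matrix, the quantity the abstract compares against feature-mismatch counts.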
October 01 2016
Meeting abstract. No PDF available.
Modeling native phonology and non-native speech perception using electroencephalography signals
Daniel McCloy
Inst. for Learning and Brain Sci., Univ. of Washington, Box 357988, Seattle, WA 98115-7988, [email protected]
Adrian K. Lee
Inst. for Learning and Brain Sci., Univ. of Washington, Box 357988, Seattle, WA 98115-7988, [email protected]
J. Acoust. Soc. Am. 140, 3337 (2016)
Citation
Daniel McCloy, Adrian K. Lee; Modeling native phonology and non-native speech perception using electroencephalography signals. J. Acoust. Soc. Am. 1 October 2016; 140 (4_Supplement): 3337. https://doi.org/10.1121/1.4970646