Sound contrasts are redundantly cued in the speech stream by acoustic features spanning various time scales. Listeners are presented with evidence for a particular category at various temporal intervals and must coalesce this information into a coherent percept to achieve accurate recognition. Previous work on tone languages has shown that listeners prioritize consonants, then vowels, then lexical tone during phonological and word processing, despite lexical tone being a suprasegmental cue that unfolds with the vowel. We present an online eye-tracking study to assess the time course of Cantonese listeners' recognition of a target word (e.g., 包 /pa⌢u55/ 'bun') with competitors for rime (北 /pak55/ 'north'), onset (敲 /ha⌢u55/ 'to knock'), and tone (爆 /pa⌢u33/ 'to explode') co-present on the screen. This design allows a test of the relative prioritization and contribution of consonant, vowel, and tone information in phonological processing. If vowels are prioritized before tones, we predict increased looking times to tone competitors. If vowels and tones are processed jointly, we predict equal looking times to vowel and tone competitors. Data collection with Gorilla is ongoing. Data analysis will focus on overall proportions of looking time to the target and competitors.
October 2021
Meeting abstract. No PDF available.
The prioritization of consonants, vowels, and tone in Cantonese word recognition
Fion Fung
Linguist., Univ. of BC, Vancouver, BC, Canada

Rachel Soo
Linguist., Univ. of BC, 100 St. George St., Toronto, ON M5S 3G3, Canada, [email protected]

Molly Babel
Linguist., Univ. of BC, Vancouver, BC, Canada
J. Acoust. Soc. Am. 150, A310 (2021)
Citation
Fion Fung, Rachel Soo, Molly Babel; The prioritization of consonants, vowels, and tone in Cantonese word recognition. J. Acoust. Soc. Am. 1 October 2021; 150 (4_Supplement): A310. https://doi.org/10.1121/10.0008391