Listeners are sensitive to correlations among the multiple probabilistic acoustic cues that define speech categories. In English stop consonant productions, for example, fundamental frequency (f0) is strongly correlated with voicing. Reflecting this regularity, perception of syllables varying in voice onset time shifts with changes in f0. Such sensitivity to the long‐term regularities of a language must be balanced with enough flexibility that speech perception can accommodate deviations from these regularities, such as those that may arise from variations in idiolect, dialect, or accented speech. The present experiments manipulate short‐term acoustic cue correlations experienced in online perception to investigate the interplay between sensitivity to long‐term and short‐term acoustic cue correlations. Using overt categorization and eyetracking, the present results indicate that speech categorization is influenced by local shifts in acoustic cue correlations that deviate from the long‐term regularities of English. These experiments further examine the time course of this short‐term learning and the degree to which it generalizes. The data suggest that listeners continually monitor speech input for regularity and tune online speech perception in relation to these regularities.
April 08 2009
Meeting abstract. No PDF available.
Measuring sensitivity to short‐term deviations in acoustic cue correlation.
Kaori Idemaru
Dept. of East Asian Lang. and Lit., Univ. of Oregon, Eugene, OR 97403
Lori L. Holt
Carnegie Mellon Univ., Pittsburgh, PA 15213
J. Acoust. Soc. Am. 125, 2753 (2009)
Citation
Kaori Idemaru, Lori L. Holt; Measuring sensitivity to short‐term deviations in acoustic cue correlation. J. Acoust. Soc. Am. 1 April 2009; 125 (4_Supplement): 2753. https://doi.org/10.1121/1.4784611