Adults’ categorization of speech sounds is influenced by different aspects of lexical and phonological organization, including neighborhood density and biphone probability. The role of both these influences in models of speech recognition has been a topic of debate, in part because the two effects are difficult to disambiguate: the measures are highly correlated, as words in denser neighborhoods tend to contain high-probability biphone sequences. Accordingly, interactive models can account for neighborhood density effects based on lexical feedback, with biphone probability effects as their by-product. Conversely, in the absence of feedback from the lexicon, autonomous models can explain density effects using biphone probabilities alone. We present two experiments testing cases which disambiguate these effects. In Experiment 1, listeners categorized re-synthesized /ɛ/∼/æ/ continua. Vowel continua were presented in CVC frames, both endpoints of which were English non-words. Crucially, we manipulated the neighborhood density and biphone probability of each endpoint of the continuum independently. Our results show independent contributions of neighborhood density and biphone probability to categorization. In Experiment 2, we use eye-tracking to disentangle the time course of each effect. Results will be discussed in the context of the role of feedback in speech recognition.
October 01 2019
Biphone probability and neighborhood density effects on phonetic categorization
Jeremy Steffman, Linguist., UCLA, 3125 Campbell Hall, Los Angeles, CA 90095-1543, jsteffman@g.ucla.edu
Megha Sundara, Linguist., UCLA, Los Angeles, CA
J. Acoust. Soc. Am. 146, 3056 (2019)
Citation: Jeremy Steffman, Megha Sundara; Biphone probability and neighborhood density effects on phonetic categorization. J. Acoust. Soc. Am. 1 October 2019; 146 (4_Supplement): 3056. https://doi.org/10.1121/1.5137601