We aim to develop a framework for the analysis of phonetic contrast systems that is fundamentally lexical and does not depend on assumptions of inventory homogeneity or of independence of distribution in words and higher-order systems. Previously (Redmon and Jongman, 2018, JASA) we reported results of an open-class identification experiment on a 240-word sample of the 26,793-word single-speaker database of Tucker et al. (2018). Here, we present results of the second experiment in the project: a two-alternative forced-choice (2AFC) task in which the choice set is limited to obstruent-contrastive minimal pairs. This task forms the opposite end of a continuum from the least restricted use of acoustic and higher-order information (Exp. 1) to localized attention to a particular contrast in the lexicon. Just as the first experiment provided a lower bound on listeners’ sensitivity to different cues in the signal, the results of this experiment provide an upper bound on those estimates. Participants were presented with 480 stimuli balanced across contrastive obstruents in #CV, VCV, and VC# positions. The results were then used to determine edge weights in a phonological lexical network on the model of Vitevitch (2008), which emphasizes the interaction between acoustic features, neighborhood topology, and higher-order information in the lexicon.
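To picture the edge-weighting step, here is a minimal Python sketch. The toy lexicon (words coded as phoneme tuples) and the 2AFC confusion rates are hypothetical illustrations, not the study's materials; and where Vitevitch (2008) also links words differing by one insertion or deletion, this sketch covers only substitution-based minimal pairs, as in the experiment.

```python
from itertools import combinations

def is_minimal_pair(p1, p2):
    """True if two equal-length phoneme sequences differ in exactly one segment."""
    return len(p1) == len(p2) and sum(a != b for a, b in zip(p1, p2)) == 1

def confusability_weighted_network(words, confusion):
    """Connect substitution-based minimal pairs; weight each edge by the
    symmetrized 2AFC confusion rate of the contrasting segment pair."""
    edges = {}
    for w1, w2 in combinations(words, 2):
        if is_minimal_pair(words[w1], words[w2]):
            # Locate the single contrasting segment pair.
            a, b = next((x, y) for x, y in zip(words[w1], words[w2]) if x != y)
            edges[(w1, w2)] = (confusion.get((a, b), 0.0)
                               + confusion.get((b, a), 0.0)) / 2
    return edges

# Hypothetical toy lexicon and confusion rates:
lexicon = {"pat": ("p", "æ", "t"), "bat": ("b", "æ", "t"), "pad": ("p", "æ", "d")}
rates = {("p", "b"): 0.10, ("b", "p"): 0.06, ("t", "d"): 0.20}
print(confusability_weighted_network(lexicon, rates))
```

Words that are not minimal pairs (here, "bat" and "pad") receive no edge, so network topology and the perceptual weights are derived jointly from the same lexical coding.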
March 01 2019
Meeting abstract. No PDF available.
Lexically dependent estimation of acoustic information in speech II: Minimal pair confusability
Charles Redmon;
Dept. of Linguist, Univ. of Kansas, 1541 Lilac Ln., Rm. 427, Lawrence, KS 66046, redmon@ku.edu
Allard Jongman
Dept. of Linguist, Univ. of Kansas, 1541 Lilac Ln., Rm. 427, Lawrence, KS 66046, redmon@ku.edu
J. Acoust. Soc. Am. 145, 1915–1916 (2019)
Citation
Charles Redmon, Allard Jongman; Lexically dependent estimation of acoustic information in speech II: Minimal pair confusability. J. Acoust. Soc. Am. 1 March 2019; 145 (3_Supplement): 1915–1916. https://doi.org/10.1121/1.5101955
Related Content
Lexically dependent estimation of acoustic information in speech
J. Acoust. Soc. Am. (September 2018)
Lexically dependent estimation of acoustic information in speech III: Cross-splicing verification of cue weights
J. Acoust. Soc. Am. (October 2019)