In two prior studies (Redmon and Jongman, 2018, 2019, JASA), results of open- and closed-class word recognition tasks on items drawn from a large single-speaker database (Tucker et al., 2018) were used to train a model in which acoustic cue weights were optimized to distinguish words in the lexicon rather than a balanced inventory of phones. That work identified cues that carried greater weight when the lexicon was considered as a whole than when a symmetric set of contrasts was studied in controlled syllable productions. To verify the causal role of such cues in word recognition, two new cross-spliced versions of the open- and closed-class tasks were run with a subset of items from each prior experiment. In each, an enhancement condition was created by cross-splicing diphones from database items with more distinct values on a given cue dimension, and consequently greater model-predicted accuracy. For each such item, a parallel reduction condition was created in which accuracy was predicted to decrease due to the cross-splicing of a diphone with a more ambiguous value on a given cue. Results will serve to validate the relative roles of the different cues in a manner that is external to the model-fitting procedure.
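The enhancement/reduction design described above can be illustrated with a minimal sketch. Everything here is hypothetical: the cue values, the logistic accuracy model, and the donor pool are toy stand-ins for the authors' actual cue-weight model and database, shown only to make the selection logic concrete (a donor diphone is chosen to raise model-predicted accuracy for enhancement, or lower it for reduction).

```python
import math

def predicted_accuracy(cue_value, weight=2.0, bias=0.0):
    """Toy logistic model: more distinct cue values yield higher
    predicted recognition accuracy (illustrative, not the fitted model)."""
    return 1.0 / (1.0 + math.exp(-(weight * cue_value + bias)))

def select_donors(target_cue, donor_cues):
    """For a target item's cue value, pick donor diphone cue values for the
    enhancement condition (highest predicted accuracy) and the reduction
    condition (lowest predicted accuracy)."""
    base = predicted_accuracy(target_cue)
    enhancement = max(donor_cues, key=predicted_accuracy)
    reduction = min(donor_cues, key=predicted_accuracy)
    # Enhancement should not lower, and reduction should not raise,
    # the model-predicted accuracy relative to the original item.
    assert predicted_accuracy(enhancement) >= base >= predicted_accuracy(reduction)
    return enhancement, reduction

# Example: a target with cue value 0.5 and four candidate donor diphones.
enh, red = select_donors(0.5, [0.2, 0.9, -0.4, 1.5])
```

In this toy run the most distinct donor (cue value 1.5) is selected for enhancement and the most ambiguous one (-0.4) for reduction, mirroring the two conditions in the abstract.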
October 2019
Meeting abstract. No PDF available.
Lexically dependent estimation of acoustic information in speech III: Cross-splicing verification of cue weights
Charles Redmon
Linguist, Univ. of Kansas, 1541 Lilac Ln., Rm 427, Lawrence, KS 66046, [email protected]
Allard Jongman
Linguist, Univ. of Kansas, Lawrence, KS
J. Acoust. Soc. Am. 146, 3055 (2019)
Citation
Charles Redmon, Allard Jongman; Lexically dependent estimation of acoustic information in speech III: Cross-splicing verification of cue weights. J. Acoust. Soc. Am. 1 October 2019; 146 (4_Supplement): 3055. https://doi.org/10.1121/1.5137595
Related Content
Lexically dependent estimation of acoustic information in speech II: Minimal pair confusability
J. Acoust. Soc. Am. (March 2019)
Lexically dependent estimation of acoustic information in speech
J. Acoust. Soc. Am. (September 2018)
Unfolding of phonetic information over time: A database of Dutch diphone perception
J. Acoust. Soc. Am. (January 2003)
Using acoustic distance and acoustic absement to quantify lexical competition
J. Acoust. Soc. Am. (February 2022)
Speech recognition research at AT&T Bell Laboratories
J. Acoust. Soc. Am. (August 2005)