In recent years, computational modeling has proved to be an essential tool for investigating the cognitive processes underlying speech perception (see, e.g., Scharenborg & Boves, 2010). Here we address the question of how well an end-to-end computational model that takes the acoustic signal as input simulates the behavioral responses of actual participants. We used the Massive Auditory Lexical Decision (MALD) database recordings, comprising 26,800 isolated words produced by a single male native speaker of English. MALD response data came from 232 native speakers of English, each of whom responded to a subset of the recorded words in an auditory lexical decision experiment (Tucker et al., submitted). We applied DIANA, a recently developed end-to-end computational model of word perception (Ten Bosch et al., 2013; Ten Bosch et al., 2015), to model the MALD response latency data. DIANA takes the acoustic signal as input, activates internal word representations without assuming a prelexical categorical decision, and outputs estimated response latencies and lexicality judgments. We report the results of the participant-to-model comparison and discuss the simulated between-word competition as a function of time in the DIANA model.
October 01 2017
Meeting abstract. No PDF available.
Computational modeling of human isolated auditory word recognition using DIANA
Filip Nenadic, Linguist, Univ. of Alberta, Edmonton, AB, Canada
Louis ten Bosch, Radboud Univ., Nijmegen, Netherlands
Benjamin V. Tucker, Linguist, Univ. of Alberta, 4-32 Assiniboia Hall, Edmonton, AB T6G 2E7, Canada, [email protected]
J. Acoust. Soc. Am. 142, 2704 (2017)
Citation
Filip Nenadic, Louis ten Bosch, Benjamin V. Tucker; Computational modeling of human isolated auditory word recognition using DIANA. J. Acoust. Soc. Am. 1 October 2017; 142 (4_Supplement): 2704. https://doi.org/10.1121/1.5014864