Acoustic and articulatory information as joint factors coexisting in the context sequence model of speech production

This simulation study presents the integration of an articulatory factor into the Context Sequence Model (CSM) (Wade et al., 2010) of speech production, using Polish sonorant data measured with Electromagnetic Articulography (EMA) (Mücke et al., 2010). Based on exemplar-theoretic assumptions (Pierrehumbert, 2001), the CSM models the speech production-perception loop operating on a sequential, detail-rich memory of previously processed utterance exemplars. An item is selected for production by context matching: the context of the utterance currently being produced is compared with the contexts of stored candidate items in memory. As demonstrated by Wade et al. (2010), the underlying exemplar weighting for speech production is based on about 0.5 s of preceding acoustic context and on the linguistic match of the exemplars' following context. We extended the CSM by incorporating articulatory information in parallel with the acoustic representation of the speech exemplars. Our study demonstrates that memorized raw articulatory information (the movement habits of the speaker) can also be exploited during speech production. The successful incorporation of this factor shows that not only acoustic but also articulatory information can be made directly available in a speech production model.
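The context-matching selection described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the data layout, the cosine similarity measure, and the equal acoustic/articulatory weights are all illustrative assumptions; the sketch only shows the general idea of scoring stored exemplars against the current production context along both channels in parallel.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length context vectors
    (0.0 if either vector is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_exemplar(cur_acoustic, cur_artic, memory,
                    w_acoustic=0.5, w_artic=0.5):
    """Pick the stored exemplar whose acoustic and articulatory contexts
    best match the current context (weighted sum of similarities).
    The 0.5/0.5 weighting is a hypothetical choice for illustration."""
    def score(exemplar):
        return (w_acoustic * cosine_similarity(cur_acoustic,
                                               exemplar["acoustic_context"])
                + w_artic * cosine_similarity(cur_artic,
                                              exemplar["artic_context"]))
    return max(memory, key=score)

# Toy exemplar memory with made-up context vectors.
memory = [
    {"label": "ex1",
     "acoustic_context": [0.9, 0.1, 0.0],
     "artic_context": [0.2, 0.8, 0.1]},
    {"label": "ex2",
     "acoustic_context": [0.1, 0.9, 0.2],
     "artic_context": [0.7, 0.1, 0.3]},
]

best = select_exemplar([0.8, 0.2, 0.1], [0.3, 0.7, 0.2], memory)
```

In this toy run, "ex1" wins because it is closer to the current context on both channels; lowering `w_artic` relative to `w_acoustic` would shift selection toward acoustically similar exemplars, which is the kind of trade-off the extended model makes explicit.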
Daniel Duran, Jagoda Bruni, Grzegorz Dogil; Acoustic and articulatory information as joint factors coexisting in the context sequence model of speech production. Proc. Mtgs. Acoust. 2 June 2013; 19 (1): 060091. https://doi.org/10.1121/1.4799009