By controlling the parameters of an articulatory synthesizer [C. H. Coker, Proc. IEEE 64, 452–459 (1976)] with an unconstrained optimization technique, the synthesizer can be made to adapt to and emulate natural utterances. For ten simple English vowels in steady state we have observed convergence of the first three formants to within a few hertz of those of human speech in fewer than 25 iterations. In addition, the corresponding positions of the articulators are physiologically reasonable. We are currently studying methods for controlling the articulatory dynamics of the synthesizer in order to imitate temporally varying utterances. Details of the adaptation algorithm and experimental results are given. Finally, the relevance of this study to problems in speech recognition, synthesis, and transmission is outlined.
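The adaptation described above can be pictured as a derivative-free search over articulatory parameters that minimizes the mismatch between synthesized and measured formants. The following Python sketch illustrates that idea under stated assumptions: the toy linear articulatory-to-formant mapping, the parameter vector, the target formant values, and the choice of Nelder-Mead are all illustrative stand-ins, not the authors' synthesizer or algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the Coker articulatory synthesizer: maps an
# articulatory parameter vector to the first three formant frequencies (Hz).
# In the study this would be the synthesizer's acoustic model, not this toy map.
def synth_formants(params):
    A = np.array([[600.0,  150.0,  -80.0],
                  [-300.0, 900.0,  200.0],
                  [100.0, -250.0, 1200.0]])
    base = np.array([500.0, 1500.0, 2500.0])
    return base + A @ params

def formant_error(params, target_formants):
    # Sum of squared differences between synthesized and measured formants.
    return float(np.sum((synth_formants(params) - target_formants) ** 2))

# Assumed measured formants of a steady-state vowel (values chosen for illustration).
target = np.array([730.0, 1090.0, 2440.0])

# Unconstrained, derivative-free optimization of the articulatory parameters.
result = minimize(formant_error, x0=np.zeros(3), args=(target,),
                  method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-2, "maxiter": 200})

print("articulatory parameters:", result.x)
print("synthesized formants (Hz):", synth_formants(result.x))
```

In the actual experiments the error criterion and search method may differ; the sketch only shows how formant matching by unconstrained optimization can be organized.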
Adaptive emulation of human speech by an articulatory speech synthesizer
S. E. Levinson and C. E. Schmidt
Acoustics Research Department, Bell Laboratories, Murray Hill, NJ 07974
S. E. Levinson, C. E. Schmidt; Adaptive emulation of human speech by an articulatory speech synthesizer. J. Acoust. Soc. Am. 1 November 1980; 68 (S1): S18. https://doi.org/10.1121/1.2004611