By controlling the parameters of an articulatory synthesizer [C. H. Coker, Proc. IEEE 64, 452–459 (1976)] with an unconstrained optimization technique, we can make the synthesizer adapt to and emulate natural utterances. For ten simple English vowels in steady state we have observed convergence of the first three formants to within a few hertz of those of human speech in fewer than 25 iterations. In addition, the corresponding positions of the articulators are physiologically reasonable. We are currently studying methods for controlling the articulatory dynamics of the synthesizer in order to imitate temporally varying utterances. Details of the adaptation algorithm and experimental results are given. Finally, the relevance of this study to problems in speech recognition, synthesis, and transmission is outlined.
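
To make the adaptation loop concrete, the sketch below illustrates the general idea under stated assumptions rather than the authors' actual method: `formants_from_articulators` is a toy affine stand-in for Coker's synthesizer (not his model), the target formant values are merely illustrative, and the unconstrained optimizer is an off-the-shelf derivative-free Nelder-Mead search, not the specific technique used in the paper.

```python
# Hedged sketch: fit articulator parameters so that the synthesized first
# three formants match measured ones. Everything below the imports is a
# stand-in for illustration only.
import numpy as np
from scipy.optimize import minimize

# Toy affine map from three articulator parameters to formants F1-F3 (Hz).
# This replaces the articulatory synthesizer; it is NOT Coker's model.
A = np.array([[ 250.0,  -80.0,   30.0],
              [ 120.0,  600.0, -200.0],
              [ -60.0,  150.0,  500.0]])
base = np.array([500.0, 1500.0, 2500.0])   # formants of a "neutral" configuration

def formants_from_articulators(params):
    return base + A @ params

def formant_error(params, target):
    # Sum of squared relative errors between synthesized and measured formants.
    synth = formants_from_articulators(params)
    return float(np.sum(((synth - target) / target) ** 2))

# Illustrative steady-state target formants (roughly /i/-like), in Hz.
target_formants = np.array([300.0, 2200.0, 2600.0])

result = minimize(
    formant_error,
    x0=np.zeros(3),            # start from the neutral articulator configuration
    args=(target_formants,),
    method="Nelder-Mead",      # derivative-free unconstrained search
    options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 500},
)

print("fitted articulator parameters:", result.x)
print("synthesized formants (Hz):", formants_from_articulators(result.x))
print("target formants (Hz):     ", target_formants)
```

With the real synthesizer in place of the toy map, the same loop would iterate articulator settings until the synthesized formants track the measured ones, which is the adaptation the abstract describes.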