After a brief historical review of articulatory models, a factor analysis of the lateral shapes of the vocal tract is described. In the analysis, the profiles are specified as the sum of a small number of linear components. Studies at CNET indicate that, by using an arbitrary factor analysis in conjunction with principal component analysis, it is possible to derive a linear model in which each component can be interpreted in articulatory terms, such as jaw position, tongue-body position, etc. The statistical model can therefore be regarded as an articulatory model. Analysis of radiocinematographic data shows that the same vowel can be produced differently depending on the speaker and the phonetic context, suggesting compensatory articulation. Such a model is also useful for investigating articulatory–acoustic relationships. For example, F1–F2 trajectories are calculated by varying a single parameter value while the remaining parameter values are held fixed at values appropriate for a given vowel. The results show that, to obtain a similar F1–F2 pattern, different jaw positions can be compensated for only by the tongue-body position for front vowels, and only by the lip aperture for back vowels. Finally, limitations of current articulatory models are discussed.
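The core idea of representing each vocal-tract profile as a mean shape plus a weighted sum of a few linear components can be illustrated with plain principal component analysis. The sketch below is not Maeda's actual procedure (which uses an arbitrary factor analysis guided by articulatory interpretation); it uses synthetic profile data and an SVD-based PCA purely to show the decomposition and reconstruction step.

```python
import numpy as np

# Illustrative sketch only: synthetic "profiles" generated from three
# latent factors, then recovered as mean shape + weighted components.
rng = np.random.default_rng(0)

n_profiles, n_points = 200, 30          # hypothetical sample of tract profiles
mean_shape = np.sin(np.linspace(0, np.pi, n_points))
latent_basis = rng.normal(size=(3, n_points))       # hypothetical factors
latent_weights = rng.normal(size=(n_profiles, 3))
profiles = (mean_shape
            + latent_weights @ latent_basis
            + 0.01 * rng.normal(size=(n_profiles, n_points)))  # small noise

# PCA via SVD of the centered data matrix
X = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
components = Vt[:k]                     # k linear components (rows)
scores = X @ components.T               # per-profile parameter values

# Each profile is reconstructed from only k components
recon = profiles.mean(axis=0) + scores @ components
err = np.max(np.abs(recon - profiles))
print(f"max reconstruction error with {k} components: {err:.4f}")
```

With three components the reconstruction error is limited to the small noise term, mirroring the abstract's claim that a small number of linear components suffices to specify the profiles.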
November 1988 (published online August 13, 2005)
Improved articulatory models
Shinji Maeda
Département Recherche en Communication par Parole, Centre National d'Etudes des Télécommunications (CNET), 22301 Lannion, France
Shinji Maeda; Improved articulatory models. J. Acoust. Soc. Am. 1 November 1988; 84 (S1): S146. https://doi.org/10.1121/1.2025845