A novel method of feature extraction for phoneme recognition in continuous speech is proposed that employs mutual information between acoustic features and phonemes. Various acoustic features are coded by the vector quantization (VQ) method, and a method to discriminate phonemes through an effective combination of these VQ codes is developed. To construct an optimal algorithm for phoneme discrimination, entropy and mutual information, in addition to conditional probability, between phoneme labels and features are also taken into consideration. The effectiveness of each acoustic feature for describing the characteristics of a phoneme in a given environment is evaluated based on the mutual information. The LPC mel‐cepstrum, its pattern of temporal change over frames, and power are used as acoustic features. Three experiments were conducted: the first on the optimization of frame labeling, the second on the detection of vowels using a sequence of frame labels, and the third on word discrimination. The effectiveness of the proposed method was verified by these experiments.
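The abstract does not give the authors' estimation procedure, but the quantity it relies on, mutual information between discrete VQ code indices and phoneme labels, can be sketched as follows. The function name, the toy data, and the use of empirical co-occurrence counts are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mutual_information(codes, phonemes):
    """Estimate I(C; P) between per-frame VQ code indices and phoneme
    labels from their empirical joint distribution (illustrative sketch)."""
    codes = np.asarray(codes)
    phonemes = np.asarray(phonemes)
    # Co-occurrence counts of (VQ code, phoneme label) pairs
    joint = np.zeros((codes.max() + 1, phonemes.max() + 1))
    for c, p in zip(codes, phonemes):
        joint[c, p] += 1
    joint /= joint.sum()                      # joint probability P(c, p)
    pc = joint.sum(axis=1, keepdims=True)     # marginal P(c)
    pp = joint.sum(axis=0, keepdims=True)     # marginal P(p)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (pc * pp), 1.0)
    return float(np.sum(joint * np.log2(ratio)))  # bits

# Hypothetical usage: frame-level VQ codes aligned with phoneme labels
frame_codes = [0, 1, 1, 2, 0, 2, 1]
frame_phonemes = [0, 0, 1, 1, 0, 1, 1]
print(mutual_information(frame_codes, frame_phonemes))
```

In this reading, a feature whose VQ codes carry more bits about the phoneme labels would be weighted as more effective for discrimination, which matches the evaluation criterion described in the abstract.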
