Auditory presentation of low‐frequency speech components, and tactile or electrocochlear stimulation derived from speech, can improve the speechreading of speech segments. A method is described for predicting both the level of aided performance and the resulting error pattern from the confusion matrix for each modality separately. The method is based on a characterization of confusion matrices in terms of a multidimensional Thurstonian decision model that allows performance to be described in terms of sensitivity and bias. In the multimodal case, the decision space is assumed to be the product space of the decision spaces corresponding to the stimulation modes. In certain cases, multimodal sensitivity is roughly equal to the vector sum of the sensitivities for each stimulation mode, indicating that cues are integrated with little interference. [Work supported by NIH.]
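To illustrate the vector-sum claim (this is a sketch, not the authors' implementation): if the bimodal decision space is taken as the orthogonal product of the unimodal spaces, the predicted bimodal sensitivity for a stimulus pair is the Euclidean length of the concatenated unimodal d' vectors. The function name and the numerical values below are hypothetical.

```python
import numpy as np

def predicted_bimodal_sensitivity(d_prime_a, d_prime_b):
    """Vector-sum prediction of bimodal sensitivity from two unimodal
    d' values (e.g., auditory/tactile and visual), assuming the modes
    contribute along orthogonal dimensions of a product decision space."""
    d_a = np.asarray(d_prime_a, dtype=float)
    d_b = np.asarray(d_prime_b, dtype=float)
    return np.sqrt(d_a ** 2 + d_b ** 2)

# Hypothetical pairwise d' values for two consonant contrasts
print(predicted_bimodal_sensitivity([1.2, 0.5], [0.8, 2.0]))  # approx. [1.44, 2.06]
```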
