Auditory and visual perceptual processes interact during the identification of speech sounds. Some evaluations of this interaction have utilized a comparison of performance on audio and audiovisual word recognition tasks. A measure derived from these data, R, can be used as an index of the perceptual gain due to multisensory stimulation relative to unimodal stimulation. Recent evidence has indicated that cross‐modal relationships between the acoustic and optical forms of speech stimuli exist. Furthermore, this cross‐modal information may be used by the perceptual mechanisms responsible for integrating disparate sensory signals. However, little is known about the ways in which acoustic and optical signals carry cross‐modal information. The present experiment manipulated the acoustic form of speech in systematic ways that selectively disrupted candidate sources of cross‐modal information in the acoustic signal. Participants were then asked to perform a simple word recognition task with the transformed words in either auditory‐alone or audiovisual presentation conditions. It was predicted that audiovisual gain would be relatively high for those transformations in which the relative spacing of formants was preserved but would be nonexistent for those transformations that destroyed the relative spacing of formants. The results are discussed in terms of existing theories of audiovisual speech perception.
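The abstract does not define R, but a common formulation of relative audiovisual gain in this literature (after Sumby and Pollack, 1954) normalizes the audiovisual benefit by the room for improvement left by audio-alone performance. The sketch below illustrates that assumed formulation; the function name and the specific formula are illustrative assumptions, not necessarily the authors' exact measure.

```python
def audiovisual_gain(a_percent: float, av_percent: float) -> float:
    """Relative audiovisual gain R = (AV - A) / (100 - A).

    a_percent:  percent-correct word recognition, audio-alone condition
    av_percent: percent-correct word recognition, audiovisual condition
    Returns a value of 1.0 when vision closes the entire gap to ceiling,
    and 0.0 when adding vision yields no benefit.
    """
    if a_percent >= 100:
        return 0.0  # audio-alone already at ceiling; no room for gain
    return (av_percent - a_percent) / (100 - a_percent)


# Example: 50% correct audio-alone, 80% correct audiovisual
print(audiovisual_gain(50, 80))  # 0.6
```

Under this formulation, an acoustic transformation that destroys cross-modal information would be expected to drive R toward zero even when audio-alone performance is well below ceiling.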
October 15, 2008
Meeting abstract. No PDF available.
Effects of acoustic transformation on cross‐modal speech information and audiovisual gain II.
James W. Dias
Dept. of Psych., California State Univ. Fresno, 2576 E. San Ramon ST11, Fresno, CA 93740
Lorin Lachs
Dept. of Psych., California State Univ. Fresno, 2576 E. San Ramon ST11, Fresno, CA 93740
J. Acoust. Soc. Am. 124, 2458 (2008)
Citation
James W. Dias, Lorin Lachs; Effects of acoustic transformation on cross‐modal speech information and audiovisual gain II. J. Acoust. Soc. Am. 1 October 2008; 124 (4_Supplement): 2458. https://doi.org/10.1121/1.4782654