Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept, as in the McGurk effect. It is classically considered that processing proceeds independently in the auditory and visual systems before they interact at some representational stage, yielding an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would bind together the appropriate pieces of audio and video information, before fusion per se in a second stage. If so, it should be possible to design experiments that lead to unbinding. It is shown here that when a given McGurk stimulus is preceded by an incoherent audiovisual context, the McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed onto video sentences, or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage “binding and fusion” model for audiovisual speech perception.
Binding and unbinding the auditory and visual streams in the McGurk effect
Olha Nahorna, Frédéric Berthommier, and Jean-Luc Schwartz a)
GIPSA-Lab, Speech and Cognition Department, UMR 5216, CNRS, Grenoble University, France
a) Author to whom correspondence should be addressed. Electronic mail: [email protected]
J. Acoust. Soc. Am. 132, 1061–1077 (2012)
Article history
Received: May 16 2011
Accepted: May 21 2012
Published online: August 08 2012
Citation
Olha Nahorna, Frédéric Berthommier, Jean-Luc Schwartz; Binding and unbinding the auditory and visual streams in the McGurk effect. J. Acoust. Soc. Am. 1 August 2012; 132 (2): 1061–1077. https://doi.org/10.1121/1.4728187
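The abstract describes the results in terms of a two-stage “binding and fusion” account: a first stage decides how strongly the audio and video streams are bound, and a second stage fuses the bound evidence into a percept. The sketch below is purely illustrative and is not the authors' model; the probabilities, the multiplicative fusion rule, and the function names are hypothetical placeholders chosen only to show how lowering a binding weight after an incoherent context can pull the fused percept back toward the auditory token.

# Minimal illustrative sketch of a two-stage "binding and fusion" scheme.
# Nothing here is taken from the paper: the probabilities, the binding rule,
# and the function names are hypothetical placeholders for the general idea.

def binding_weight(context_coherence):
    """Stage 1 (binding): map the coherence of the preceding audiovisual
    context (0 = fully incoherent, 1 = fully coherent) to a weight gating
    how strongly the visual stream enters fusion."""
    return max(0.0, min(1.0, context_coherence))

def fuse(p_audio, p_visual, w):
    """Stage 2 (fusion): combine auditory and visual evidence over candidate
    percepts; with w = 1 the visual stream fully participates, and as w
    approaches 0 the percept follows the auditory stream alone."""
    candidates = set(p_audio) | set(p_visual)
    scores = {c: p_audio.get(c, 1e-6) * (p_visual.get(c, 1e-6) ** w)
              for c in candidates}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Audio /ba/ dubbed onto visual /ga/: the classic McGurk stimulus.
p_audio = {"ba": 0.80, "da": 0.15, "ga": 0.05}
p_visual = {"ga": 0.45, "da": 0.50, "ba": 0.05}

for label, coherence in [("coherent context", 1.0), ("incoherent context", 0.2)]:
    w = binding_weight(coherence)
    percept = fuse(p_audio, p_visual, w)
    print(label, max(percept, key=percept.get), percept)
# A coherent context leaves binding intact and the fused percept shifts to
# "da" (McGurk-like); an incoherent context lowers the binding weight and
# the winning percept stays on the auditory "ba".

The exponential gating of the visual likelihood is only one of many possible ways to attenuate a stream; the point of the toy example is simply that an incoherent preceding context, by reducing the binding weight, reduces the visual contribution to fusion and hence the McGurk response.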