This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal—the AV advantage—has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.
March 01 2015
Visual speech information: A help or hindrance in perceptual processing of dysarthric speech
Stephanie A. Borrie
Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322
a) Author to whom correspondence should be addressed. Electronic mail: [email protected]
J. Acoust. Soc. Am. 137, 1473–1480 (2015)
Article history
Received: June 02 2014
Accepted: January 21 2015
Citation
Stephanie A. Borrie; Visual speech information: A help or hindrance in perceptual processing of dysarthric speech. J. Acoust. Soc. Am. 1 March 2015; 137 (3): 1473–1480. https://doi.org/10.1121/1.4913770