The present study examined the application of the articulation index (AI) as a predictor of the speech‐recognition performance of normal‐hearing and hearing‐impaired listeners with and without hearing protection. The speech‐recognition scores of 12 normal‐hearing and 12 hearing‐impaired subjects were measured for a wide range of conditions designed to be representative of those in the workplace. Conditions included testing in quiet, in two types of background noise (white versus speech spectrum), at three signal‐to‐noise ratios (+5, 0, −5 dB), and in three conditions of protection (unprotected, earplugs, earmuffs). The mean results for all 21 listening conditions and both groups of subjects were accurately described by the AI. Moreover, a single transfer function relating performance to the AI could describe all the data from both groups.
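The AI framework the abstract refers to can be illustrated with a minimal sketch. Assuming the classic band-audibility formulation (each frequency band's speech-peak-to-noise ratio is clipped to a 30-dB range, scaled to [0, 1], and weighted by a band-importance function summing to 1), the computation looks like the following. The band count, importance weights, and levels below are illustrative placeholders, not values from the study.

```python
def band_audibility(speech_peak_db, noise_db):
    """Audibility of one band: (speech peak - noise) clipped to [0, 30] dB, scaled to [0, 1]."""
    snr = speech_peak_db - noise_db
    return max(0.0, min(snr, 30.0)) / 30.0

def articulation_index(speech_peaks_db, noise_levels_db, importance):
    """AI = importance-weighted sum of band audibilities (weights sum to 1)."""
    return sum(w * band_audibility(s, n)
               for w, s, n in zip(importance, speech_peaks_db, noise_levels_db))

# Illustrative five-band example (placeholder values):
importance = [0.10, 0.20, 0.30, 0.25, 0.15]    # band-importance weights, sum to 1.0
speech_peaks = [60.0, 62.0, 58.0, 55.0, 50.0]  # speech-peak level per band, dB SPL
noise = [55.0, 50.0, 40.0, 45.0, 48.0]         # noise level per band, dB SPL

ai = articulation_index(speech_peaks, noise, importance)  # 0 (inaudible) .. 1 (fully audible)
```

In this framework, hearing protection and hearing loss both enter as reduced per-band audibility, and a transfer function then maps the resulting AI to a predicted recognition score; the study's finding is that one such function sufficed for both listener groups.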
March 01 1990
Application of the articulation index to the speech recognition of normal and impaired listeners wearing hearing protection
Graham Wilde
Audiology Department, Medical Center, Fort Gordon, Augusta, Georgia 30905
Larry E. Humes
Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405
J. Acoust. Soc. Am. 87, 1192–1199 (1990)
Article history
Received: March 08 1989
Accepted: October 23 1989
Citation
Graham Wilde, Larry E. Humes; Application of the articulation index to the speech recognition of normal and impaired listeners wearing hearing protection. J. Acoust. Soc. Am. 1 March 1990; 87 (3): 1192–1199. https://doi.org/10.1121/1.398793