In this paper we consider some effects of auditory-nerve nonlinearities on the representation of complex stimuli such as vowels. At low sound levels, profiles of discharge rate versus characteristic frequency in populations of auditory-nerve fibers show well-defined peaks at frequencies corresponding to the formants of a vowel stimulus. At levels above about 60 dB SPL, these peaks are not seen, primarily because of rate saturation. In addition, effects related to two-tone suppression act strongly on units with characteristic frequencies in the vicinity of the second and third formants, suppressing their responses at high levels and contributing to the loss of distinct peaks in this region. Units with spontaneous rates below about 1/s show the effects of suppression more dramatically than do units with higher spontaneous rates; despite the suppression, however, this low-spontaneous-rate population retains formant peaks in its rate profiles up to levels at least 20 dB higher than does the higher-spontaneous-rate population. The temporal patterns of response of auditory-nerve fibers contain considerable information about the spectrum of a steady-state vowel. In general, phase-locking to the formant frequencies is stronger than locking to other harmonics of the stimulus. As sound level increases, synchrony to the formant frequencies saturates; at the same time, however, responses to nonformant harmonics are suppressed by the responses to the formant harmonics. This synchrony suppression allows phase-locking to the formant frequencies to remain dominant even at high levels. The exception to this behavior is locking to harmonics and intermodulation products of the first two formant frequencies (2F1, 3F1, F2−F1, etc.), which increases rapidly in amplitude at higher sound levels. These distortion products are, to some extent, a consequence of the rectification inherent in hair-cell/nerve-fiber transduction and may also reflect the presence of propagating combination tones. [Supported by grants from the National Institute of Neurological and Communicative Disorders and Stroke.]
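To make the synchrony measure discussed above concrete, the sketch below estimates a synchronized rate (vector strength scaled by mean discharge rate) at each harmonic of a vowel's fundamental from a list of spike times. It is only an illustrative sketch: the synthetic spike generator, the F0 and formant values, and the 2·VS·rate scaling are assumptions chosen for this example, not the analysis code used in the paper.

```python
import numpy as np

def synchronized_rate(spike_times, freq, duration):
    """Amplitude (spikes/s) of the spike train's Fourier component at `freq`:
    twice the vector strength times the mean discharge rate."""
    n = len(spike_times)
    if n == 0:
        return 0.0
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    vs = np.abs(np.exp(1j * phases).sum()) / n   # vector strength, in [0, 1]
    return 2 * vs * (n / duration)

# Toy demonstration with synthetic spikes (all values below are assumptions).
rng = np.random.default_rng(0)
f0, f1 = 100.0, 500.0          # fundamental and first formant, Hz (assumed)
duration = 1.0                 # seconds of simulated response
dt = 1e-5
t = np.arange(0.0, duration, dt)
# Crude inhomogeneous-Poisson stand-in for a fiber phase-locked to F1.
rate = 150.0 * (1.0 + 0.8 * np.cos(2 * np.pi * f1 * t))   # spikes/s
spikes = t[rng.random(t.size) < rate * dt]

for k in range(1, 11):         # first ten harmonics of F0
    h = k * f0
    print(f"{h:6.0f} Hz  {synchronized_rate(spikes, h, duration):6.1f} sp/s")
```

In this toy example the printed synchronized rate peaks at the fifth harmonic (500 Hz, the assumed F1) and is near zero elsewhere, mirroring the dominance of formant-frequency locking described in the abstract.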
Effects of nonlinearities on speech encoding in the auditory nerve
M. B. Sachs
Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205
E. D. Young
Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205
J. Acoust. Soc. Am. 65, S102 (1979)
Citation
M. B. Sachs, E. D. Young; Effects of nonlinearities on speech encoding in the auditory nerve. J. Acoust. Soc. Am. 1 June 1979; 65 (S1): S102. https://doi.org/10.1121/1.2016903