Responses of single auditory‐nerve fibers in the anesthetized cat to spoken nasal consonant–vowel syllables were recorded and analyzed as spectrograms and as three‐dimensional spatial‐time and spatial‐frequency plots. Among other features, formant transitions are clearly represented in the fibers' response synchronization properties. During vocalic segments, especially those of /mu/ and /ma/, responses at a stimulus level near 75 dB SPL are strongly dominated by frequencies near the second formant (F2) for most fibers whose characteristic frequencies (CFs) are at or above F2; at more moderate levels, the same fibers may instead synchronize to frequencies closer to their own CFs. Significant differences are also found between the response properties of high‐ and low/medium‐spontaneous‐rate fibers.
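The "response synchronization" described above is conventionally quantified with a synchronization (vector-strength) index computed from spike times relative to an analysis frequency such as a formant. The sketch below is only an illustration of that standard measure, not the paper's analysis pipeline (which builds spectrogram-like and spatial plots from the fiber responses); the function name, example frequencies, and simulated spike times are hypothetical.

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Synchronization (vector) strength of a spike train to a frequency.

    spike_times : spike times in seconds
    freq        : analysis frequency in Hz (e.g., a formant frequency)
    Returns a value in [0, 1]; 1 indicates perfect phase locking.
    """
    spike_times = np.asarray(spike_times, dtype=float)
    if spike_times.size == 0:
        return 0.0
    phases = 2.0 * np.pi * freq * spike_times          # phase of each spike re: the analysis frequency
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Illustrative use: spikes locked to a 1.2-kHz component (a stand-in for F2)
rng = np.random.default_rng(0)
f2 = 1200.0
cycles = np.arange(200) / f2                            # roughly one spike per cycle
spikes = cycles + rng.normal(0.0, 0.05 / f2, cycles.size)  # small timing jitter
print(vector_strength(spikes, f2))                      # near 1: strong synchrony to F2
print(vector_strength(spikes, 537.0))                   # unrelated frequency: near 0
```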
December 01 1987
Responses of auditory‐nerve fibers to nasal consonant–vowel syllables
Li Deng
Department of Neurophysiology and Department of Electrical and Computer Engineering, University of Wisconsin—Madison, Madison, Wisconsin 53706
C. Daniel Geisler
Department of Neurophysiology and Department of Electrical and Computer Engineering, University of Wisconsin—Madison, Madison, Wisconsin 53706
J. Acoust. Soc. Am. 82, 1977–1988 (1987)
Article history
Received: December 16, 1986
Accepted: September 2, 1987
Citation
Li Deng, C. Daniel Geisler; Responses of auditory‐nerve fibers to nasal consonant–vowel syllables. J. Acoust. Soc. Am. 1 December 1987; 82 (6): 1977–1988. https://doi.org/10.1121/1.395642