Cross‐modal perceptual learning of spectrally degraded speech: Behavioral and neuroimaging studies

Frank Eisner, Carolyn McGettigan, Stuart Rosen, Andrew Faulkner, Sophie K. Scott. J. Acoust. Soc. Am. 1 May 2008; 123 (5_Supplement): 3331. https://doi.org/10.1121/1.2933845

Meeting abstract.

We investigated normal‐hearing listeners' ability to adapt to a speech signal that simulates aspects of the stimulation received from a cochlear implant. The training materials were spoken sentences that were spectrally degraded by noiseband‐vocoding and shifted upwards in frequency. A control condition consisted of spectrally inverted, unintelligible versions of these stimuli. Participants listened passively to these sentences and on each trial received visual feedback, either a written version of the sentence or a video of the talker who originally produced it. Learning under both feedback conditions was relatively fast: participants improved on average by 25% in keyword recognition scores after 100 trials. No learning effects were observed in the control condition. We further used functional magnetic resonance imaging to investigate which cortical areas may be recruited during learning. A comparison of the degraded, learnable sentences with the unlearnable control stimuli showed activity in the left superior temporal sulcus both during passive listening and while receiving feedback. In contrast, the left inferior frontal gyrus was activated only when participants were receiving feedback in the learnable condition. These results suggest that the inferior frontal gyrus plays an important role in integrating acoustic‐phonetic processing with externally provided feedback.
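The noiseband‐vocoding manipulation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual stimulus‐processing code: the number of bands, filter order, band edges, and shift factor are assumptions chosen for clarity. Each analysis band's amplitude envelope modulates a noise carrier filtered into an upward‐shifted output band, simulating a basalward shift of cochlear‐implant stimulation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=8, f_lo=100.0, f_hi=4000.0, shift=1.3):
    """Noise-vocode `signal`: split it into log-spaced analysis bands,
    extract each band's amplitude envelope, and use that envelope to
    modulate noise filtered into a frequency-shifted output band."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # analysis band edges (Hz)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))        # broadband noise carrier
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass the speech and take its Hilbert amplitude envelope
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, signal)))
        # Carrier band shifted upward by `shift`, clipped below Nyquist
        c_lo = min(lo * shift, 0.98 * fs / 2)
        c_hi = min(hi * shift, 0.99 * fs / 2)
        sos_c = butter(4, [c_lo, c_hi], btype="band", fs=fs, output="sos")
        out += sosfiltfilt(sos_c, env * noise)      # envelope-modulated noise
    return out

# Illustrative usage on a synthetic signal
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t)
voc = noise_vocode(sig, fs)
```

The spectrally inverted control stimuli would require a different manipulation (flipping the spectrum about a center frequency) and are not shown here.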