Computational models of sound segregation typically assume that pitch plays a key role in timbre identification. This hypothesis was investigated by presenting listeners with short segments of static vowel sounds and asking them to identify the vowel quality (timbre), the octave (tone height), or the note (tone chroma) of the sound. There were four vowel categories (/a/, /i/, /u/, and /eh/), four octave categories (centered on C1, C2, C3, and C4), and four note categories (C, D, E, and F), and performance was measured as a function of the number of glottal periods in the vowel sound. The results show that at all stimulus durations it was easiest to identify the vowel quality (mean 94% correct), followed by the octave (71%) and, finally, the note (52%). The results indicate that timbre can be extracted reliably from vowel segments that are too short to support equivalent pitch judgments, whether note identification or the less precise judgment of the octave of the sound. Thus it is unlikely that pitch plays a key role in timbre extraction, at least at short durations.