Talkers of tonal languages, such as Mandarin, use the acoustic cues of fundamental frequency (F0), amplitude, and duration both to indicate lexical meaning and to express linguistic and emotional prosody. It has therefore been hypothesized that tonal-language talkers have less prosodic “space” in which to signal emotional prosody using F0 than non-tonal-language talkers, and that these differences in prosodic processing should be evident in speech perception and production tasks. In addition, for talkers of both tonal and non-tonal languages, speaking rate interacts with emotional expression: a “sad” mood is generally expressed with slower speech, while “happy” and “angry” moods are marked by faster speech rates. Despite the overall importance of speaking rate in signaling particular emotional moods, few data exist for speakers of tonal languages such as Mandarin, in which lexical tones are typically specified for length. These issues can be addressed by analyzing and modeling data from individuals with cochlear implants, electronic hearing devices that provide relatively good temporal resolution but poor spectral resolution. Findings from our laboratory will be used to address models of prosody perception and production.