Improving the validity of speech-recognition models requires an understanding of the conditions in which speech is experienced in everyday life. Listening conditions that degrade the signal—noise, competing talkers, disordered speech—have received most of the attention in the literature. But what about adverse conditions that do not alter the integrity of the signal, such as listening to speech under a non-auditory cognitive load (CL)? Drawing upon a variety of behavioral methods, this presentation investigates the effects of a concurrent attentional or mnemonic task on the relative reliance on acoustic cues and lexical knowledge during speech-perception tasks. The results show that listeners under CL downplay the contribution of acoustic detail and increase their reliance on lexical-semantic knowledge. However, this greater reliance on lexical-semantic knowledge is a cascaded effect of impoverished phonetic processing, not a direct consequence of CL itself. Ways of integrating CL into the functional architecture of existing speech-recognition models are discussed.