Visual cues from the talker’s face improve speech perception because the talker adopts discrete facial configurations, known as visemes, each corresponding to a limited set of possible phonemes in the auditory signal. Visual cues alone are insufficient for complete recovery of the speech signal, because an individual viseme can occur with more than one phoneme. Visual speech cues may also provide non-phonemic benefits in noise by aiding segregation of target speech from background sounds. This experiment isolated phonemic and non-phonemic benefits of visual cues through identification of strings of consonants (e.g., aCHaBaGa) in an eight-alternative forced-choice task. Listeners discriminated each consonant presented from a randomly selected foil in each position. In the heterovisemic case, the foil had a different viseme than the target. In the homovisemic case, each foil was drawn from a list of consonants sharing the target’s viseme, so the visual stimulus provided no phonemic information about the speech. Results show a benefit from visual speech cues in the homovisemic condition roughly half the size of that in the heterovisemic condition. This suggests that a substantial portion of the visual benefit can be attributed to factors other than extraction of phonemic information from visemes in the visual stimulus.