Little is known about how to characterize normal variability in voice quality within and across utterances from normal speakers. Given a standard set of acoustic measures of voice, how similar are samples of 50 women's voices? Fifty women, all native speakers of English, read 5 sentences twice on each of 3 days, yielding 30 sentences per speaker. The VoiceSauce analysis program estimated many acoustic parameters for the vowels and approximant consonants in each sentence, including F0, harmonic amplitude differences, harmonic-to-noise ratios, and formant frequencies. Each sentence was then characterized by the mean and standard deviation of each measure. Linear discriminant analysis tested how well each speaker's set of 30 sentences could be acoustically distinguished from all other speakers' sentences. Initial work testing just 3 speakers from this sample found that the speakers could be completely discriminated (classified) by these measures, and largely discriminated by just 2 of them. Such a simple result is not expected for the larger sample of speakers. We will present results concerning how successfully speakers can be discriminated, how performance varies with the number of discriminant functions, and which acoustic measures do the most work. Implications for recognition by listeners will be discussed. [Work supported by NSF and NIH.]
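The discrimination analysis described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the features here are synthetic stand-ins for the per-sentence VoiceSauce summary measures (means and standard deviations of F0, harmonic amplitudes, HNR, formants), and the speaker-separation parameters are arbitrary assumptions chosen only to make the example run.

```python
# Hypothetical sketch of speaker discrimination via LDA on per-sentence
# acoustic summary features. All data are synthetic stand-ins; the real
# study uses VoiceSauce measures from 50 speakers x 30 sentences.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_speakers, n_sentences, n_features = 50, 30, 20  # assumed feature count

# Give each speaker a distinct mean feature vector (between-speaker
# variation) plus within-speaker, sentence-to-sentence noise.
speaker_means = rng.normal(0.0, 1.0, size=(n_speakers, n_features))
X = (np.repeat(speaker_means, n_sentences, axis=0)
     + rng.normal(0.0, 0.5, size=(n_speakers * n_sentences, n_features)))
y = np.repeat(np.arange(n_speakers), n_sentences)

# Cross-validated classification accuracy: how separable are the speakers?
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Inspecting the fitted model's discriminant axes (e.g., `lda.fit(X, y).scalings_`) is one way to ask which acoustic measures carry the most discriminative weight, paralleling the abstract's question about which measures do the most work.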