When learning to recognize words in a new dialect, listeners rely on both acoustics (Norris, McQueen, & Cutler 2003) and extra-linguistic social cues (Kraljic, Brennan, & Samuel 2008). This study investigates how listeners use both acoustic and social information after exposure to a new dialect. Listeners who were speakers of American English (AmE) were trained to correctly identify the front vowels of New Zealand English (NZE). To an AmE speaker, these vowels are highly confusable: “head” is often heard as “hid”. Listeners were then played 500 ms vowels produced by both AmE and NZE speakers. Half of the listeners were given correct information about the speakers' dialect, and half were given incorrect information. Listeners' classifications of the vowels were affected by what they were told about the speakers' dialect: vowels labeled as belonging to a given dialect, whether correctly or not, were more likely to be classified as if they were from that dialect. There was also an effect of the speakers' actual dialect. Overall, the AmE-speaking listeners were more accurate when identifying vowels from AmE than vowels from NZE; even with the very limited acoustic information available, listeners remained sensitive to inter-dialectal differences. Any model of cross-dialect perception, then, must account for listeners' use of both social and acoustic cues.