Studies of language learning in adulthood show that learners’ native-language phonology shapes their perception and production of non-native speech sounds. Nonetheless, listeners can learn to perceive new speech sound contrasts with training. The present study models how foreign consonants are perceived prior to training or second-language study. Electroencephalography (EEG) was used to record the brain responses of twelve monolingual native English listeners during passive listening to isolated consonant-vowel (CV) syllables read by speakers of five languages (Dutch, English, Hungarian, Hindi, Swahili). To model native-language phonology, the EEG responses to the native (English) syllables were used to train classifiers on phonological feature labels for the consonant in each syllable. The trained classifiers were then applied to the EEG responses to the foreign syllables, and the classifier outputs were used to generate confusion probabilities between each pair of foreign and native consonants for each of the four foreign languages. Confusion matrices derived from the EEG classifiers compare favorably to confusion matrices based solely on the number of phonological feature mismatches. Ongoing work investigates whether optimal phonological features can be learned from the EEG data using clustering algorithms.
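As a minimal sketch of the mapping from classifier outputs to confusion probabilities described above (this is an illustration, not the authors' code: the toy feature table, the two-feature consonant inventory, and the `confusion_probs` helper are all hypothetical, and the EEG classifier training step is omitted), one can treat each per-feature classifier output as a Bernoulli probability and combine them across features:

```python
# Hypothetical illustration of turning per-feature classifier outputs into
# consonant confusion probabilities. Each consonant is coded as a binary
# phonological feature vector; a trained per-feature classifier (not shown)
# is assumed to yield P(feature = 1 | EEG) for a foreign syllable.

# Toy feature table (hypothetical): consonant -> (voiced, nasal)
FEATURES = {"p": (0, 0), "b": (1, 0), "m": (1, 1)}


def confusion_probs(feature_probs, features=FEATURES):
    """Given per-feature probabilities P(feature_i = 1 | EEG) for one
    foreign token, return P(native consonant | EEG) by multiplying the
    Bernoulli likelihood of each feature value and normalizing."""
    scores = {}
    for cons, feats in features.items():
        p = 1.0
        for f_val, f_prob in zip(feats, feature_probs):
            # likelihood of the observed feature probability under this
            # consonant's feature specification
            p *= f_prob if f_val == 1 else (1.0 - f_prob)
        scores[cons] = p
    total = sum(scores.values())
    return {c: v / total for c, v in scores.items()}


# A foreign token whose EEG response looks strongly voiced (0.9) and
# weakly nasal (0.2) should be most confusable with /b/ in this toy setup.
row = confusion_probs([0.9, 0.2])
```

Collecting one such row per foreign consonant (averaged over tokens) would yield a foreign-by-native confusion matrix of the kind compared against the feature-mismatch baseline.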