Adults’ categorization of speech sounds is influenced by several aspects of lexical and phonological organization, including neighborhood density and biphone probability. The role of both influences in models of speech recognition has been debated, in part because the two effects are difficult to disambiguate: the measures are highly correlated, since words in denser neighborhoods tend to contain high-probability biphone sequences. Accordingly, interactive models can account for neighborhood density effects through lexical feedback, with biphone probability effects arising as a by-product. Conversely, autonomous models, which lack feedback from the lexicon, can explain density effects using biphone probabilities alone. We present two experiments testing cases that disambiguate these effects. In Experiment 1, listeners categorized re-synthesized /ɛ/∼/æ/ continua. The vowel continua were presented in CVC frames, both endpoints of which were English non-words. Crucially, we manipulated the neighborhood density and biphone probability of each endpoint of the continuum independently. Our results show independent contributions of neighborhood density and biphone probability to categorization. In Experiment 2, currently underway, we use eye-tracking to disentangle the time course of each effect. Results will be discussed in the context of the role of feedback in speech recognition.
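
For readers less familiar with the two measures, the sketch below illustrates one common way they can be operationalized: neighborhood density as the number of lexicon entries one phoneme substitution, deletion, or addition away from a target form, and biphone probability as the mean relative frequency of the target's adjacent phoneme pairs in the lexicon. The toy lexicon, phoneme labels, and exact formulas here are illustrative assumptions, not the stimuli or norming procedure used in the experiments.

```python
# Minimal, illustrative sketch (not the authors' actual norming procedure):
# neighborhood density and biphone probability computed over a toy,
# phonemically transcribed lexicon. Real studies use large frequency-weighted
# lexical databases and often positional phonotactic statistics.

from collections import Counter

# Hypothetical toy lexicon: each word is a tuple of phoneme symbols.
LEXICON = [
    ("k", "ae", "t"),   # "cat"
    ("k", "ae", "p"),   # "cap"
    ("b", "ae", "t"),   # "bat"
    ("k", "eh", "t"),   # nonce CVC
    ("m", "ae", "p"),   # "map"
]

PHONEMES = sorted({p for word in LEXICON for p in word})


def neighbors(word):
    """All forms one phoneme substitution, deletion, or addition away."""
    word = tuple(word)
    forms = set()
    for i in range(len(word)):                      # substitutions
        for p in PHONEMES:
            forms.add(word[:i] + (p,) + word[i + 1:])
    for i in range(len(word)):                      # deletions
        forms.add(word[:i] + word[i + 1:])
    for i in range(len(word) + 1):                  # additions
        for p in PHONEMES:
            forms.add(word[:i] + (p,) + word[i:])
    forms.discard(word)
    return forms


def neighborhood_density(word):
    """Number of lexicon entries that are phonological neighbors of word."""
    lex = set(LEXICON)
    return sum(1 for form in neighbors(word) if form in lex)


def biphone_probability(word):
    """Mean relative frequency of the word's adjacent phoneme pairs."""
    biphone_counts = Counter(
        (w[i], w[i + 1]) for w in LEXICON for i in range(len(w) - 1)
    )
    total = sum(biphone_counts.values())
    pairs = [(word[i], word[i + 1]) for i in range(len(word) - 1)]
    return sum(biphone_counts[p] / total for p in pairs) / len(pairs)


if __name__ == "__main__":
    for target in [("k", "ae", "t"), ("g", "eh", "p")]:
        print(target,
              "density =", neighborhood_density(target),
              "biphone prob =", round(biphone_probability(target), 3))
```

Because the two quantities are derived from the same lexicon, high-density forms tend to receive high biphone probabilities under definitions like these, which is the correlation that makes the effects hard to tease apart and motivates stimuli in which the two are manipulated independently.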