This study compares classification methods applied to an acoustic repertoire of the Asian elephant (Elephas maximus). Recordings were made of captive elephants at the Oregon Zoo in Portland, OR, and of domesticated elephants in Thailand. Acoustic and behavioral data were collected in a variety of social contexts and environmental noise conditions. Calls were classified using three methods. First, calls were classified manually using perceptual aural cues combined with visual inspection of spectrograms for differentiation of fundamental frequency contour, tonality, and duration. Second, a set of 29 acoustic features was measured for nonoverlapping calls using the MATLAB-based program Osprey, then principal component analysis was applied to reduce the feature set. A neural network was used for classification. Finally, hidden Markov models, commonly used for pattern recognition, were utilized to recognize call types using perceptually-weighted cepstral features as input. All manual and automated classification methods agreed on the structural distinction of six basic call types (trumpets, squeaks, squeals, roars, rumbles, and barks), with two call types (squeaks and squeals) being highly variable. Given the consistency of results among the classification methods across geographically and socially disparate subject groups, we believe automated call detection could successfully be applied to acoustic monitoring of Asian elephants.
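The second classification method (feature measurement, principal component analysis, then a neural network) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline: the random feature matrix stands in for the 29 Osprey measurements, the choice of 5 retained components is arbitrary, and a single softmax layer trained by gradient descent stands in for the neural network actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 29 acoustic features measured per call:
# 3 hypothetical call types, 30 calls each, with separated cluster means.
n_per, n_feat, n_classes = 30, 29, 3
means = rng.normal(0, 5, size=(n_classes, n_feat))
X = np.vstack([means[c] + rng.normal(0, 1, size=(n_per, n_feat))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

# --- PCA via SVD: center the features, project onto top-k components ---
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
Z = Xc @ Vt[:k].T          # reduced feature set, shape (90, 5)

# --- minimal softmax classifier standing in for the neural network ---
W = np.zeros((k, n_classes))
b = np.zeros(n_classes)
Y = np.eye(n_classes)[y]   # one-hot call-type targets
for _ in range(300):
    logits = Z @ W + b
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    grad = (P - Y) / len(y)            # cross-entropy gradient
    W -= 0.5 * Z.T @ grad
    b -= 0.5 * grad.sum(axis=0)

acc = (np.argmax(Z @ W + b, axis=1) == y).mean()
```

On well-separated synthetic clusters the reduced 5-dimensional representation preserves the class structure and the classifier fits it readily; real call features would of course overlap far more, which is why the paper compares this approach against manual labels and HMMs.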