Bioacousticians have argued that ecological feedback mechanisms contribute to shaping the acoustic signals of a variety of species, and anthropogenic changes in soundscapes have been shown to modify the spectral envelope of bird songs. Several studies posit that part of the variation in sound structure across spoken human languages could likewise reflect adaptation to the local ecological conditions of their use. Specifically, environments in which higher frequencies are less faithfully transmitted (such as denser vegetation or higher ambient temperatures) may favor greater use of sounds characterized by lower frequencies. Such languages are viewed as “more sonorous.” This paper presents a variety of tests of this hypothesis. Data on segment inventories and syllable structure are taken from LAPSyD, a database of phonological patterns in a large worldwide sample of languages. Correlations are examined with measures of temperature, precipitation, vegetation, and geomorphology, reflecting the mean values for the area in which each language is traditionally spoken. Major world languages, typically spoken across a range of environments, are excluded. Several comparisons show a correlation between ecological factors and the ratio of sonorant to obstruent segments in the languages examined, offering support for the idea that acoustic adaptation applies to human languages.
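The kind of comparison described can be sketched, for illustration only, as a rank correlation between each language's sonorant-to-obstruent ratio and an ecological measure such as mean annual temperature. The language labels and values below are hypothetical placeholders, not LAPSyD data, and the snippet is a minimal sketch rather than the authors' actual analysis.

```python
# Illustrative sketch only: correlate a per-language sonorant/obstruent
# ratio with mean annual temperature using a rank-based correlation.
# All language labels and values are hypothetical placeholders.
from scipy.stats import spearmanr

# (sonorant segment count, obstruent segment count, mean annual temperature in °C)
languages = {
    "lang_a": (12, 10, 26.4),
    "lang_b": (9, 18, 4.2),
    "lang_c": (14, 11, 23.1),
    "lang_d": (8, 20, 7.8),
    "lang_e": (11, 13, 15.5),
}

# Sonority ratio: number of sonorant segments divided by number of obstruents.
ratios = [son / obs for son, obs, _ in languages.values()]
temps = [temp for _, _, temp in languages.values()]

# Spearman's rho assesses a monotonic association without assuming linearity.
rho, p_value = spearmanr(ratios, temps)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```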
Meeting abstract. No PDF available.
September 01 2015
Human spoken language diversity and the acoustic adaptation hypothesis
Ian Maddieson
Department of Linguistics, University of New Mexico, MSC03-2130, Albuquerque, NM 87131-0001, [email protected]
Christophe Coupé
Laboratoire Dynamique du Langage, CNRS, Lyon, France
J. Acoust. Soc. Am. 138, 1838 (2015)
Citation
Ian Maddieson, Christophe Coupé; Human spoken language diversity and the acoustic adaptation hypothesis. J. Acoust. Soc. Am. 1 September 2015; 138 (3_Supplement): 1838. https://doi.org/10.1121/1.4933848