Virtual acoustic environments are created by reproducing the pressures that real acoustic sources would produce at a listener's ears. In this way, a listener can perceive virtual acoustic sources as coming from any desired direction. Several methods accomplish this using headphones or loudspeaker arrays, but not all are practical for everyday use. Some use cases, such as automobiles and home theater systems, limit the use of headphones or the number and placement of loudspeakers. These limitations also bound the performance of each possible system, such as the allowable head movement and the accuracy of localization. An exhaustive-search optimizer was used to evaluate the performance of various sparse loudspeaker arrays, given a limited number of possible loudspeaker positions in a free-field environment for a single stationary listener. The optimizer sought to maximize allowable head rotation and translation and to minimize localization errors. Arrays of two or four loudspeakers were considered; a spherical head model provided the necessary head-related transfer functions, and cross-talk cancellation was used to create the virtual acoustic environment. From the results, Pareto fronts were created that can be used to find the best loudspeaker array configuration for given design constraints.
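The cross-talk cancellation named in the abstract can be sketched as a per-frequency matrix inversion: for a two-loudspeaker array, the 2×2 matrix of head-related transfer functions (ear × loudspeaker) is inverted so that each ear receives only its intended binaural channel. The HRTF values below are illustrative placeholders, not the spherical-head-model transfer functions used in the study:

```python
import numpy as np

# Hypothetical sketch of frequency-domain cross-talk cancellation (CTC) for a
# two-loudspeaker array. H[k] is the 2x2 HRTF matrix at frequency bin k, with
# entry (ear, speaker). The CTC filters are C[k] = inv(H[k]), so the cascade
# H[k] @ C[k] is the identity and each ear hears only its binaural channel.

def ctc_filters(H):
    """Batch-invert the stack of 2x2 HRTF matrices to get CTC filters."""
    return np.linalg.inv(H)  # np.linalg.inv inverts each 2x2 slice

# Toy HRTF matrices at a few frequency bins: ipsilateral (direct) paths near
# unity gain, contralateral (crosstalk) paths attenuated and phase-shifted.
n_bins = 4
H = np.empty((n_bins, 2, 2), dtype=complex)
for k in range(n_bins):
    direct = 1.0 * np.exp(-1j * 0.1 * k)
    cross = 0.3 * np.exp(-1j * 0.4 * k)
    H[k] = [[direct, cross], [cross, direct]]

C = ctc_filters(H)

# Verify perfect cancellation in this idealized, noise-free sketch.
for k in range(n_bins):
    assert np.allclose(H[k] @ C[k], np.eye(2))
```

In practice the inversion is regularized, since HRTF matrices become ill-conditioned at some frequencies and listener positions, which is one reason loudspeaker placement affects the robustness metrics (allowable head movement, localization error) that the optimizer trades off.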
Meeting abstract. No PDF available.
April 01 2022
Optimization of loudspeaker positions for virtual auditory displays in a free-field environment
Nathaniel Wells
Brigham Young Univ., Dept. of Phys. and Astronomy, N283 ESC, Provo, UT 84602; nateswells@gmail.com
Scott D. Sommerfeldt
Jonathan D. Blotter
Brigham Young Univ., Provo, UT
J. Acoust. Soc. Am. 151, A251 (2022)
Citation
Nathaniel Wells, Scott D. Sommerfeldt, Jonathan D. Blotter; Optimization of loudspeaker positions for virtual auditory displays in a free-field environment. J. Acoust. Soc. Am. 1 April 2022; 151 (4_Supplement): A251. https://doi.org/10.1121/10.0011227