Spatial audio reproduction has gained renewed interest in recent years with the growing presence of virtual reality applications. Ambisonics is currently the most widely adopted format for spatial audio content distributed via internet streaming services, which has raised the need for audio compression tailored to the format. Current perceptual audio coders used for stereo content rely heavily on masking thresholds to reduce data rates, but these thresholds do not account for spatial release from masking. This study begins an effort to update these thresholds for spatially separated sources in ambisonics. The listening tests were performed with sounds encoded in ambisonics so that any inherent limitations of the format are reflected in the results. Initial listening tests were carried out for a subset of possible conditions: sound sources were separated along the horizontal plane, with a specific set of separation angles between the masker and maskee. Suggestions are given for continuing the work for the full range of possible conditions.
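The abstract does not detail how the test stimuli were generated beyond encoding them in ambisonics with sources on the horizontal plane. As a rough illustration only, the sketch below encodes a masker and a spatially separated maskee into horizontal-only (2D) ambisonics using plain circular harmonics; the function name, the 30° separation, the placeholder stimuli, and the normalization are assumptions, not the authors' actual setup.

```python
import numpy as np

def encode_2d_ambisonics(signal, azimuth_deg, order=1):
    """Encode a mono signal at a horizontal azimuth into 2D (horizontal-only) ambisonics.

    Channel layout: [W, cos(az), sin(az), cos(2*az), sin(2*az), ...].
    Plain circular harmonics are used here; a real pipeline would also pick a
    normalization convention (FuMa, SN3D, N3D) and channel ordering.
    """
    az = np.deg2rad(azimuth_deg)
    channels = [signal]                                # zeroth-order (omnidirectional) channel
    for m in range(1, order + 1):
        channels.append(np.cos(m * az) * signal)       # degree-m cosine channel
        channels.append(np.sin(m * az) * signal)       # degree-m sine channel
    return np.stack(channels)                          # shape: (2*order + 1, num_samples)

# Hypothetical masker/maskee pair separated by 30 degrees on the horizontal plane.
fs = 48000
t = np.arange(fs) / fs
masker = 0.1 * np.random.randn(fs)                     # broadband masker (placeholder stimulus)
maskee = 0.01 * np.sin(2 * np.pi * 1000 * t)           # low-level 1 kHz probe (placeholder stimulus)

mix = encode_2d_ambisonics(masker, 0.0) + encode_2d_ambisonics(maskee, 30.0)
print(mix.shape)  # (3, 48000) for first order: W, X, Y
```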
Meeting abstract. No PDF available.
September 01 2018
Adapting masking thresholds for spatially separated sounds in two dimensional ambisonics
Yuval Adler
Ctr. for Comput. Res. in Music and Acoust., Stanford Univ., 660 Lomita Ct, Stanford, CA 94305, [email protected]
Prateek Murgai
Ctr. for Comput. Res. in Music and Acoust., Stanford Univ., 660 Lomita Ct, Stanford, CA 94305, [email protected]
J. Acoust. Soc. Am. 144, 1861 (2018)
Citation
Yuval Adler, Prateek Murgai; Adapting masking thresholds for spatially separated sounds in two dimensional ambisonics. J. Acoust. Soc. Am. 1 September 2018; 144 (3_Supplement): 1861. https://doi.org/10.1121/1.5068183