Virtual reality (VR) requires rendering accurate head-related transfer functions (HRTFs) to ensure a realistic and immersive virtual auditory space. An HRTF characterizes how each ear receives sound from a given location in space based on the shape of the head, torso, and pinnae, and provides a unique head-related impulse response (HRIR) for each source location. Since HRTFs are person-specific and difficult to measure, recent research has utilized pre-existing HRTF databases and anthropometric measurements to generate personalized HRTFs with machine learning algorithms. This study investigates a personalization method that estimates the shape of each ear's HRIR and the interaural time difference (ITD) between the two ears with separate models. In the proposed method, the shape of the HRIR is estimated with an artificial neural network (ANN) trained on time-aligned HRIRs from the CIPIC database, eliminating between-subject timing differences. A regression tree is used to estimate the ITDs, which are integer sample delays between the left and right ears. A localization test with a VR headset was conducted to evaluate the perceptual accuracy of the personalized HRTFs. Subjects completed the test with both a pre-selected average HRTF and their personalized HRTF to compare localization errors between the two conditions.
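The two-model pipeline described above can be sketched as follows. This is an illustrative sketch only, not the authors' code: it assumes scikit-learn's `MLPRegressor` as the ANN and `DecisionTreeRegressor` as the regression tree, and uses synthetic random data in place of CIPIC anthropometric measurements and HRIRs. The combination step re-inserts the predicted integer-sample ITD into the time-aligned HRIR shape.

```python
# Illustrative sketch (not the authors' implementation): one model for the
# time-aligned HRIR shape, one for the integer-sample ITD, combined at
# synthesis time. All data here are synthetic stand-ins for CIPIC inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n_subjects, n_anthro, hrir_len = 45, 10, 200

X = rng.normal(size=(n_subjects, n_anthro))        # anthropometric measurements
Y_hrir = rng.normal(size=(n_subjects, hrir_len))   # time-aligned HRIRs (one direction)
y_itd = rng.integers(-30, 31, size=n_subjects)     # ITDs in integer samples

# Model 1: ANN estimates the HRIR shape (between-subject timing removed).
ann = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
ann.fit(X, Y_hrir)

# Model 2: regression tree estimates the ITD between the two ears.
tree = DecisionTreeRegressor(max_depth=4, random_state=0)
tree.fit(X, y_itd)

def personalize(x):
    """Combine the two models: predict the shape, then re-insert the delay."""
    shape = ann.predict(x[None, :])[0]
    itd = int(round(tree.predict(x[None, :])[0]))
    left, right = shape.copy(), shape.copy()
    if itd > 0:                      # positive ITD: delay the right ear
        right = np.r_[np.zeros(itd), right[:-itd]]
    elif itd < 0:                    # negative ITD: delay the left ear
        left = np.r_[np.zeros(-itd), left[:itd]]
    return left, right, itd

left, right, itd = personalize(X[0])
print(left.shape, right.shape, itd)
```

In practice one such HRIR pair would be predicted per source direction; the single-direction case above is kept only to show how the shape and delay estimates are merged.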
Meeting abstract. No PDF available.
Personalizing head-related transfer functions using anthropometric measurements by combining two machine-learning models
MingYang Lee, Martin S. Lawless, Melody Baglione; Personalizing head-related transfer functions using anthropometric measurements by combining two machine-learning models. J. Acoust. Soc. Am. 1 March 2019; 145 (3_Supplement): 1883–1884. https://doi.org/10.1121/1.5101823