Binaural room responses are normally measured on a listening subject in a room. These measurements, however, vary rapidly with the source and receiver positions. In addition, measurements taken in a room can only be used to simulate scenes of that environment. In this work, an efficient parameterization of the binaural room transfer function is proposed, with separable representations for the direct-path component and the reverberation component, thus providing a flexible way to generate binaural room responses for different environments and listeners. In addition, the parameterization uses wave equation solutions as basis functions and is continuous in space.

Binaural recording and rendering refer specifically to recording and reproducing sounds at the two ears, and have become key technologies in virtual reality, augmented reality, and 3D rendering (Bimber and Raskar, 2005; Burdea and Coiffet, 2003). In such systems, not only the directional information, i.e., the cues for localization, but also the acoustic environment has to be accurately reproduced for a realistic perception of sound in 3D space. The former is characterized by the individualized head-related transfer function (HRTF), while the latter is described by the room impulse response (RIR) or the room transfer function (RTF).

It is common practice to measure the binaural room impulse response (BRIR) on a listening subject or a human mannequin to obtain the most natural auditory rendering. However, two particular issues are associated with this method: (1) the measurements vary rapidly with the source/receiver position and are discrete in space; (2) measurements taken in a particular room can only be used to simulate scenes of that environment.

It is desirable to characterize the HRTF and the room response separately so that a more flexible method can be developed to form the BRIR [or binaural RTF (BRTF)] for different environments and listeners. In the work of Menzer et al. (2011), B-format impulse responses are used to generate the BRIR: the HRTF corresponding to the direction of the direct sound is applied to the direct part of the B-format RIR, and a linear combination of HRTFs is applied to the reflection part. This method still requires a direction-of-arrival (DOA) estimate for the direct path. In the work of Kearney et al. (2009b), a higher-order Ambisonic scheme is adopted and the HRIRs corresponding to virtual loudspeaker positions are used to generate binaural signals. Interpolation of RIRs is implemented to synthesize the room responses, where a dynamic time warping (DTW) method was developed to temporally align only the direct path and sparse early reflections from two adjacent measurements (Kearney et al., 2009a), with the tail (the diffuse decay part) synthesized using statistical time-frequency room models. This method requires the early reflections to be sparse and thus cannot be used for BRIR synthesis in small rooms. In addition, it is computationally expensive when more measurements are involved, such as measurements on a 2D plane or a 3D sphere. A time-frequency domain implementation of DTW was recently proposed to improve quality and computational efficiency (Garcia-Gomez and Lopez, 2018).

In this work, we introduce an efficient parameterization of the binaural room transfer function, which has separable HRTF/RTF representations and is continuous in space. Following Zhang et al. (2010) and Samarasinghe et al. (2015), this parameterization uses the eigen-solutions of the acoustic wave equation, i.e., spatial harmonics, to decompose soundfields. We demonstrate the accuracy of the proposed parameterization by comparing it with synthetic and measured HRTFs in simulated rooms.

The binaural room transfer function is decomposed into a direct-path part and a reverberation part. We then develop a parameterization of each term as a weighted sum of 3D spatial basis functions. The model is continuous in space and, in particular, the direct-path and reverberation parts are separated, thus providing a flexible way to generate the BRTF.

The HRTF, i.e., the direct path of the BRTF, is normally measured by placing the source at a variety of positions on a sphere of a certain radius (Cheng and Wakefield, 2001). It is regarded as a function of the source position and is decomposed using spherical harmonics as follows (Evans et al., 1998):

\[ H^{L(R)}(\mathbf{x},k) \simeq \sum_{n=0}^{N_h} \sum_{m=-n}^{n} \tilde{\beta}^{L(R)}_{nm}(r_x;k)\, Y_{nm}(\theta_x,\phi_x), \]
(1)

where x={rx,θx,ϕx} represents the source position in 3D space with rx,θx,ϕx the distance, elevation, and azimuth, respectively. k = 2πf/c is the wave number, where f is the frequency and c is the speed of sound. Ynm(θ,ϕ) represents the spherical harmonics of order n and degree m. The notation (·)L(R) represents the left ear or right ear, which is omitted in the following equations for simplicity.

This model can be further extended based on wave propagation theory and using wave-equation solutions (Williams, 1999; Zhang et al., 2010), i.e.,

\[ H(\mathbf{x},k) \simeq \sum_{n=0}^{N_h} \sum_{m=-n}^{n} \beta_{nm}(k)\, h_n(k r_x)\, Y_{nm}(\theta_x,\phi_x), \]
(2)

where hn(·) is the spherical Hankel function of the first kind, representing the radial part, and βnm(k) are the decomposition coefficients corresponding to an individualized HRTF. The original derivation in Zhang et al. (2010) is based on the acoustic reciprocity principle, where the source position and receiver position (i.e., the listener's ear) are interchanged, and the scattering effects of the human head and body are modelled as secondary sources. The primary source at the listener's ear and the secondary sources together constitute an effective source field enclosed in a sphere, based on which the HRTF modal decomposition was developed.
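In practice, the coefficients in Eqs. (1) and (2) are obtained by fitting spherical harmonics to HRTF measurements sampled over directions. The following is a minimal least-squares sketch, not the authors' implementation: the sampling grid, truncation order, and (n, m) coefficient ordering are illustrative assumptions.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv


def sph_harm_cplx(n, m, theta, phi):
    """Complex spherical harmonic Y_nm; theta = polar angle, phi = azimuth.
    Built from the associated Legendre function (Condon-Shortley phase
    included by scipy.special.lpmv)."""
    norm = np.sqrt((2 * n + 1) / (4 * np.pi)
                   * factorial(n - abs(m)) / factorial(n + abs(m)))
    Y = norm * lpmv(abs(m), n, np.cos(theta)) * np.exp(1j * abs(m) * phi)
    if m < 0:  # Y_{n,-m} = (-1)^m conj(Y_{n,m})
        Y = (-1) ** abs(m) * np.conj(Y)
    return Y


def fit_sh_coeffs(H, theta, phi, N):
    """Least-squares fit of order-N spherical-harmonic coefficients,
    as in Eq. (1), to measurements H at directions (theta, phi).
    Coefficients are ordered (0,0), (1,-1), (1,0), (1,1), ..."""
    cols = [sph_harm_cplx(n, m, theta, phi)
            for n in range(N + 1) for m in range(-n, n + 1)]
    Ymat = np.stack(cols, axis=1)          # (num_measurements, (N+1)^2)
    beta, *_ = np.linalg.lstsq(Ymat, H, rcond=None)
    return beta
```

A dense, well-spread sampling grid with more than (N+1)^2 directions keeps the least-squares system well conditioned.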

In this work, we propose a new way to interpret the HRTF model (2) so that its integration with the modal-domain RTF is straightforward. Here, we view the human head and torso as a continuous aperture that feeds the sound waves impinging on its surface to the two ears. Instead of modelling the exact shape of the head and torso, we consider an equivalent spherical aperture of an appropriate radius ry with a continuous aperture function ρ(ŷ,k) at wavenumber k, where ŷ = (θy, ϕy). Note that the equivalent (theoretical) aperture function encapsulates the scattering and diffraction by the head and torso. The response of this theoretical aperture to a source located at position x can be written as

\[ H(\mathbf{x},k) = \int_{\mathbb{S}^2} \rho(\hat{y},k)\, \frac{e^{ik\|\mathbf{x}-\mathbf{y}\|}}{4\pi \|\mathbf{x}-\mathbf{y}\|}\, ds(\hat{y}) \Big|_{r_y=s}, \]
(3)

i.e., the surface integral of the aperture function ρ(ŷ,k) multiplied by the signal received at point y on the aperture from the source at x. Note that the radius of the equivalent spherical aperture is chosen as ry = s. An appropriate choice of s is given in the work of Zhang et al. (2010); it determines the bound on the spherical harmonics expansion order of the HRTF.

The aperture response ρ(ŷ,k) can be decomposed by spherical harmonics as

\[ \rho(\hat{y},k) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} \gamma_{nm}(k)\, Y_{nm}(\theta_y,\phi_y). \]
(4)

The radiation pattern of the sound source is given by the Green's function, i.e.,

\[ \frac{e^{ik\|\mathbf{x}-\mathbf{y}\|}}{4\pi \|\mathbf{x}-\mathbf{y}\|} = \frac{ik}{4\pi} \sum_{n=0}^{\infty} (2n+1)\, h_n(k r_x)\, j_n(k r_y)\, P_n(\hat{x}\cdot\hat{y}) = ik \sum_{n=0}^{\infty} \sum_{m=-n}^{n} h_n(k r_x)\, Y_{nm}(\theta_x,\phi_x)\, j_n(k r_y)\, Y^{*}_{nm}(\theta_y,\phi_y), \]
(5)

where jn(·) is the spherical Bessel function of order n and (·)* denotes the complex conjugate.
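The expansion in Eq. (5) can be checked numerically against the free-field Green's function, which also illustrates its rapid convergence when ry < rx. A small sketch (the truncation order and geometry are illustrative choices, not values from this work):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre


def greens_free_field(k, d):
    """Free-field Green's function e^{ikd} / (4 pi d)."""
    return np.exp(1j * k * d) / (4 * np.pi * d)


def greens_expansion(k, rx, ry, cos_gamma, N):
    """Truncated spherical expansion of the Green's function, Eq. (5),
    valid for ry < rx; cos_gamma is the cosine of the angle between
    the source direction x-hat and the aperture point direction y-hat."""
    n = np.arange(N + 1)
    hn = spherical_jn(n, k * rx) + 1j * spherical_yn(n, k * rx)  # h_n^{(1)}
    jn = spherical_jn(n, k * ry)
    Pn = np.array([eval_legendre(int(nn), cos_gamma) for nn in n])
    return 1j * k / (4 * np.pi) * np.sum((2 * n + 1) * hn * jn * Pn)
```

Because the terms decay roughly like (ry/rx)^n, a modest truncation order already reproduces the closed-form Green's function to machine precision for head-sized apertures.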

By substituting Eqs. (4) and (5) into Eq. (3) and based on the orthogonality property of spherical harmonics on a sphere, we have

\[ H(\mathbf{x},k) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} ik\, j_n(ks)\, \gamma_{nm}(k)\, h_n(k r_x)\, Y_{nm}(\theta_x,\phi_x). \]
(6)

This is equivalent to Eq. (2) with

\[ \beta_{nm}(k) = ik\, j_n(ks)\, \gamma_{nm}(k), \]
(7)

and the infinite summation in Eq. (6) truncated to a finite order Nh. The only difference is one of interpretation: Eq. (2) views the HRTF as an outgoing field due to a directional source, while Eq. (6) views it as the response of a directional receiver with aperture function ρ(ŷ,k). Both interpretations are valid by the acoustic reciprocity principle for directional transducers (Samarasinghe et al., 2017). Note that Eq. (7) serves only the theoretical derivation and is not used in implementation; the decomposition coefficients βnm(k) are obtained by decomposing HRTF measurements according to Eq. (2).
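Although Eq. (7) is not needed in implementation, the mapping between βnm(k) and the aperture coefficients γnm(k) is easy to sketch numerically. A minimal illustration, assuming a flat (n, m)-ordered coefficient vector and guarding against the zeros of jn(ks), where the inversion is ill-conditioned:

```python
import numpy as np
from scipy.special import spherical_jn


def beta_to_gamma(beta, N, k, s, eps=1e-8):
    """Invert Eq. (7): gamma_nm = beta_nm / (i k j_n(ks)).
    beta is ordered (0,0), (1,-1), (1,0), (1,1), ... up to order N.
    Near zeros of j_n(ks) the inversion blows up, so guard with eps."""
    gamma = np.empty_like(beta, dtype=complex)
    idx = 0
    for n in range(N + 1):
        jn = spherical_jn(n, k * s)
        if abs(jn) < eps:
            raise ValueError(f"j_{n}(ks) is near zero; inversion ill-conditioned")
        for m in range(-n, n + 1):
            gamma[idx] = beta[idx] / (1j * k * jn)
            idx += 1
    return gamma
```

The forward direction of Eq. (7) is a plain multiplication, so the two coefficient sets round-trip exactly away from the Bessel zeros.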

In this subsection, we use the modal decomposition method for region-to-region RTF parameterization to describe the reverberation part of the BRTF, i.e., the BRTF without the direct path.

Two spatial configurations of the source region and receiver region are considered in this work as shown in Fig. 1. In the source region ζ, an outgoing field due to a point source is written as

\[ S(\mathbf{z},k) \simeq \sum_{\nu=0}^{N_s} \sum_{\mu=-\nu}^{\nu} \delta_{\nu\mu}(k)\, h_\nu(k r_z)\, Y_{\nu\mu}(\theta_z,\phi_z), \]
(8)

where δνμ(k) = ik jν(kr_xs) Y*νμ(θ_xs, ϕ_xs) denotes the outgoing mode coefficient, and xs = {r_xs, θ_xs, ϕ_xs} and z = {rz, θz, ϕz} denote the source position and receiver position, respectively. Both xs and z are given with respect to the origin Os of the source region.

Fig. 1.

(Color online) Spatial configuration of the source region ζ and receiver region η in (a) config1: concentric and (b) config2: non-overlapping arrangement.


Using the region-to-region RTF concept (Samarasinghe et al., 2015), the reflected field in the receiver region η for each order (n, m) is

\[ P_{n,m}(\mathbf{y}|\mathbf{x}_s,k) \simeq \sum_{\nu=0}^{N_s} \sum_{\mu=-\nu}^{\nu} \delta_{\nu\mu}(k)\, \alpha_{nm}^{\nu\mu}(k)\, j_n(k r_y)\, Y_{nm}(\theta_y,\phi_y), \]
(9)

where αnmνμ(k) represents the room reverberation in the modal domain, i.e., the room mode coupling between the source region ζ and a spatial point y = {ry, θy, ϕy} within the receiver region η where the human head lies.

We then use the continuous receiving aperture response to obtain the received binaural signals, that is,

\[ B^{\mathrm{ref}}_{n,m}(\mathbf{x}_s,k) = \int_{\mathbb{S}^2} \rho(\hat{y},k)\, P_{n,m}(\mathbf{y}|\mathbf{x}_s,k)\, d\hat{y} \Big|_{r_y=s} \simeq \sum_{\nu=0}^{N_s} \sum_{\mu=-\nu}^{\nu} \delta_{\nu\mu}(k)\, \alpha_{nm}^{\nu\mu}(k)\, \gamma_{nm}(k)\, j_n(ks), \]
(10)

where the second equality is obtained by substituting Eqs. (9) and (4) into Eq. (10). Evaluating the integral in Eq. (10) at the fixed radius ry = s implies that, to combine the region-to-region RTF with the HRTF, the receiver region must also be bounded by radius s. The number of modes representing the receiver region should be the same as that of the HRTF data, i.e., Nh. In total, (Nh+1)² × (Ns+1)² coefficients αnmνμ(k) are required to describe the RTF between the Nsth-order source region and the Nhth-order receiver region.

Given an omni-directional point source and referring to Eq. (7), we have

\[ B^{\mathrm{ref}}_{n,m}(\mathbf{x}_s,k) \simeq \sum_{\nu=0}^{N_s} \sum_{\mu=-\nu}^{\nu} ik\, j_\nu(k r_{x_s})\, Y^{*}_{\nu\mu}(\theta_{x_s},\phi_{x_s})\, \alpha_{nm}^{\nu\mu}(k)\, \gamma_{nm}(k)\, j_n(ks) = \sum_{\nu=0}^{N_s} \sum_{\mu=-\nu}^{\nu} \beta_{nm}(k)\, \alpha_{nm}^{\nu\mu}(k)\, j_\nu(k r_{x_s})\, Y^{*}_{\nu\mu}(\theta_{x_s},\phi_{x_s}). \]
(11)

By including all the modes (n, m) of the receiver region, the reflected part of the BRTF is

\[ B^{\mathrm{ref}}(\mathbf{x}_s,k) \simeq \sum_{n=0}^{N_h} \sum_{m=-n}^{n} \sum_{\nu=0}^{N_s} \sum_{\mu=-\nu}^{\nu} \beta_{nm}(k)\, \alpha_{nm}^{\nu\mu}(k)\, j_\nu(k r_{x_s})\, Y^{*}_{\nu\mu}(\theta_{x_s},\phi_{x_s}). \]
(12)

The binaural room transfer function model is parameterized as

\[ B(\mathbf{x},k) \simeq \underbrace{\sum_{n=0}^{N_h} \sum_{m=-n}^{n} \beta_{nm}(k)\, h_n(k r_x)\, Y_{nm}(\theta_x,\phi_x)}_{\text{direct path}} + \underbrace{\sum_{n=0}^{N_h} \sum_{m=-n}^{n} \sum_{\nu=0}^{N_s} \sum_{\mu=-\nu}^{\nu} \beta_{nm}(k)\, \alpha_{nm}^{\nu\mu}(k)\, j_\nu(k r_{x_s})\, Y^{*}_{\nu\mu}(\theta_{x_s},\phi_{x_s})}_{\text{reverberation}}, \]
(13)

where x denotes the source position with respect to the origin O of the receiver region, and xs denotes the same source position with respect to the origin Os of the source region. For the two scenarios shown in Fig. 1, the source region and receiver region are either concentrically positioned or spaced without overlap. In the first case x = xs, while in the second case they differ.
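For the concentric case (x = xs), Eq. (13) can be evaluated directly once the two coefficient sets are available. The sketch below illustrates only the bookkeeping: the flat (n, m) coefficient ordering, the matrix layout of αnmνμ(k), and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv, spherical_jn, spherical_yn


def Y(n, m, theta, phi):
    """Complex spherical harmonic (Condon-Shortley phase via lpmv)."""
    norm = np.sqrt((2 * n + 1) / (4 * np.pi)
                   * factorial(n - abs(m)) / factorial(n + abs(m)))
    val = norm * lpmv(abs(m), n, np.cos(theta)) * np.exp(1j * abs(m) * phi)
    return (-1) ** abs(m) * np.conj(val) if m < 0 else val


def brtf(x, k, beta, alpha, Nh, Ns):
    """Evaluate Eq. (13) for the concentric case (x = x_s).
    x = (r, theta, phi); beta: (Nh+1)^2 HRTF coefficients, (n, m)-ordered;
    alpha: ((Nh+1)^2, (Ns+1)^2) RTF coupling coefficients."""
    r, th, ph = x
    direct, reverb = 0j, 0j
    i = 0
    for n in range(Nh + 1):
        hn = spherical_jn(n, k * r) + 1j * spherical_yn(n, k * r)
        for m in range(-n, n + 1):
            direct += beta[i] * hn * Y(n, m, th, ph)
            j = 0
            for nu in range(Ns + 1):
                jnu = spherical_jn(nu, k * r)
                for mu in range(-nu, nu + 1):
                    reverb += (beta[i] * alpha[i, j] * jnu
                               * np.conj(Y(nu, mu, th, ph)))
                    j += 1
            i += 1
    return direct + reverb
```

With all αnmνμ(k) set to zero the model reduces to the free-field HRTF term of Eq. (2), which provides a simple sanity check.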

The above representation requires at least (Nh+1)² coefficients βnm(k) and (Nh+1)² × (Ns+1)² coefficients αnmνμ(k) to describe the BRTF between the Nsth-order source region and the Nhth-order receiver region. While Nh is determined by the HRTF data, the order Ns = ⌈ekrs/2⌉ is set by the modeling frequency and the source region radius rs (Kennedy et al., 2007). The advantage of this formulation is that the modal coefficients of the HRTF and the RTF, i.e., βnm(k) and αnmνμ(k), are separated, thus providing a flexible way to generate the BRTF for different environments and listeners. In addition, the parameterization uses wave equation solutions as basis functions and is continuous in space.
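These coefficient counts are straightforward to compute. A small sketch, assuming a speed of sound of c = 343 m/s (not stated in the text) and rounding the order rule up to an integer:

```python
import math


def source_order(f, r_s, c=343.0):
    """Truncation order N_s = ceil(e * k * r_s / 2) for a source region
    of radius r_s at frequency f; c = 343 m/s is an assumed value."""
    k = 2 * math.pi * f / c
    return math.ceil(math.e * k * r_s / 2)


def num_brtf_coeffs(Nh, Ns):
    """Total coefficient count: (Nh+1)^2 beta terms plus
    (Nh+1)^2 * (Ns+1)^2 alpha terms."""
    return (Nh + 1) ** 2 + (Nh + 1) ** 2 * (Ns + 1) ** 2
```

For example, at 10 kHz with rs = 0.25 m the source-region order is already in the sixties, so the αnmνμ(k) set dominates the parameter budget.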

In terms of calculating these coefficients, βnm(k) can be simulated analytically when the head is modeled as a rigid sphere (Duda and Martens, 1998) or obtained by decomposing HRTF measurements on a (partial) sphere according to Eq. (2). The coefficients αnmνμ(k), which characterize the RTF and are thus obtained without a human listener present, can be measured in practice (Samarasinghe et al., 2015) or simulated for a rectangular room using the spherical harmonics based generalized image source method (Samarasinghe et al., 2018).

The proposed BRTF model is validated through simulation-based experiments. We use two HRTF datasets. The first is a synthetic HRTF generated from an ideal rigid sphere of radius 0.09 m, i.e., the spherical head model (Duda and Martens, 1998). The second is the HRTF measured on a KEMAR mannequin from the MIT Media Laboratory, which is a 512-tap impulse response with a sampling frequency of 44.1 kHz and 750 measurement positions on a sphere of radius 1.4 m (Gardner and Martin, 1995). The room size is 5 × 7 × 2.5 m and the center of the room is defined as the origin. In our simulations, the spherical head or the KEMAR head is located at the center of the room. Based on a recent study of the perceptually relevant features of the HRTF, a fourth-order spherical harmonics expansion is adopted in this work (Romigh et al., 2015).1 The source region is simulated as a spherical region of radius 0.25 m, with two configurations investigated, i.e., placed concentrically with (config1) or non-overlapping with (config2) the receiver region (the human head), as shown in Fig. 1. In config2, the source region is centered at the point (2, 1, 0) with respect to the origin.

We evaluate the accuracy of the proposed model for BRTF reconstruction at 50 points uniformly distributed in the source region, over a broadband frequency range up to 10 kHz, in different reverberation conditions. The performance measure is the mean square error (MSE) between the true BRTF and the reconstructed BRTF, i.e.,

\[ \varepsilon(k) = 10 \log_{10} \frac{\|B(\mathbf{x},k) - \hat{B}(\mathbf{x},k)\|^{2}}{\|B(\mathbf{x},k)\|^{2}}. \]
(14)
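Equation (14) is straightforward to implement; a minimal sketch over a set of evaluation points:

```python
import numpy as np


def mse_db(B_true, B_hat):
    """Normalized mean square reconstruction error, Eq. (14), in dB,
    aggregated over an array of evaluation points."""
    num = np.sum(np.abs(B_true - B_hat) ** 2)
    den = np.sum(np.abs(B_true) ** 2)
    return 10 * np.log10(num / den)
```

As a quick check, a uniform 10% amplitude error corresponds to an MSE of -20 dB.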

The true BRTF is simulated using the image source method (Allen and Berkley, 1979), with each image source modelled as a point source and multiplied with the corresponding HRTF data [original or interpolated according to Eq. (2)]. The reconstructed BRTF is generated using the proposed model (13), where βnm(k) is obtained from modal decomposition of the synthetic/measured HRTFs, and αnmνμ(k) is simulated using the spherical harmonics based generalized image source method (Samarasinghe et al., 2018).

Figure 2 shows the MSE of the reconstructed BRTF over the 10 kHz frequency range using the synthetic HRTF and the measured MIT data. The room is simulated with an image order of 7 and a reverberation time T60 of 144.5 ms. The performance is roughly the same for the two data sets and for the two spatial configurations of the source region; the average MSE for the measured HRTF is slightly larger than that for the synthetic HRTF.

Fig. 2.

(Color online) Reconstruction error of the synthetic HRTF and measured HRTF in a simulated room environment. Two spatial configurations of the source region and receiver region are considered, i.e., (i) config1: concentric and (ii) config2: non-overlapping arrangement, as shown in Fig. 1.


We further evaluate the model accuracy for BRTF reconstruction under different reverberation conditions. In this experiment, the reflection coefficient increases from 0.1 to 0.9, which corresponds to the image order increasing from 5 to 27 and the reverberation time T60 increasing from 100 to 570 ms. The source operates in a bandwidth of 10 kHz. Figure 3 shows the reconstruction error as a function of T60. The MSE increases with the reverberation time because, as reverberation increases, image sources appear from a larger number of directions and the effect of higher-order spatial modes becomes more significant. Currently, we use a fourth-order spherical harmonics expansion to model the HRTF and the receiver region. Higher-order terms can be included to further improve model accuracy in highly reverberant environments; as given in the works of Kennedy et al. (2007) and Zhang et al. (2010), a theoretical bound is Nh = ⌈ekus/2⌉, where ku is the upper limit of the wavenumber and s is the effective receiver region radius. However, using high expansion orders increases the simulation time, especially for the spherical harmonics based generalized image source method in highly reverberant environments. In practice, high expansion orders also mean that a larger number of loudspeakers and microphones are required to measure the RTF coefficients, which inevitably poses implementation difficulties. This is a limitation of the proposed approach.

Fig. 3.

(Color online) Reconstruction error as a function of reverberation time T60 for a bandwidth of 10 kHz.


An efficient parameterization of the binaural room transfer function has been proposed in this work. We decomposed the binaural room response into direct-path and reverberation components, which can be described separately by the HRTF and the RTF. We then developed a parameterization of both terms as a weighted sum of 3D basis functions. The accuracy of the proposed parameterization was verified through simulation-based experiments by comparison with a synthetic HRTF for a spherical head and an HRTF measured on a KEMAR in simulated room environments.

This research was supported by the National Natural Science Foundation of China (NSFC) funding scheme under Project No. 61671380.

1

The higher-order terms in the spherical harmonics (SH) expansion of the HRTF are mainly caused by rapid phase changes at high frequencies, while altering the interaural phase difference (IPD) at high frequencies is perceptually irrelevant. Pre-processing of HRTFs with phase modification has been shown to be capable of reducing the SH order for binaural rendering of Ambisonic signals (Schörkhuber et al., 2018).

1. Allen, J. B., and Berkley, D. A. (1979). "Image method for efficiently simulating small-room acoustics," J. Acoust. Soc. Am. 65(4), 943-950.
2. Bimber, O., and Raskar, R. (2005). Spatial Augmented Reality: Merging Real and Virtual Worlds (CRC Press, Boca Raton, FL).
3. Burdea, G. C., and Coiffet, P. (2003). Virtual Reality Technology (Wiley, New York).
4. Cheng, C. I., and Wakefield, G. H. (2001). "Introduction to Head-Related Transfer Functions (HRTFs): Representations of HRTFs in time, frequency, and space," J. Audio Eng. Soc. 49(4), 231-249.
5. Duda, R. O., and Martens, W. L. (1998). "Range dependence of the response of a spherical head model," J. Acoust. Soc. Am. 104(5), 3048-3058.
6. Evans, M. J., Angus, J. A. S., and Tew, A. I. (1998). "Analyzing head-related transfer function measurements using surface spherical harmonics," J. Acoust. Soc. Am. 104(4), 2400-2411.
7. Garcia-Gomez, V., and Lopez, J. J. (2018). "Binaural room impulse response interpolation for multimedia real-time applications," in Proceedings of the 144th Convention of the Audio Engineering Society, Milan, Italy, pp. 1-10.
8. Gardner, W. G., and Martin, K. D. (1995). "HRTF measurements of a KEMAR," J. Acoust. Soc. Am. 97(6), 3907-3908.
9. Kearney, G., Masterson, C., Adams, S., and Boland, F. (2009a). "Dynamic time warping for acoustic response interpolation: Possibilities and limitations," in Proceedings of the 17th European Signal Processing Conference (EUSIPCO), Glasgow, Scotland, pp. 705-709.
10. Kearney, G., Masterson, C., Adams, S., and Boland, F. (2009b). "Towards efficient binaural room impulse response synthesis," in Proceedings of the EAA Symposium on Auralization, Espoo, Finland, pp. 1-6.
11. Kennedy, R. A., Sadeghi, P., Abhayapala, T. D., and Jones, H. M. (2007). "Intrinsic limits of dimensionality and richness in random multipath fields," IEEE Trans. Signal Process. 55(6), 2542-2556.
12. Menzer, F., Faller, C., and Lissek, H. (2011). "Obtaining binaural room impulse responses from B-format impulse responses using frequency-dependent coherence matching," IEEE/ACM Trans. Audio Speech Lang. Process. 19(2), 396-405.
13. Romigh, G. D., Brungart, D. S., Stern, R. M., and Simpson, B. D. (2015). "Efficient real spherical harmonic representation of head-related transfer functions," IEEE J. Sel. Topics Signal Process. 9(5), 921-930.
14. Samarasinghe, P. N., Abhayapala, T. D., and Kellermann, W. (2017). "Acoustic reciprocity: An extension to spherical harmonics domain," J. Acoust. Soc. Am. 142(4), EL337-EL343.
15. Samarasinghe, P. N., Abhayapala, T. D., Lu, Y., Chen, H., and Dickins, G. (2018). "Spherical harmonics based generalized image source method for simulating room acoustics," J. Acoust. Soc. Am. 144(3), 1381-1391.
16. Samarasinghe, P. N., Abhayapala, T. D., Poletti, M., and Betlehem, T. (2015). "An efficient parameterization of the room transfer function," IEEE/ACM Trans. Audio Speech Lang. Process. 23(12), 2217-2227.
17. Schörkhuber, C., Zaunschirm, M., and Höldrich, R. (2018). "Binaural rendering of ambisonic signals via magnitude least squares," in Proceedings of DAGA 2018, Munich, Germany.
18. Williams, E. G. (1999). Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography (Academic Press, San Diego, CA).
19. Zhang, W., Abhayapala, T. D., Kennedy, R. A., and Duraiswami, R. (2010). "Insights into head-related transfer function: Spatial dimensionality and continuous representations," J. Acoust. Soc. Am. 127(4), 2347-2357.