This paper proposes an experimental setup for measuring the sound radiation of a quadrotor drone using a hemispherical microphone array. The measured sound field is decomposed into spherical harmonics, which enables the evaluation of the radiation pattern at non-probed positions. Additionally, the measurement setup allows the assessment of noise emission and psychoacoustic metrics over a wide range of angles. The directivity patterns obtained using a third-order spherical harmonic decomposition (SHD) exhibit low distortion with respect to the original measurements, thereby validating the SHD as an adequate representation strategy. Furthermore, the noise emissions are evaluated, and the highest noise emission is observed in the 90° azimuth direction. The exterior spherical acoustic holography description is employed to evaluate psychoacoustic metrics at arbitrary far-field positions and is validated against a reference microphone. The estimated psychoacoustic metrics closely match the target metrics, which allows for sound quality analysis at any point exterior to the drone.

Vertical takeoff and landing (VTOL) unmanned aerial systems (UASs), also known as drones, have been attracting great interest for many commercial and recreational applications such as point-to-point delivery, photography, and various monitoring activities. VTOL UASs have lower noise emission levels compared to conventional aircraft (Kapoor et al., 2021). However, an increase in the number of drones in operation for civil applications (i.e., low altitudes) will likely cause noise annoyance issues in dense urban centers (Schäffer et al., 2021). Currently, in the European Union, VTOL UAS noise emissions are not formally regulated. Nevertheless, the general requirement of noise measurement of lightweight and small multirotor systems is under development (ISO/CD 5305, 2022), and the matter has been given high priority to ensure safe, secure, and sustainable operation (European Union, 2018, 2020).

Due to their compact design, the noise emitted by drones has a distinct acoustic signature, induced mainly by the side-flow horizontal motion and rotor synchronization (Schäffer et al., 2021). The dominant noise source is generated by the rotating blades at the blade passing frequency (BPF)—a frequency component relating to the number of blades and the rotation speed—and its harmonic components. The tonal noise is often considered the cause of annoyance. On the other hand, the broadband noise is generated by the electric motor and by turbulence in the flow (Roger and Moreau, 2020). Additionally, the emitted sound and radiation pattern can be affected by scattering effects on the drone body (Jiang et al., 2019).
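As a minimal illustration (not from the paper), the BPF follows directly from the rotation rate and the blade count:

```python
def blade_passing_frequency(rpm: float, n_blades: int) -> float:
    """Blade passing frequency in Hz: rotations per second times blade count."""
    return rpm / 60.0 * n_blades

# A two-blade propeller at 4987 rpm emits its fundamental tone near 166 Hz;
# harmonic components appear at integer multiples of this value.
print(round(blade_passing_frequency(4987, 2), 1))  # → 166.2
```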

Drone noise emissions have been modeled both analytically (Roger and Moreau, 2020) and numerically (Thai et al., 2021; Jiang et al., 2019). These models are either simplified or require a high level of complexity and long computational time. Alternatively, controllable and reproducible measurements can provide useful insights and are essential to validate and improve the models. Besides that, the measured data can be directly used as input to the propagation and auralization models.

An experimental aeroacoustic study of a drone for different rotor blades was performed by Intaratep et al. (2016). Measurements were performed with a single microphone 1.51 m away from the center of the drone at a 50° elevation with respect to the drone's horizontal plane. Heutschi et al. (2020) synthesized drone noise using measurements of the radiation strength using five sensors at five elevations and using in-field recordings. The recordings were manipulated to simulate different maneuver conditions and changes in rotor speed and to include outdoor sound propagation effects. The synthesized audio was plausible but with some limitations such as flight dynamics and coupling effects between propellers, which were not taken into account. Bian et al. (2021) proposed a Gaussian beam tracing method that asymptotically approximates the wave equation to model the drone emission and propagation. The method allowed for generic source directivity patterns and a realistic urban environment.

The psychoacoustic metrics, such as loudness, sharpness, and tonality (ECMA-418–2, 2020; Zwicker and Fastl, 2013), aim at quantifying annoyance relative to the listener's position. Psychoacoustic metrics have been used to evaluate annoyance of UAS (Christian and Cabell, 2017; White et al., 2017), to optimize blade spacing design (Torija et al., 2021; Torija et al., 2022), and to quantify the BPF modulation (Baars et al., 2021).

A comprehensive literature review on drone noise emission and noise effects on humans can be found in Schäffer et al. (2021). As reported, most works on drone noise emission have been performed in laboratory conditions but with a limited aperture and number of sensors. Noise emission levels were investigated both as a function of takeoff mass and in terms of directivity. The latter is known to have a major influence and needs to be considered in the noise emission model. However, measurement procedures to capture the full directivity of noise emissions of drones are still scarce. By contrast, measurements to capture complex sources and represent them in the spherical harmonic domain have been performed for musical instruments (Pollow, 2014; Zotter, 2009).

This paper proposes an experimental protocol for measuring the noise emission and directivity of VTOL UASs. The aim is to improve the knowledge base on drone noise toward the development of a standard noise emission metric. This paper targets three main contributions. The first is the development of a hemispherical microphone array setup for measuring the directivity pattern of the sound pressure generated by a drone operating at a simulated constant thrust. In contrast to previous works, the noise source is captured simultaneously at different elevation and azimuth angles. The second is the investigation of the directivity pattern using the spherical harmonic representation. The third is the evaluation of spectral content and psychoacoustic metrics at any far-field position using the exterior spherical acoustic holography description. The methodology is presented in a general way and demonstrated using a quadrotor drone.

The paper is organized as follows. The measurement setup, theoretical background for the spherical harmonic description, and psychoacoustic metrics are described in Sec. II. Section III presents the main results and discussion.

The measurements were performed in a hemi-anechoic chamber with an acoustically hard ground as shown in Fig. 1. The room interior dimensions are 16 × 12.5 × 6 m (length × width × height) with fiberglass wedges covered by perforated aluminum panels. Additionally, a 2.5 × 2.5 m² area under the UAS was covered by a foam material approximately 0.07 m thick to mitigate ground reflections. During measurements, the temperature of the room ranged from 20 °C to 23 °C. The speed of sound is here taken as c = 343 m/s. The measured UAS is the MikroKopter MK EASY Quadro V3 (HiSystems GmbH, Moormerland, Germany) with a custom-made bottom frame. The UAS has four rotors in a "+" configuration, each with a two-blade propeller, and weighs 2.5 kg in total. The propeller blade radius is 0.1374 m, yielding a drone pitch of λ = 2.18 (λ = D/d), where D is the diameter of the drone (i.e., the distance between two opposite rotors) and d is the radius of the propeller.

FIG. 1.

(Color online) Hemispherical array setup and UAS with near field microphones and binaural head as the target receiver position.


The UAS was rigidly attached to a heavy support stand, blocking all six degrees of freedom. The geometric center of the drone is at a height of 1.57 ± 0.01 m from the floor. To capture the radiation pattern, an 18-microphone hemispherical array was built with an average radius of 0.98 ± 0.10 m. The configuration of the microphones, shown in Fig. 2, covers a hemispherical region whose center matches the drone's geometric center. The microphones are 1/4 in. externally polarized integrated circuit piezoelectric microphones [GRAS (Holte, Denmark) 40-PH]. Microphone 1 was covered with a windscreen foam as it lies directly underneath the propellers. Figure 2 shows a schematic of the exact microphone positions and the symmetric counterpart used in the spherical harmonic decomposition. For instance, microphones 1, 2, and 3 are at 0° azimuth and approximately at 70°, 50°, and 20° elevation, respectively, with respect to the horizontal plane through the center of the drone. The microphone numbering follows each arm of the array frame in a counterclockwise rotation starting from the bottom arm at 0° azimuth.

FIG. 2.

Microphone array geometry and the symmetric counterpart on the yz plane.


Conventionally, psychoacoustic metrics are evaluated for a listener position in the far-field. A binaural head [Head Acoustics (Herzogenrath, Germany) HSU III.2] was placed 4 m away from the drone's geometric center, with the horizontal plane of the ears at a height of 1.82 ± 0.01 m from the floor, as shown in Fig. 1. For the remaining analysis in this paper, the recordings are taken from the right ear and equalized to remove the head and torso effect from the sound field.

The microphone signals are sampled at 52.1 kHz. Additionally, a laser probe [KEYENCE (Osaka, Japan) FS-V31P] with a detection time of 33 μs was placed under one of the propeller blades to measure the rotational speed at run time. Measurements were performed in operational conditions by controlling the transmitter [Graupner (Kirchheim unter Teck, Germany) MC-20] throttle joystick. The transmitter allows small increments and stable control of the vertical thrust since the joystick is not spring-loaded. The radiation pattern can be determined for a fixed position with varying rotor speeds in rpm, which simulates a hovering condition for different payload capacities.

Rotor interaction plays a role in sound generation, especially in forward motion and descent maneuvers. For instance, forward motion occurs when the rotational speed of the front rotors is different from the rear rotors and the UAS is slightly tilted forward. In the descent maneuver, the noise generation is more complex as it features fluctuations in sound pressure levels (SPLs) due to unsteady airflow conditions (Tinney and Sirohi, 2018). In this paper, the rotor interaction from maneuvers is not accounted for in the measurement as the drone is at a fixed orientation. Additionally, the effect of ground reflections is minimized during measurements but not fully suppressed, especially at low frequencies.

This section deals with the representation of sound pressure directivity using a spherical harmonic expansion. The aim is to obtain a continuous representation of the drone's directivity from spatially distributed recordings of the drone's noise.

A spherical harmonics expansion of the sound pressure at a given wavenumber k=ω/c, with ω the circular frequency and c the speed of sound, can be written as (Williams and Mann, 2000)

p(k, r, \gamma) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} b_n(kr)\, a_{mn}(k)\, Y_{mn}(\gamma),   (1)

where r is the distance from the source center to an arbitrary point in space; γ is the pair (θ, ϕ), with θ the elevation angle (θ ∈ [−π/2, π/2)) and ϕ the azimuth angle (ϕ ∈ [0, 2π)), represented in Fig. 3; b_n(kr) are the mode strength functions (also known as radial functions); a_{mn}(k) are the spherical harmonic expansion coefficients; and Y_{mn}(γ) are the spherical harmonic basis functions defined as

Y_{mn}(\gamma) = \sqrt{\frac{2n+1}{4\pi}\,\frac{(n-m)!}{(n+m)!}}\; L_{mn}(\sin\theta)\, e^{jm\phi},   (2)

where L_{mn}(·) are the associated Legendre polynomials, and j = √(−1). For sensors located on an open sphere, the radial function can be expressed as b_n(kr) = h_n^{(2)}(kr), where h_n^{(2)}(kr) is the spherical Hankel function of the second kind, which represents an outgoing wave, assuming e^{jωt} time dependency. Assuming a fixed sphere of radius R, the expansion coefficients are calculated as (Williams and Mann, 2000)

a_{mn}(k) = \frac{1}{b_n(kR)} \int_{\Omega} p(k, R, \gamma)\, Y_{mn}^{*}(\gamma)\, d\Omega,   (3)

where dΩ = sin θ dθ dϕ and the symbol * indicates complex conjugation. The pressure distribution on the sphere can be reconstructed at any angular coordinate for a given wavenumber K as

p(K, R, \gamma) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} \left[ \int_{\Omega} p(K, R, \gamma')\, Y_{mn}^{*}(\gamma')\, d\Omega \right] Y_{mn}(\gamma),   (4)

where γ′ = (θ′, ϕ′) are the coordinates of the sampled sound pressure field and dΩ = sin θ′ dθ′ dϕ′.
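A minimal sketch of evaluating these basis functions with SciPy's `sph_harm`, which takes the azimuth and the polar angle, so the elevation convention used here must be converted first:

```python
import numpy as np
from scipy.special import sph_harm

def Y_mn(m: int, n: int, elevation: float, azimuth: float) -> complex:
    """Spherical harmonic Y_mn for elevation measured from the horizontal
    plane; scipy expects (m, n, azimuthal angle, polar angle)."""
    return sph_harm(m, n, azimuth, np.pi / 2 - elevation)

# Y_00 is constant over the sphere: 1 / (2 * sqrt(pi)) ≈ 0.2821
print(round(abs(Y_mn(0, 0, 0.4, 1.0)), 4))  # → 0.2821
```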

FIG. 3.

(Color online) Spherical coordinate system.


In practice, the infinite summation is truncated at the Nth order, such that n=0,,N. This leads to a set of linear equations

\mathbf{p} = \mathbf{Y}\,\mathbf{a},   (5)

where

\mathbf{Y} = \begin{bmatrix} Y_{0,0}(\gamma_1) & Y_{1,-1}(\gamma_1) & \cdots & Y_{N,N}(\gamma_1) \\ \vdots & \vdots & \ddots & \vdots \\ Y_{0,0}(\gamma_Q) & Y_{1,-1}(\gamma_Q) & \cdots & Y_{N,N}(\gamma_Q) \end{bmatrix}

is the matrix of spherical harmonics, adopting the ambisonic channel number notation (Zotter and Frank, 2019), evaluated at the sensors' angles; it has Q rows, one per microphone sampling the sound pressure at the coordinate pairs γ_i, and (N+1)² columns. The system is solved in the least squares sense, leading to the vector of expansion coefficients

\mathbf{a} = \mathbf{Y}^{\dagger}\,\mathbf{p},   (6)

where Y† denotes the Moore–Penrose pseudo-inverse of Y, a is the vector [a_{0,0}, a_{1,-1}, …, a_{N,N}]^T with (N+1)² entries, and p = [p(K,R,γ_1), …, p(K,R,γ_q), …, p(K,R,γ_Q)]^T with Q entries. Equation (6) is known as a model-based encoding (Politis and Gamper, 2017), where the sound field is represented by a series of spherical harmonics of order N on a surface of radius R.
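A minimal sketch of the model-based encoding of Eq. (6), assuming SciPy's `sph_harm` and elevation/azimuth angles in radians (the helper names are illustrative; the radial functions are omitted, so the coefficients describe the pressure on the measurement sphere directly):

```python
import numpy as np
from scipy.special import sph_harm

def sh_matrix(elevations, azimuths, N):
    """Q x (N+1)^2 matrix of spherical harmonics in ACN order."""
    cols = [sph_harm(m, n, azimuths, np.pi / 2 - elevations)
            for n in range(N + 1) for m in range(-n, n + 1)]
    return np.stack(cols, axis=1)

def shd_encode(p, elevations, azimuths, N):
    """Least-squares (model-based) encoding: a = pinv(Y) @ p."""
    return np.linalg.pinv(sh_matrix(elevations, azimuths, N)) @ p
```

With Q well-distributed sensors and Q ≥ (N+1)², the pseudo-inverse recovers the coefficients of any order-N field exactly.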

Truncation of the series in Eq. (4) introduces errors and limits the validity of the sound field representation in terms of wavelength. The limits depend on the expansion order N, the number of microphones Q, and how they are distributed on the sphere. An expansion with order N ≤ ⌊√Q − 1⌋ can represent sound fields containing frequencies up to f_max < Nc/(2πR) (Zotter and Frank, 2019), where ⌊·⌋ is the floor operator.
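These two rules of thumb can be checked numerically (a minimal sketch using the values of this setup; the function names are illustrative):

```python
import math

def max_order(Q: int) -> int:
    """Highest expansion order supported by Q microphones: N <= floor(sqrt(Q) - 1)."""
    return math.floor(math.sqrt(Q) - 1)

def f_max(N: int, R: float, c: float = 343.0) -> float:
    """Aliasing-free upper frequency limit, f_max < N c / (2 pi R)."""
    return N * c / (2 * math.pi * R)

print(max_order(18))            # → 3 (18 microphones support a third-order expansion)
print(round(f_max(3, 1.0), 2))  # → 163.77 (Hz, for N = 3 and R = 1 m)
```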

This section describes the forward propagation model for the exterior spherical acoustic holography problem. The reconstruction of the sound field using a spherical microphone array has been proposed (Fernandez-Grande, 2016; Williams and Takashima, 2010). However, different from these previous studies, the exterior problem in this paper consists of a source inside an open sphere, and the goal is to evaluate the psychoacoustic metrics at the reconstructed far-field position.

Figure 4 shows a schematic of the problem, where a source is enclosed by a sphere Γs of radius R. From the sound pressure sampled on Γs, p(K, R, γ_q), the sound field can be reconstructed at any point r exterior to or on Γs. The reconstruction assumes free-field conditions and that the source is fully enclosed by Γs. The forward reconstruction is given by (Williams and Mann, 2000)

P_{mn}(k, r) = \frac{h_n^{(2)}(kr)}{h_n^{(2)}(kR)}\, P_{mn}(k, R),   (7)

where Pmn(k,r)=bn(kr)amn(k) is the spherical wave spectrum.

FIG. 4.

Forward reconstruction for the exterior spherical acoustic holography problem, where Γs is the sphere of measurement that encloses the source, Ωi is the interior region of validity, and Ωe is the exterior region of validity.


Equation (7) can be introduced into Eq. (1) to obtain the propagated sound field at any position (r,γ).
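The radial propagator in Eq. (7) can be sketched with SciPy's spherical Bessel functions (the helper names are illustrative, not from the paper):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def h2(n: int, x: float) -> complex:
    """Spherical Hankel function of the second kind, h_n^(2)(x) = j_n(x) - i y_n(x)."""
    return spherical_jn(n, x) - 1j * spherical_yn(n, x)

def propagate_mode(P_mn_R: complex, n: int, k: float, R: float, r: float) -> complex:
    """Forward-propagate one spherical wave spectrum entry from radius R to r."""
    return P_mn_R * h2(n, k * r) / h2(n, k * R)
```

For n = 0 the magnitude of the propagator reduces to R/r, the familiar spherical spreading law.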

Last, to compute the psychoacoustic metrics, the propagated sound field spectrum must be transformed to the time domain by the inverse Fourier transform as follows:

p(t, r, \gamma) = \mathcal{F}^{-1}\!\left\{ \sum_{n=0}^{\infty} \sum_{m=-n}^{n} P_{mn}(k, R)\, \frac{h_n^{(2)}(kr)}{h_n^{(2)}(kR)}\, Y_{mn}(\gamma) \right\},   (8)

where [h_n^{(2)}(kr)/h_n^{(2)}(kR)] Y_{mn}(γ) defines a set of elementary wave functions, and the frequency dependence of the wavenumber is implicit.

In practice, the discrete version of the inverse Fourier transform is performed on the truncated spherical harmonic expansion. Also, note that the maximum frequency of the reconstructed sound pressure depends on the radius of the reconstruction. Above the cut-off frequency, the sound field might suffer from the appearance of virtual sources due to spatial aliasing. In some situations, these virtual sources modify the reconstructed sound field and do not accurately represent the noise radiation to the listener.

The annoyance caused by drones can be influenced by loudness, tonality, sharpness, fluctuation strength, and roughness (Gwak et al., 2020). In this section, these psychoacoustic metrics are briefly presented.

Loudness, in sone, quantifies how the human hearing system perceives the amplitude of a signal. The higher the sone value, the louder a human listener will perceive the sound. Zwicker loudness is calculated according to ISO 532-1 (2017). Tonality describes the strength of tonal components in a given signal. It derives from a hearing model as described in ECMA-74 (2019) and is quantified in tonal units, t.u.HMS.

Sharpness, as defined in DIN 45692 (2009), quantifies the amount of high-frequency content in the sound and represents a normalized weighted loudness. This makes the metric loudness-independent and a measure of annoyance that quantifies the contribution of the higher-frequency components in acum. The reference sound of 1 acum is a narrowband noise, one critical band wide, at a center frequency of 1 kHz with a level of 60 dB.

Fluctuation strength, in vacil, describes the subjective perception of slowly modulated sounds (Zwicker and Fastl, 2013). One vacil is the fluctuation strength produced by a 1000 Hz tone of 60 dB that is 100% amplitude modulated four times per second. This metric, in particular, was previously reported to be a good indicator for assessing the perception of turbulence effects in UAS contra-rotating propellers (Torija et al., 2021). Roughness, in asper, also deals with the perception of temporal variation of sounds. In contrast with fluctuation strength, roughness is triggered by rapid variations of sound up to 500 Hz. An increase in roughness is commonly perceived as aggressive and annoying without showing a difference in loudness. This metric is computed based on band-passed signals and the specific hearing model loudness as described in ECMA-418‐2 (2020).
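For illustration, the 1 vacil reference stimulus described above can be synthesized directly (a sketch; level calibration and modulation conventions vary between implementations):

```python
import numpy as np

def vacil_reference(fs: int = 48000, dur: float = 2.0) -> np.ndarray:
    """1 kHz tone at 60 dB SPL (re 20 uPa), 100% amplitude-modulated at 4 Hz.
    The envelope is scaled to [0, 1]; other conventions scale it to [0, 2]."""
    t = np.arange(int(fs * dur)) / fs
    peak = np.sqrt(2.0) * 20e-6 * 10 ** (60.0 / 20.0)   # carrier peak pressure, Pa
    envelope = (1.0 + np.cos(2.0 * np.pi * 4.0 * t)) / 2.0
    return peak * envelope * np.sin(2.0 * np.pi * 1000.0 * t)
```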

In this section, the processing of the measurement data is presented. The spatial information, captured by the hemispherical microphone array, allows for the construction of the noise radiation pattern using the spherical harmonics representation. The noise emission is further investigated in the vertical plane for different rotor speeds. Additionally, from the exterior spherical acoustic holography, the psychoacoustic metrics are evaluated in the far-field.

The microphone array (Fig. 2) captures the sound pressure data in the time domain. The data are transformed to the frequency domain and represented as a function of the wavenumber k. The radiation pattern is obtained by reconstructing the sound pressure field on a sphere with the same radius as the microphone array.

Assuming a perfectly distributed spherical microphone array with R = 1 m, an upper frequency of f_max = 163.77 Hz is obtained for an aliasing-free third-order spherical harmonic decomposition. This limit allows accurate spherical harmonic reconstruction up to the first BPF tone for rotor speeds below 4900 rpm. For higher rpm and higher frequencies, spatial aliasing can occur in the radiation pattern in the form of contamination by higher-order spherical harmonic components.

Due to the limitations of the array configuration, only half the spherical domain is captured. To circumvent this, symmetry in the yz plane is assumed as shown in Fig. 2. This assumes that the front (+x axis) and rear (−x axis) propellers have identical sound radiation signatures for the frequency range valid for the spherical harmonic decomposition. The symmetry assumption plays a major role in the spherical harmonic decomposition and needs to be carefully considered.

Since the valid frequency range covers up to the first BPF for rotor speeds lower than 4900 rpm, it is fair to assume that the expansion of the sound field will reasonably represent the tonal contribution of the propellers' noise. Considering the drone's rotating-blade noise, the tonal components are attributed to the steady-state aerodynamics of the blades (Roger and Moreau, 2020). Since the drone was fixed and the power input to the motors was solely controlled by the pilot's input, without any additional controller (e.g., feedback controllers with gyroscopes), it is reasonable to assume steady-state aerodynamics in this setting. Considering the latter, one can assume that the generated sound field exhibits axial symmetry with respect to the z axis (refer to Fig. 1).

To check the benefit of the hemispherical array, a comparison with a simpler configuration is performed. The simple configuration assumes an azimuth rotational symmetry where the sensors in the 0° azimuth are rotated every 60°. Hence, this configuration employs sensors 1, 2, 3, 10, 11, and 12, which are approximately at 0° azimuth. In total, both arrays have the same number of sampled microphones.

Let the reconstruction error, ϵ¯, be defined as the averaged error in space and frequency

\bar{\epsilon} = \frac{1}{N_f} \sum_{i=1}^{N_f} \sqrt{ \frac{\sum_{j=1}^{Q} \left| p(f_i, \gamma_j) - p_r(f_i, \gamma_j) \right|^2}{\sum_{j=1}^{Q} \left| p(f_i, \gamma_j) \right|^2} },   (9)

where p(f_i, γ_j) and p_r(f_i, γ_j) are the measured and reconstructed sound pressure fields at the same radius R (omitted) and direction γ_j, respectively, and ||·|| is the Euclidean norm. The reconstruction is done considering a third-order spherical harmonics basis, and the frequency range employed in the analysis spans the valid frequency region of the spherical harmonic expansion (i.e., 20–300 Hz). Note that the upper part of this interval exceeds the upper-frequency limit, so some error due to spatial aliasing is expected, especially at the higher rpm values.
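As a sketch, the averaged error can be computed for pressure arrays of shape n_freqs × n_directions (the exact averaging convention is an assumption):

```python
import numpy as np

def mean_reconstruction_error(p: np.ndarray, p_r: np.ndarray) -> float:
    """Relative error: Euclidean norm over directions, averaged over frequency."""
    num = np.linalg.norm(p - p_r, axis=1)   # one norm per frequency bin
    den = np.linalg.norm(p, axis=1)
    return float(np.mean(num / den))
```

A value of 0 indicates a perfect reconstruction; a value of 1 is as bad as reconstructing silence.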

Figure 5 shows the averaged reconstruction error as a function of rotor speed.

FIG. 5.

(Color online) Averaged error of the reconstructed source directivity considering two microphone array configurations with respect to rpm.


It can be noticed that the error for the azimuth rotational symmetry configuration is around twice that of the hemispherical configuration. Also, both the error and the relative difference between the two configurations increase with rpm. This indicates that the hemispherical array provides a considerable gain in accuracy compared to the simpler configuration. Therefore, the results in the remainder of this section are obtained with the hemispherical array.

Figure 6 shows the magnitude of the estimated directivity pattern for the real component of the sound pressure field considering a third-order spherical harmonics basis. The directivity is shown for rotor speeds of 4732, 4987, and 6152 rpm at their respective BPFs (158, 166, and 205 Hz). These rpm values are chosen because they represent a thrust capacity that allows the drone to take off and maintain flight level.

FIG. 6.

(Color online) Magnitude of the directivity pattern obtained from a third-order spherical harmonic representation for 4732 rpm (a), 4987 rpm (b), and 6152 rpm (c) at their respective first BPFs.


The observed directivity patterns are symmetric with respect to the yz plane, by construction. It can also be noticed that the directivity patterns are fairly omnidirectional with interference regions of low noise emission. The null regions are attributed to the symmetry of the drone and are captured by the superposition of spherical harmonics of similar magnitude and opposite phase. The highest SPLs are concentrated in the +y axis region (90° azimuth) in all three rpm cases. This indicates that the tonal component corresponding to the first BPF is predominantly generated by the propeller on the +y axis. Additionally, it can be observed that the overall SPL magnitude for the 4987 rpm case in Fig. 6(b) is smaller than in the lower rpm case of Fig. 6(a). This can be attributed to lower energy levels at the BPF due to a second adjacent tonal component caused by differences in rotor speed between propellers.

To better evaluate the spherical harmonic reconstruction, results at 0° and 63° azimuth angles are compared against the measured pressure points. The signals for each elevation are obtained from the microphones closest to 0° azimuth (i.e., microphones 1, 2, 3, 10, 11, and 12) and closest to 63° azimuth (i.e., microphones 1, 4, 7, and 10) according to Fig. 2. These particular directions are chosen because the 0° azimuth has the highest density of sampled microphones, and 63° azimuth is in the region of the highest SPLs. Note that the measured points do not correspond exactly to the selected azimuths and are shown just as a reference. Figure 7 shows the spectral SPL in polar coordinates with respect to elevation for three rotor speeds at their respective BPFs. The directivity pattern is derived from the third-order spherical harmonic decomposition.

FIG. 7.

(Color online) SPL in polar coordinates as a function of elevation at 0° and 63° azimuth. Solid line, derived from third order spherical harmonic reconstruction; solid circle, experimental data for 4732, 4987, and 6152 rpm at their respective BPFs.


In general, a fairly good match can be observed for the analyzed frequencies and rotor speeds. Differences between the reconstructed spherical harmonic directivity and the measured spectrum can be observed at some points. These can be attributed to the imprecise positions of the sampled microphones, the limited number of sensors, and the order truncation. In all polar plots, a null point can be observed close to the 90° elevation. Besides that, the variation in SPL with elevation can be as much as 12 dB in some cases, which represents a substantial difference in noise radiation directivity. Note that, in the 63° azimuth cases, the number of sensors is limited by the microphone array geometry. Nonetheless, the main lobes for the three rpm cases are still properly captured.

In summary, the hemispherical microphone array has been compared with a simpler configuration and shown to have significantly higher accuracy. Additionally, the displayed radiation patterns are fairly omnidirectional with interference regions of low noise emission, with the highest levels found on the +y axis. The observed directivity patterns are in good agreement with the sampled points, which indicates that the reconstruction is feasible with a third-order spherical harmonic basis and provides non-trivial information. Despite being above the upper-frequency limit, the spherical harmonic reconstruction for the 6152 rpm case is in good agreement with measurements. To further improve accuracy and increase the maximum allowed frequency of the analysis, more sensors would be necessary.

This section presents the UAS noise emission levels at different elevation angles and rotor speeds. As observed in Sec. III A, noise emissions are the highest around the 90° azimuth. However, due to the array construction, the 0° azimuth has a higher microphone density for more elevation angles and is chosen for this analysis.

In previous works (Heutschi et al., 2020; Schäffer et al., 2021), the emission level has been investigated using the SPL at 30° elevation with respect to the propeller plane at 1 m distance in free-field conditions, denoted Lp,30°. Here, the SPL is computed considering the root-mean-squared (rms) values of the time signals, which have been bandpass filtered between 20 Hz and 20 kHz, in particular, to discard flow-induced low-frequency pseudo-sound. Figure 8 shows the SPL curves at 10°,30°,50°, and 70°, which are taken from microphones 12, 3, 2, and 1, respectively, according to Fig. 2.

FIG. 8.

(Color online) SPL for 10°,30°,50°, and 70° emission directivity across various rotor speeds.


The increase in rotor speed is observed to induce an increase in SPL at all probed elevation angles. At low rotor speeds, the Lp,30° is the highest emission level, and the Lp,70° is the lowest. The Lp,10° shows the largest variation, being among the highest noise emissions at low rpm and decreasing considerably at higher rpm. The decrease in level of the Lp,10° can be understood as a change in the source directivity with the throttle. Additionally, the variation in SPL among the investigated elevations depends on the rotor speed. For instance, at 2600 rpm, the SPLs vary less than 1 dB across elevations, whereas at 4094 rpm, the variation can be as high as 4 dB.
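The rms-based emission level described above can be sketched as follows (the band-pass filter design is an assumption; the paper does not specify one):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def emission_spl(x: np.ndarray, fs: float) -> float:
    """Band-limit a pressure signal (Pa) to 20 Hz-20 kHz and return
    the rms-based SPL in dB re 20 uPa."""
    sos = butter(4, [20.0, 20000.0], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    return 20.0 * np.log10(np.sqrt(np.mean(y ** 2)) / 20e-6)
```

A 1 kHz tone with an rms pressure of 0.02 Pa, well inside the passband, evaluates to roughly 60 dB SPL.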

To better assess the differences in level, the SPL is computed in one-third octave bands for two selected rotor speeds as shown in Fig. 9. The one-third octave band has been chosen because it allows for easy identification of the frequency bands with higher energy when compared to the narrowband analysis and also in view of future standardization. Additionally, the one-third octave band analysis has been previously reported (Gwak et al., 2020; Heutschi et al., 2020; Schäffer et al., 2021), which is useful for comparing the different UASs with previous and future literature.

FIG. 9.

(Color online) SPL in one-third octave bands for 10°, 30°, 50°, and 70° emission directivity. Top, 4987 rpm; bottom, 6152 rpm.


The BPF coincides with the highest peaks, between the one-third octave bands centered at 160–200 Hz for the 4987 rpm case and between 200 and 250 Hz for the 6152 rpm case. The BPF is approximately 10 dB higher than the shaft rate (i.e., the first peak around 80 and 100 Hz for the 4987 and 6152 rpm cases, respectively), with the exception of the 70° direction. This lower peak level is in agreement with the low emission level observed in Fig. 8. It is interesting to note that at 6152 rpm, the emitted levels at 30° and 50° are very similar (see Fig. 8), but their spectral content is quite different (see Fig. 9), especially at lower and higher frequencies. At low frequencies, below 50 Hz, the 70° direction has much higher levels than the other directions, which can be attributed to the downward airflow across the microphone diaphragm. Similar behavior has been reported by Hessler et al. (2008). At higher frequencies, it is possible to distinguish two trends among the four directions, with higher SPL in the 10° and 30° directions and slightly lower SPL in the 50° and 70° directions. This behavior can be explained by a change in the radiation directivity with respect to the elevation angle and is consistent for both rotor speeds. It is worth highlighting that the Lp,70° was inferred from the microphone with a wind protector without any additional correction to the signal, which can account for high-frequency attenuation at this position.
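The mapping from a tonal component to its one-third octave band can be sketched as follows (base-two band centers referenced to 1 kHz; nominal band labels differ slightly from the exact centers):

```python
import math

def third_octave_center(f: float) -> float:
    """Center frequency of the base-two one-third-octave band containing f."""
    band_index = round(3 * math.log2(f / 1000.0))
    return 1000.0 * 2.0 ** (band_index / 3.0)

# The 166 Hz BPF tone (4987 rpm) falls in the band centered near 157.5 Hz,
# whose nominal label is the 160 Hz band.
print(round(third_octave_center(166.0), 1))  # → 157.5
```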

In summary, the highest noise emission across all rotor speeds is found in the 30° direction for the 0° azimuth. This observation is in agreement with other previously published works on UASs with similar characteristics (Heutschi et al., 2020; Schäffer et al., 2021). However, as noticed in Sec. III A, the highest levels are seen close to the 90° azimuth. At this azimuth, the highest noise emission is around 50° elevation.

In many situations, SPLs may not reflect all features of the radiated noise, especially as perceived by human subjects. Therefore, this section aims to estimate the psychoacoustic metrics at far-field positions, accounting for the drone noise directivity measured with the hemispherical microphone array. To this end, two analyses are presented. First, the forward propagation model, presented in Sec. II C, is validated. Then the psychoacoustic metrics, as described in Sec. II D, are estimated and compared with the target metrics at a reference (i.e., target) position. For this analysis, the target signal, measured by the right ear of the binaural head, is employed. The time signals are bandpass filtered to the valid frequency region of the spherical harmonic expansion (i.e., 20–300 Hz). The psychoacoustic metrics are computed from the time recordings using Simcenter Testlab Neo. The signals are assumed to be stationary per working condition (i.e., rpm), and the single value metrics are computed using the 90th percentile.
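The preprocessing described above can be sketched as follows (an assumed minimal pipeline using scipy, not the Simcenter Testlab Neo implementation): band-limit the time signal to the valid SHD region, and collapse a time-varying metric into its 90th-percentile single value.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandlimit(p, fs, f_lo=20.0, f_hi=300.0):
    """Zero-phase band-pass to the valid spherical-harmonic region."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, p)

def single_value(metric_vs_time):
    """Collapse a time-varying metric (e.g., loudness vs time) into a
    single value via the 90th percentile, one value per working condition."""
    return float(np.percentile(metric_vs_time, 90))
```

Zero-phase filtering (`sosfiltfilt`) is chosen here so that the filter does not bias a phase comparison like the one in Fig. 10; the actual filter type and order used in the paper are not specified, so these are placeholders.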

Figure 10 shows the spectrum of the far-field target and of the forward propagated signal for two rotor speeds. The magnitudes of both signals are in good agreement for both cases. At their respective BPFs [i.e., 166 Hz (a) and 205 Hz (b)], the propagated signal overestimates the energy level. This deviation can be attributed to the proximity of the BPF to the spatial aliasing frequency limit and to a strong attenuation of the target signal caused by the binaural recording and the applied equalization filter. For the phase, the observed deviations are due to mismatches in the phase unwrapping and do not have a significant effect.

FIG. 10.

(Color online) Magnitude and unwrapped phase of the sound pressure for the forward propagated signal compared to the target signal for 4987 rpm (a) and 6152 rpm (b) from the right ear binaural head.


Table I shows the resulting psychoacoustic metrics computed for the target and propagated (i.e., reconstructed) signals. As a baseline reference, the overall sound pressure level (OSPL) is also presented in the table. The reconstructed signal in the 4987 rpm case shows an OSPL difference of 2.5 dB, while no difference is observed in the 6152 rpm case.

TABLE I.

Psychoacoustic metrics at the far-field position from the bandpass filtered target signal and the forward propagation model.

                                  4987 rpm               6152 rpm
                             Target  Reconstructed  Target  Reconstructed
OSPL [dB(A)]                  50.1       52.6        60.0       60.0
Loudness (sone)               3.86       4.49        6.97       7.02
Tonality (t.u.HMS)            1.05       0.69        1.42       0.88
Roughness (asper)             0.12       0.23        0.15       0.25
Fluctuation strength (vacil)  0.10       0.08        0.09       0.02
Sharpness (acum)              0.40       0.42        0.40       0.48

Regarding the psychoacoustic metrics, the results are in good agreement between the target and propagated signals. Loudness is slightly overestimated for the propagated signal, with an absolute difference below 0.15 sone for both rpm values. Fluctuation strength shows very small but consistent values for both the target and propagated signals. Sharpness shows no variation in the 4987 rpm case and a small deviation of 0.05 acum in the 6152 rpm case, which is an expected result since the analysis is restricted to a low-frequency region. However, notable differences between the measured and reconstructed signals are observed in tonality and roughness. Tonality is underestimated by 0.36 and 0.54 t.u.HMS for the 4987 and 6152 rpm cases, respectively. Roughness is overestimated by 0.13 and 0.28 asper for the 4987 and 6152 rpm cases, respectively. Despite the low values, this represents a doubling of the target roughness, and overall, these variations can lead to differences in the auralized signal.

Figure 11 shows the loudness and roughness curves on the Bark scale. The curves reveal that the propagated signal follows the target loudness curve at the valid critical bands (i.e., below the third Bark band). However, despite the similar single values, the peak loudness is underestimated by the reconstructed signal. This indicates that the available energy is spread out to adjacent Bark bands, which in turn makes the reconstructed signal less tonal. This is in agreement with the tonality curves and with the results of Table I. In the case of roughness, the results in Fig. 11(a) show good agreement for the valid Bark bands.
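The restriction to the first three Bark bands follows directly from the 20–300 Hz band limit. As a quick check (using Zwicker and Fastl's critical-band-rate approximation, not the Simcenter implementation), 300 Hz maps to roughly 2.9 Bark:

```python
import numpy as np

def hz_to_bark(f):
    """Zwicker & Fastl critical-band rate z(f) in Bark (approximation)."""
    f = np.asarray(f, dtype=float)
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

z_hi = hz_to_bark(300.0)  # upper limit of the valid SHD region, ~2.9 Bark
z_lo = hz_to_bark(20.0)   # lower limit, ~0.2 Bark
```

Hence only the bands below roughly the third Bark band carry validly reconstructed energy, consistent with the comparison above.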

FIG. 11.

(Color online) Loudness, roughness, and tonality in Bark scale for the forward propagated signal compared to the target signal for 4987 rpm (a) and 6152 rpm (b).


In summary, the psychoacoustic metrics computed using the reconstructed sound pressure field are compared with a reference signal. The single value results are estimated for two rotor speeds, and good consistency is observed. The obtained metrics are within an acceptable confidence interval and show the capability of the proposed methodology. The restriction to low frequencies is a current drawback of this analysis, driven mainly by the limited number of sensors in the hemispherical microphone array. Furthermore, deviations are observed in some metrics (e.g., tonality), which require further investigation. Nonetheless, the results are encouraging and demonstrate the benefit of the proposed methodology.

This paper proposes an experimental methodology to measure the sound pressure of a VTOL UAS operating at constant thrust in a simulated hovering condition. The measurements allow the evaluation of the sound pressure directivity by means of a spherical harmonic representation, as well as in terms of psychoacoustic metrics.

The spherical harmonic decomposition is derived from the hemispherical microphone array measurements by assuming yz plane symmetry. The directivity pattern was observed to change considerably with the rotational speed of the propellers. As a verification, the obtained radiation patterns are compared with the measured spectral SPL in polar coordinates. The third-order spherical harmonic reconstruction is suitable to capture the radiation pattern of the drone and shows reasonable accuracy up to the first BPF. Additionally, the noise emission results with respect to elevation show that the Lp,30° metric represents a meaningful emission level in constant throttle conditions at the 0° azimuth. Finally, the psychoacoustic metrics are computed in the far field using the exterior spherical acoustic holography formulation. The estimated metrics are compared with metrics computed from measured data at a reference point and are within a tolerable range given the restricted frequency range of the analysis. These results demonstrate that the proposed methodology allows for the evaluation of psychoacoustic metrics at any point external to the drone and can be extended to other drones and to other relevant metrics (e.g., impulsiveness).
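The far-field evaluation step can be sketched as follows (a minimal illustration of the exterior-holography radial propagator under the assumptions above, not the authors' code; the Hankel-function kind depends on the time convention, and the second kind is assumed here): each spherical harmonic coefficient of degree n is scaled by the ratio of spherical Hankel functions between the evaluation radius and the array radius.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def hankel2(n, x):
    """Spherical Hankel function of the second kind, h_n^(2)(x)."""
    return spherical_jn(n, x) - 1j * spherical_yn(n, x)

def propagate(coeffs, k, r1, r2, order=3):
    """Scale flattened SH coefficients (degrees 0..order) from r1 to r2:
    C_nm(r2) = C_nm(r1) * h_n(k r2) / h_n(k r1)."""
    out = np.asarray(coeffs, dtype=complex).copy()
    idx = 0
    for n in range(order + 1):
        ratio = hankel2(n, k * r2) / hankel2(n, k * r1)
        for _ in range(2 * n + 1):  # all m = -n..n share the radial term
            out[idx] *= ratio
            idx += 1
    return out

# Sanity check: the monopole (n = 0) term obeys spherical spreading,
# |h_0(k r2) / h_0(k r1)| = r1 / r2.
k = 2.0 * np.pi * 166.0 / 343.0  # wavenumber at the 4987 rpm BPF
c = propagate(np.ones(16, dtype=complex), k, r1=2.0, r2=20.0)
```

With a third-order expansion, (3 + 1)² = 16 coefficients are carried; the radii and the BPF-based wavenumber above are illustrative values, not the paper's geometry.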

The current hemispherical microphone array has some shortcomings that need to be addressed. First, despite demonstrating the benefit of the hemispherical array over a simple azimuthal rotational symmetry, the validity of the axis symmetry in the yz plane has not been proven in the present work. Second, the irregular distribution of microphones in the array does not provide enough sensors to validate the emissions at the 90° azimuth, which was observed to have the highest SPLs. Additionally, the limited number of sensors constrains the allowed spherical harmonic order and the maximum frequency of validity. In future work, besides addressing these shortcomings, the methodology can be adapted to measure other operational conditions, such as takeoff and forward flight, through wind tunnel and open-jet testing, as long as the microphone array is not directly on the open-jet inlet. An ISO standard on noise measurements for UASs is under development (ISO/CD 5305, 2022), and future work will aim at incorporating the requirements specified in the standard into the experimental protocol proposed in the present work. Additionally, the methodology can be used to perform in situ noise predictions and can be coupled with a real-time model (Lemmens et al., 2014) to auralize drone fly-overs along controllable, pre-defined trajectories for assessing noise impact on the environment using emission levels, psychoacoustic metrics, and listening tests. Finally, such a platform can be used to generate training data for drone noise detection algorithms.

The research leading to these results has received funding from the Research Foundation Flanders (FWO Mandate SB 1S86520N) and from the European Research Council under the European Union's Horizon 2020 research and innovation program/ERC Consolidator Grant: SONORA (Grant No. 773268), PBNv2 (Grant No. 721615), and VRACE (Grant No. 812719). The authors wish to thank Sacha Morales for his support in the measurement campaign and Dr. Claudio Colangeli and Dr. Karl Janssens from Siemens Digital Industries Software for their support and valuable feedback. This paper reflects only the authors' views, and the Union is not liable for any use that may be made of the contained information.

1. Baars, W. J., Bullard, L., and Mohamed, A. (2021). "Quantifying modulation in the acoustic field of a small-scale rotor using bispectral analysis," in Proceedings of the AIAA Scitech 2021 Forum, January 11–15 and 19–21.
2. Bian, H., Tan, Q., Zhong, S., and Zhang, X. (2021). "Assessment of UAM and drone noise impact on the environment based on virtual flights," Aerosp. Sci. Technol. 118, 106996.
3. Christian, A. W., and Cabell, R. (2017). "Initial investigation into the psychoacoustic properties of small unmanned aerial system noise," in Proceedings of the 23rd AIAA/CEAS Aeroacoustics Conference, June 5–7, Denver, CO.
4. DIN 45692 (2009). "Measurement technique for the simulation of the auditory sensation of sharpness," Technical Report (Deutsches Institut für Normung, Berlin).
5. ECMA-418-2 (2020). "Psychoacoustic metrics for ITT equipment—Part 2 (models based on human perception)" (Ecma International, Geneva, Switzerland).
6. ECMA-74 (2019). "Measurement of airborne noise emitted by information technology and telecommunications equipment—A standard for measuring and reporting noise emission" (Ecma International, Geneva, Switzerland).
7. European Union (2018). "Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91" (European Union, Brussels, Belgium).
8. European Union (2020). "Commission Delegated Regulation (EU) 2020/1058 of 27 April 2020 amending Delegated Regulation (EU) 2019/945 as regards the introduction of two new unmanned aircraft systems classes" (European Union, Brussels, Belgium).
9. Fernandez-Grande, E. (2016). "Sound field reconstruction using a spherical microphone array," J. Acoust. Soc. Am. 139(3), 1168–1178.
10. Gwak, D. Y., Han, D., and Lee, S. (2020). "Sound quality factors influencing annoyance from hovering UAV," J. Sound Vib. 489, 115651.
11. Hessler, G. F., Hessler, D. M., Brandstätt, P., and Bay, K. (2008). "Experimental study to determine wind-induced noise and windscreen attenuation effects on microphone response for environmental wind turbine and other applications," Noise Control Eng. J. 56(4), 300–309.
12. Heutschi, K., Ott, B., Nussbaumer, T., and Wellig, P. (2020). "Synthesis of real world drone signals based on lab recordings," Acta Acust. 4(6), 24.
13. Intaratep, N., Alexander, W. N., Devenport, W. J., Grace, S. M., and Dropkin, A. (2016). "Experimental study of quadcopter acoustics and performance at static thrust conditions," in Proceedings of the 22nd AIAA/CEAS Aeroacoustics Conference, May 30–June 1, Lyon, France.
14. ISO 532-1 (2017). "Acoustics—Methods for calculating loudness—Part 1: Zwicker method" (International Organization for Standardization, Geneva, Switzerland).
15. ISO/CD 5305 (2022). "(Under development) General requirement of noise measurement of lightweight and small multirotor unmanned aircraft systems (UAS)" (International Organization for Standardization, Geneva, Switzerland).
16. Jiang, H., Zhou, T., Fattah, R. J., Zhang, X., and Huang, X. (2019). "Multi-rotor noise scattering by a drone fuselage," in Proceedings of the 25th AIAA/CEAS Aeroacoustics Conference, May 20–23, Delft, Netherlands.
17. Kapoor, R., Kloet, N., Gardi, A., Mohamed, A., and Sabatini, R. (2021). "Sound propagation modelling for manned and unmanned aircraft noise assessment and mitigation: A review," Atmosphere 12(11), 1424.
18. Lemmens, Y. C., Benoit, T., De Roo, R., and Verbeke, J. (2014). "Real-time simulation of an integrated electrical system of a UAV," in Proceedings of the SAE 2014 Aerospace Systems and Technology Conference, September 23–25, Cincinnati, OH, 2014-01-2169.
19. Politis, A., and Gamper, H. (2017). "Comparing modeled and measurement-based spherical harmonic encoding filters for spherical microphone arrays," in Proceedings of the 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), October 15–18, New Paltz, NY, pp. 224–228.
20. Pollow, M. (2014). "Directivity patterns for room acoustical measurements and simulations," Ph.D. thesis, RWTH Aachen University, Aachen, Germany.
21. Roger, M., and Moreau, S. (2020). "Tonal-noise assessment of quadrotor-type UAV using source-mode expansions," Acoustics 2(3), 674–690.
22. Schäffer, B., Pieren, R., Heutschi, K., Wunderli, J. M., and Becker, S. (2021). "Drone noise emission characteristics and noise effects on humans—A systematic review," Int. J. Environ. Res. Public Health 18(11), 5940.
23. Thai, A. D., De Paola, E., Di Marco, A., Stoica, L. G., Camussi, R., Tron, R., and Grace, S. M. (2021). "Experimental and computational aeroacoustic investigation of small rotor interactions in hover," Appl. Sci. 11(21), 10016.
24. Tinney, C. E., and Sirohi, J. (2018). "Multirotor drone noise at static thrust," AIAA J. 56(7), 2816–2826.
25. Torija, A. J., Chaitanya, P., and Li, Z. (2021). "Psychoacoustic analysis of contra-rotating propeller noise for unmanned aerial vehicles," J. Acoust. Soc. Am. 149(2), 835–846.
26. Torija, A. J., Li, Z., and Chaitanya, P. (2022). "Psychoacoustic modelling of rotor noise," J. Acoust. Soc. Am. 151(3), 1804–1815.
27. White, K., Bronkhorst, A. W., and Meeter, M. (2017). "Annoyance by transportation noise: The effects of source identity and tonal components," J. Acoust. Soc. Am. 141(5), 3137–3144.
28. Williams, E. G., and Mann, J. A. (2000). Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography (Academic, San Diego).
29. Williams, E. G., and Takashima, K. (2010). "Vector intensity reconstructions in a volume surrounding a rigid spherical microphone array," J. Acoust. Soc. Am. 127(2), 773–783.
30. Zotter, F. (2009). "Analysis and synthesis of sound-radiation with spherical arrays," Ph.D. thesis, University of Music and Performing Arts, Graz, Austria.
31. Zotter, F., and Frank, M. (2019). Ambisonics: A Practical 3D Audio Theory for Recording, Studio Production, Sound Reinforcement, and Virtual Reality (Springer, Cham, Switzerland).
32. Zwicker, E., and Fastl, H. (2013). Psychoacoustics: Facts and Models (Springer, New York).