In this study, a new research method combining psychoacoustic experiments and acoustic simulations is proposed for human echolocation research. A shape discrimination experiment was conducted with sighted people using pitch-converted virtual echoes from targets of dissimilar two-dimensional (2D) shapes. These echoes were simulated using a three-dimensional acoustic simulation based on the finite-difference time-domain method of Bossy, Talmant, and Laugier [(2004). J. Acoust. Soc. Am. 115, 2314–2324]. The experimental and simulation results suggest that echo timbre and pitch, determined by sound interference, may be effective acoustic cues for 2D shape discrimination. The newly developed research method may lead to more efficient future studies of human echolocation.

Bats navigate through space by actively producing ultrasounds and interpreting the returning echoes for spatial and object recognition (Moss and Surlykke, 2010). Similarly, humans can sense their environments using echolocation (Kolarik et al., 2014). For example, blind echolocation experts can discriminate the angular positions and identify the shapes and sizes of various targets using echolocation with self-generated mouth clicks (Thaler and Goodale, 2016). Moreover, sighted people can discriminate the wall thickness and materials of hollow cylinders and spheres by listening to pitch-converted echoes recorded using a typical bottlenose dolphin click within the ultrasonic range (DeLong et al., 2007). Pitch-converted ultrasonic broadband echoes also enable sighted people to discriminate the edge contours and textures of three-dimensional (3D) shapes (Sumiya et al., 2019). These findings suggest that appropriately designing the acoustic features of echolocation signals (e.g., frequency band and time-frequency structure) for the situation at hand is effective for human echolocation. We aim to investigate human echolocation and to propose effective, practical signal designs and sensing strategies for human echolocation research.

However, such experiments, like previous human echolocation studies, require physically preparing the actual targets and conducting additional acoustic measurements to evaluate the echo features. Performing experiments with targets of dissimilar shapes and materials therefore takes extra time and effort. Furthermore, researchers must avoid recording unwanted echoes, and various artifacts of the measurement system must be carefully corrected to acquire pure echo stimuli. In this study, motivated by these acoustical and methodological considerations, we adopted an acoustic simulation using a 3D elastic finite-difference time-domain (FDTD) method employed in previous studies (Padilla et al., 2006; Nagatani et al., 2008, 2009; Nagatani et al., 2017). Because any desired target shape and situation can be created in an acoustic simulation, this method is well suited to the quantitative evaluation of human echolocation capabilities. Moreover, echo features can be examined on the basis of the simulated sound distributions.

Therefore, this study proposes a research method that combines acoustic simulations and psychoacoustic experiments for human echolocation research. We conducted a two-dimensional (2D) shape discrimination experiment with sighted people using pitch-converted virtual echoes created by convolving simulated echo impulse responses with an echolocation signal. The echo impulse responses were simulated using the 3D elastic FDTD method. We also discuss effective acoustic cues for 2D shape discrimination based on the experimental and simulation results.

The reliability of FDTD simulations has been investigated in several studies by comparing the sound velocities, envelopes, and attenuation characteristics of simulated sounds with those of sounds recorded in actual environments (Padilla et al., 2006; Nagatani et al., 2008, 2009; Mizuno et al., 2011). Therefore, in this study, the 3D elastic FDTD method was used to calculate the echo impulse responses from virtual targets. The SimSonic3D software package (http://www.simsonic.fr/) (Bossy et al., 2004) was used for the 3D simulations. A previous FDTD study using SimSonic3D (Nagatani et al., 2017) also demonstrated a good fit between the propagation timing and the attenuation owing to sound diffusion obtained in actual measurements and in the FDTD simulation.

Four 2D-shaped virtual targets of dissimilar construction were prepared for the simulations [Fig. 1(A); square, rectangle, triangle, and circle]. Using audio editing software (Adobe Audition CC, Adobe, San Jose, CA), the target sizes were determined such that the differences in echo loudness (ITU-R BS.1770-3) were within ±1.5 LUFS (a difference of 1 LUFS corresponds to a difference of 1 dB) among the targets. As shown in Fig. 1(A), all target shapes had surface areas of 100 cm² in the XY plane, except the triangle, which had an area of 200 cm². The target material was assumed to be acrylonitrile butadiene styrene (ABS) resin, and the surrounding medium was assumed to be air. The material properties used for the two media in the simulations are listed in the supplementary material.1 The spatial and time resolutions of the simulations were 0.4 mm and 60 ns, respectively, which satisfied the stability condition for the 3D simulation.
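As an aside, the stability condition for an FDTD scheme on a cubic grid is the standard Courant–Friedrichs–Lewy (CFL) criterion, dt ≤ dx/(c_max √3). The following is a minimal R sketch of this check; the longitudinal sound speed assumed for ABS resin is a hypothetical round value (the actual material properties are given in the supplementary material).

```r
# Minimal sketch: CFL stability check for a 3D FDTD scheme on a cubic grid.
dx    <- 0.4e-3   # spatial resolution [m]
dt    <- 60e-9    # time resolution [s]
c_abs <- 2250     # assumed longitudinal sound speed in ABS resin [m/s]
c_air <- 343      # sound speed in air [m/s]

# Stability requires dt <= dx / (c_max * sqrt(3)); the fastest medium governs.
dt_max <- dx / (max(c_abs, c_air) * sqrt(3))
stopifnot(dt <= dt_max)  # holds here: dt_max is roughly 103 ns > 60 ns
```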

Fig. 1.

(A) Perspective views of four virtual targets (square, rectangle, triangle, and circle) used for the 3D FDTD simulation. The targets were assumed to be made of ABS resin in the simulation. The “×” marks indicate the centers of the targets in the 2D surfaces. (B) An example of the geometrical configuration of the 3D FDTD simulation (left panel, top view; right panel, side view). The targets shown in (A) were installed in the air (white areas) surrounded by a PML (light gray areas) to simulate the echo impulse responses. A transmitter (light blue cross-marks) and two receivers (light blue circles) correspond to the positions of the mouth and ears of the 1/7 scaled miniature dummy head.

Figure 1(B) shows an example of the geometrical configuration of the 3D simulation. The transmitter was assumed to be a point sound source. The target was installed in the simulation space such that the X and Y coordinates of the point sound source [Fig. 1(B), light blue cross-marks] matched those of the center of the target in the XY plane [Fig. 1(B), black cross-marks], and the echo from the target was obtained at two omnidirectional receivers [Fig. 1(B), light blue circles]. The distance between the virtual target and the point sound source corresponded to the 1/7 scaled distance. The positional relationship between the transmitter and the two receivers corresponded to the mouth and ears of the 1/7 scaled miniature dummy head (MDH) used in our previous study (Sumiya et al., 2019). The total simulation space was surrounded by a perfectly matched layer (PML). The size of the simulation space was determined so that the distances from the target to the PML along the X, Y, and Z axes were the same among the targets [square, 16 (X) × 16 (Y) × 24.5 (Z) cm³; rectangle, 11 (X) × 26 (Y) × 24.5 (Z) cm³; triangle, 26 (X) × 26 (Y) × 24.5 (Z) cm³; circle, 17.2 (X) × 17.2 (Y) × 24.5 (Z) cm³].

To obtain the echo impulse responses from the targets, a sinc signal (duration, 0.1 ms) with a flat (±1 dB) frequency response up to 75 kHz was transmitted from the point sound source towards the center of the target in the XY plane [Fig. 1(B)]. The echo impulse responses at the left and right receivers were obtained from the whole impulse responses by removing the direct sounds from the transmitter. The echo impulse responses were then convolved with a downward linear frequency modulated (FM) signal (duration, 1 ms; frequency band, 7–20 kHz) to create the original virtual echoes. The original virtual echoes were pitch-converted by a factor of 1/7 to lower-frequency sounds (duration, 7 ms; frequency band, 1–2.9 kHz) by sample-rate conversion using the audio editing software so that they could be presented to the participants in the subsequent psychoacoustic experiment.
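For illustration, the following R sketch mimics this processing chain under assumed parameters: it synthesizes the 1-ms downward linear FM signal, convolves it with an echo impulse response, and performs the 1/7 pitch conversion by reinterpreting the sample rate. The sample rate, variable names, and the two-path toy impulse response are hypothetical stand-ins; the actual responses come from the FDTD simulation and the conversion was done in audio editing software.

```r
# Minimal sketch (assumed parameters), not the authors' actual pipeline.
library(tuneR)  # for Wave() and writeWave()

fs <- 196e3                         # assumed working sample rate [Hz], divisible by 7
T  <- 1e-3                          # FM signal duration: 1 ms
t  <- seq(0, T, by = 1/fs)
f0 <- 20e3; f1 <- 7e3               # downward sweep, 20 kHz -> 7 kHz
fm <- sin(2*pi*(f0*t + (f1 - f0)*t^2/(2*T)))   # downward linear FM (chirp)

# Hypothetical two-path impulse response standing in for the FDTD output
# (a direct surface reflection plus a weaker, later edge diffraction).
ir <- numeric(round(0.005*fs)); ir[c(200, 260)] <- c(1.0, 0.4)

echo <- convolve(fm, rev(ir), type = "open")   # standard linear convolution

# 1/7 pitch conversion: writing the samples at 1/7 of the sample rate
# stretches the duration 7x (1 ms -> 7 ms) and lowers all frequencies
# by a factor of 7 (7-20 kHz -> 1-2.9 kHz).
pcm <- round(32767 * echo / max(abs(echo)))
writeWave(Wave(left = pcm, samp.rate = fs/7, bit = 16), "virtual_echo.wav")
```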

The psychoacoustic experiment was conducted in a sound-attenuated chamber [2.3 m (H) × 1.6 m (L) × 1.4 m (W)] at Doshisha University. Because the main purpose of this study was to develop a new research method using acoustic simulations and psychoacoustic experiments, we conducted the experiment with sighted people who had no experience with echolocation, as in our previous study (Sumiya et al., 2019). Ten sighted people (four males) aged 21 to 23 years [mean ± standard deviation (SD); 22 ± 0.5] participated in the experiment. Standard pure-tone audiometry using an audiometer (AA-77A, RION, Tokyo, Japan) confirmed that the hearing levels of all participants were within 30 dB in the 0.25 to 3 kHz range. The psychoacoustic experiment with human participants was approved by the ethics committee of Doshisha University. After obtaining informed consent from the participants and conducting the hearing test, we began the psychoacoustic experiment.

Mimicking the sequences of sounds with continuous short inter-pulse intervals (IPIs), called feeding buzzes (Schnitzler and Kalko, 2001), that bats emit as they capture prey or approach a target, we created ten successive pitch-converted virtual echoes with an IPI of 35 ms for each sound stimulus. Sound stimuli were delivered to participants through headphones (MDR-CD900ST, Sony, Tokyo, Japan) in a three-interval two-alternative forced choice (3I-2AFC) task using stimulus presentation software (Presentation, Neurobehavioral Systems, Inc., Berkeley, CA). The time interval between sound stimuli was set at 400 ms. After listening to all three sound stimuli, the participants were asked to judge which stimulus (the first or the third) was identical to the second, and they indicated their answers by pressing keys on a numeric keypad.
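A minimal sketch of how one such stimulus could be assembled in R, with a toy tone (hypothetical names `echo_pc` and `fs_pc`) standing in for the pitch-converted virtual echo:

```r
# Minimal sketch under assumed names; the actual echoes come from the pipeline above.
fs_pc   <- 28e3                                       # assumed post-conversion sample rate [Hz]
echo_pc <- sin(2*pi*2000*seq(0, 7e-3, by = 1/fs_pc))  # 7-ms toy echo near 2 kHz

ipi  <- round(0.035 * fs_pc)                    # 35-ms inter-pulse interval in samples
n    <- 10                                      # ten successive echoes per stimulus
stim <- numeric((n - 1) * ipi + length(echo_pc))
for (k in 0:(n - 1)) {                          # overlap-add one echo at each onset
  idx <- k * ipi + seq_along(echo_pc)
  stim[idx] <- stim[idx] + echo_pc
}
```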

The participants were trained using only two targets (rectangle and circle) in the training sessions (with answer feedback). In the test sessions (without answer feedback) conducted after the training sessions, the participants were required to discriminate among all four targets. Because two randomly selected sound stimuli were presented in the 3I-2AFC task, there were four presentation patterns for each target pair (e.g., square-square-rectangle, square-rectangle-rectangle, rectangle-rectangle-square, and rectangle-square-square). There were 84 training trials (1 target pair × 4 presentation patterns × 7 repetitions × 3 sessions) in the training sessions and 144 test trials (6 target pairs × 4 presentation patterns × 2 repetitions × 3 sessions) in the test sessions. The participants were allowed to open their eyes only to check whether their answer was correct in each training trial, whereas they were blindfolded with an eye mask during the test trials.

After the experiment, the participants answered a questionnaire (in Japanese) that asked them which acoustic cues were used for the 2D shape discrimination. The participants were asked to choose from three options: (1) timbre or pitch, (2) intensity, or (3) others. Multiple answers were accepted.

Using R (version 3.6.1), we built generalized linear mixed effect models (GLMMs) assuming a binomial error distribution and a logit link to determine which acoustic feature of the pitch-converted virtual echoes (amplitude waveform envelope, energy spectral density (ESD), spectrogram, or intensity) was most influential on the 2D shape discrimination performance of the participants. The number of correct answers relative to the number of false answers for each target pair was treated as the response variable; each GLMM thus used 60 data points (6 target pairs × 10 participants). The acoustic similarity measures for the four acoustic parameters (amplitude waveform envelope, ESD, spectrogram, and intensity) of the echoes in each target pair were used as explanatory variables. Because most of the explanatory variables were multicollinear (Pearson's correlation coefficients exceeded 0.5), we used only one explanatory variable per model. The acoustic similarity of the amplitude waveform envelope, ESD, and spectrogram was defined as the peak value of the cross correlation, and the acoustic similarity of intensity was defined as the absolute difference in ITU-R BS.1770-3 loudness (LUFS). These similarity measures were calculated from the pitch-converted virtual echoes at the left-side receivers. The participant's ID was added as a random factor to adjust the random intercept estimates [i.e., (1|ID)]. Because the values of the explanatory variables were the same across participants (all participants listened to the same sound stimuli), we also included the respective explanatory variable as a random intercept effect in the model, and because the slopes of the explanatory variables could differ across participants, we added the respective explanatory variable as a random slope on the random intercept of the participant's ID. As a result, the following random effect structure was used: [(explanatory variable|ID) + (1|explanatory variable)]. All estimation calculations for the GLMMs were conducted using the function “glmer()” (package lme4, version 1.1-23, Bates et al., 2012). The model fit was checked by graphically examining the model residuals using the function “binnedplot” (package arm, version 1.11-1, Gelman et al., 2020). The overall model significance was tested by comparing each single model to the null model based on Wald χ² tests using the function “anova” (package RVAideMemoire, version 0.9-77, Hervé, 2020). To identify the explanatory variable that explained the most variance in the data, we compared all models with one another using the Akaike information criterion modified for small sample sizes (AICc) and the function “model.sel()” (package MuMIn, version 1.43.17, Barton, 2020).
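For illustration, a minimal R sketch of one such model follows, assuming a data frame `d` with hypothetical column names: `n_correct` and `n_false` per target pair and participant, the participant `ID`, and one similarity measure per model (here `esd_sim`, the peak cross-correlation of the ESDs). The same construction would be repeated for the other three acoustic parameters.

```r
# Minimal sketch under assumed column names; not the authors' actual script.
library(lme4)    # glmer()
library(MuMIn)   # model.sel(), which ranks by AICc by default

# Null model and one single-predictor model with the random effect
# structure described in the text: (explanatory variable | ID) + (1 | explanatory variable).
m_null <- glmer(cbind(n_correct, n_false) ~ 1 + (1 | ID),
                family = binomial(link = "logit"), data = d)
m_esd  <- glmer(cbind(n_correct, n_false) ~ esd_sim +
                  (esd_sim | ID) + (1 | esd_sim),
                family = binomial(link = "logit"), data = d)

anova(m_esd, m_null)      # overall significance vs the null model
                          # (a likelihood-ratio test here; the paper reports Wald chi-squared tests)
model.sel(m_esd, m_null)  # AICc-based model comparison
```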

We also used GLMM analyses to compare the shape discrimination performance of the participants with chance. The experimental condition (test and chance) was treated as the explanatory variable in the GLMMs. The number of correct answers relative to the number of false answers for each target pair was treated as the response variable under the test condition. Because the chance level was 50% in the experiment, the number of correct answers under the chance condition was assumed to be half of the number of trials for each target pair (i.e., 12 out of 24 answers were assumed to be correct for each target pair). The participant's ID was added as a random factor to adjust the random intercept estimates [i.e., (1|ID)]. The estimation calculations for the GLMMs were conducted using the procedure described in the previous paragraph.
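A sketch of this test-versus-chance comparison under the same assumed column names; the chance condition fixes 12 of 24 answers correct for each target pair, as described above:

```r
# Minimal sketch under assumed column names; not the authors' actual script.
library(lme4)

d_test   <- transform(d, condition = "test")
d_chance <- transform(d, n_correct = 12, n_false = 12, condition = "chance")
d_both   <- rbind(d_test, d_chance)

# Fit separately for each target pair (subset d_both accordingly) to obtain
# the per-pair beta, z, and p values reported in the results.
m_cond <- glmer(cbind(n_correct, n_false) ~ condition + (1 | ID),
                family = binomial(link = "logit"), data = d_both)
summary(m_cond)
```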

Table 1 shows that the average percentage of correct answers for the square and rectangle pair was the highest [mean ± standard error (SE); 95.4 ± 1.3%] of all pairs, even though the square's echo was not included in the training pair (i.e., the rectangle and circle). Participants did not discriminate between the square and circle at a rate higher than chance (β = 0.009 ± 0.184, z = 0.047, p = 0.963), whereas the other target pairs were well discriminated (square and rectangle, β = 3.027 ± 0.335, z = 9.046, p < 0.001; square and triangle, β = 1.255 ± 0.205, z = 6.129, p < 0.001; rectangle and triangle, β = 2.704 ± 0.296, z = 9.124, p < 0.001; rectangle and circle, β = 2.847 ± 0.313, z = 9.091, p < 0.001; triangle and circle, β = 2.070 ± 0.243, z = 8.506, p < 0.001). In the questionnaire, only one participant reported using the intensity cue to discriminate the shapes, whereas most participants (9/10) reported using the timbre or pitch cue.

Table 1.

Average percentage of correct answers and standard error for each target pair across all ten participants during the test sessions.

N = 10 participants

            Square   Rectangle    Triangle     Circle
Square               95.4 ± 1.3   77.3 ± 6.0   50.2 ± 2.5
Rectangle                         93.7 ± 1.2   94.5 ± 1.7
Triangle                                       88.7 ± 4.2
Circle

Values are given in %.

As shown in Fig. 2, diffraction waves from the target edges (white open arrows) were observed, and these overlapped with the direct waves from the target surfaces at timings that differed among the targets. The amplitude waveforms (rightmost panels in Fig. 2) and spectrograms (see supplementary material1) of the original virtual echoes created by convolving the simulated echo impulse responses with the FM signal showed different patterns among the targets. The square and circle, which yielded the lowest discrimination performance in the psychoacoustic experiment (50.2 ± 2.5%), showed similar patterns in the amplitude envelopes of the original virtual echoes (Fig. 2) and in the power spectra of the echo impulse responses at frequencies below 15 kHz (corresponding to approximately 2 kHz in the audible virtual echoes) [Fig. 3(A)]. The power spectra of the echo impulse responses from the triangle, which had a left-right asymmetric shape [Fig. 1(A)], differed between the right and left receivers [Fig. 3(B)].
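The power spectra in Fig. 3 were calculated by a fast Fourier transform; a minimal R sketch of this computation follows, reusing the hypothetical two-path toy impulse response from the earlier sketch in place of the actual FDTD output.

```r
# Minimal sketch (assumed sample rate, toy impulse response).
fs <- 196e3
ir <- numeric(980); ir[c(200, 260)] <- c(1.0, 0.4)  # toy two-path impulse response

n    <- length(ir)
spec <- abs(fft(ir))^2 / n     # power spectrum via FFT
freq <- (0:(n - 1)) * fs / n   # frequency axis [Hz]
keep <- freq <= fs / 2         # one-sided spectrum
plot(freq[keep] / 1e3, 10 * log10(spec[keep] + 1e-12),
     type = "l", xlab = "Frequency (kHz)", ylab = "Power (dB)")
```

With two interfering paths, the spectrum shows the periodic notches that arise from the arrival-time difference between the surface reflection and the edge diffraction, which is the interference mechanism discussed below.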

Fig. 2.

(A) Examples of the screenshots of the simulated wave distribution of the echo impulse responses from the four targets (left panel, top view; right panel, side view). The Y coordinates in the top views correspond to each Y coordinate of the transmitter, and the X coordinates in the side views correspond to each X coordinate of the transmitter. White open arrows indicate the diffraction waves from the target edges. (B) Amplitude waveforms of original virtual echoes created by convolving the simulated echo impulse responses with the downward linear frequency-modulated signal.

Fig. 3.

(A) Power spectra of the echo impulse responses from the four targets simulated at each left receiver. (B) Power spectra of the echo impulse responses from the triangle simulated at the left and right receivers. The power spectra of these echo impulse responses were calculated by a fast Fourier transform.


The GLMM analysis suggested that the participants were less likely to answer correctly when listening to pitch-converted virtual echoes with similar amplitude envelopes (model 1), ESDs (model 2), and spectrograms (model 3) (Table 2). Combined with the questionnaire results, this suggests that the participants may have perceived the echo differences in amplitude envelope, ESD, and spectrogram as differences in timbre and pitch. Moreover, the models with the degree of similarity of the ESD (model 2) and spectrogram (model 3) as explanatory variables predicted the data more precisely than the other models, as indicated by their lower AICc values. Although the participants were also less likely to answer correctly when listening to pitch-converted virtual echoes with similar intensities (model 4), the AICc of this model was the highest of all four models. Nevertheless, all models explained more variance than the null model. The effect plots for the GLMMs are shown in the supplementary material.1

Table 2.

Summary of the generalized linear mixed effect model statistics used to examine effective acoustic cues for 2D shape discrimination.

N = 60 (6 target pairs × 10 participants)

Model   Fixed effects              Estimate    SE      z        P           AICc
1       Envelope                   −11.295     2.094   −5.393   <0.001***   259.2
2       Energy spectral density     −4.792     1.204   −3.978   <0.001***   255.8
3       Spectrogram                 −7.219     1.287   −5.609   <0.001***   257.2
4       Intensity                    1.796     0.905    1.983   <0.05*      262.4

In this study, we developed a new research method using acoustic simulations and psychoacoustic experiments for human echolocation research. Acoustic simulations enable researchers to freely manipulate various types of objects as reflectors in the simulation space without any experimental artifacts. Moreover, the visualization of sound propagation is useful for quantitatively analyzing the relationship between echolocation performance and echo characteristics. Furthermore, human echolocation research can be performed efficiently by adapting the proposed method to the specific research purpose. In this study, we applied this technique to generate virtual echoes and analyzed the shape discrimination results. The 2D shape discrimination performance varied with the acoustic similarity of the virtual echoes (Table 1, Figs. 2 and 3). The GLMM analysis, based on the AICc values, indicates that focusing on echo differences in timbre and pitch (i.e., ESD and spectrogram) is effective for 2D shape discrimination, which is consistent with the questionnaire results. In this way, the acoustic simulation approach could efficiently optimize signal design for other tasks, such as texture, distance, and motion discrimination.

In previous research on shape identification without prior training, only blind expert echolocators were reportedly able to successfully identify 2D shapes (i.e., a square, a triangle, and rectangles oriented horizontally and vertically), and only when they could freely move their heads and/or bodies while sensing with self-produced sounds such as mouth clicks (Milne et al., 2014). Interestingly, that study reported that blind expert echolocators obtained the “overall best impression of a shape” at a certain distance from the target (i.e., 80 cm); under the condition where participants had to remain still, it became difficult to collect the target's edge information from the echo if they came too close to the target. It was proposed that blind expert echolocators use the edge information contained in echoes as an acoustic cue for 2D shape identification. This cue might be related to the diffraction wave from the edge, which can be confirmed by visualizing the sound wave distribution in the FDTD simulation (Fig. 2). The figure shows that the spectral and amplitude patterns change owing to sound interference arising from the time difference of arrival between the direct sound from the target surface and the diffraction wave from the edge. This indicates that the interference with the diffraction wave is dynamically altered when the echolocator actively moves their head and body, so that the echo might contain rich information on the 2D shape.

The average percentage of correct answers for the square and circle pair (50.2 ± 2.5%) was almost the same as the chance level (Table 1), even though the echoes of the square and circle showed markedly different spectral notch patterns in the high-frequency range of the original virtual echoes [>15 kHz, corresponding to approximately 2 kHz in the audible virtual echoes, Fig. 3(A)]. To improve 2D shape discrimination performance, it is therefore important to appropriately control the frequency band and time-frequency structure of the original sensing signal, as well as the pitch conversion rate, according to the type and size of the target to be detected. Note that the shape discrimination task used in our experiment is easier than the shape identification task used in previous human echolocation research (Milne et al., 2014), because our participants could answer simply by judging whether the first and second (or the second and third) sound stimuli were the same. In contrast, the participants in the previous study could not compare echoes among the targets and had to report which target was presented by sensing with self-produced sounds. Therefore, the current results do not imply that the signal design used here for shape discrimination is also the most effective for shape identification. We will need to conduct shape identification experiments in future studies to examine effective sensing strategies for shape recognition in human echolocation. Moreover, listening to binaural echoes convolved with head-related transfer functions is also important for human echolocation. In our previous studies, we proposed a binaural recording system using a miniature dummy head (Uchibori et al., 2015) and conducted psychoacoustic experiments using pitch-converted ultrasonic binaural echoes measured with the MDH (Sumiya et al., 2019). In future studies, we will also conduct acoustic simulations using the MDH, enabling psychoacoustic experiments with pitch-converted ultrasonic binaural echoes calculated through acoustic simulations.

However, the 3D FDTD simulation used in this study required a long computation time because of the heavy computational load associated with high-frequency sounds. If the calculation time can be reduced, various human echolocation experimental systems could readily be developed in virtual reality spaces in further studies. Human listening experiments using dolphin echolocation signals have been conducted to examine the echo acoustic features used by dolphins for object recognition (Au and Martin, 1989; DeLong et al., 2007; DeLong, 2016). Because human subjects can verbally communicate with experimenters, questionnaires are an effective means of examining which echo acoustic features are used as acoustic cues. The combination of acoustic simulations and psychoacoustic experiments will provide useful knowledge for studying sensing strategies in human echolocation. Moreover, by following the research method proposed in dolphin biosonar research, we may also gain insights into the sensing strategies of other species through future comparative listening experiments.

We would like to thank Dr. Olga Heim for valuable advice on statistical analysis. We would also like to thank Yu Teshima for a valuable discussion of this study. This work was supported by JSPS KAKENHI Grant Nos. JP 18H03786, 16H06542 to S.H., and JP18J01429 to M.S. Finally, we would like to thank the two anonymous reviewers for their valuable comments and helpful suggestions.

1. See supplementary material at https://www.scitation.org/doi/suppl/10.1121/10.0003194 for the material properties used for the acoustic simulations, the spectrograms of the original virtual echoes, and the effect plots for the generalized linear mixed effect models.

1. Au, W. W., and Martin, D. W. (1989). “Insights into dolphin sonar discrimination capabilities from human listening experiments,” J. Acoust. Soc. Am. 86, 1662–1670.
2. Barton, K. (2020). “Package ‘MuMIn,’” https://CRAN.R-project.org/package=MuMIn (Last viewed July 22, 2020).
3. Bates, D., Maechler, M., Bolker, B., Walker, S., Christensen, R. H. B., Singmann, H., Dai, B., Scheipl, F., Grothendieck, G., Green, P., and Fox, J. (2012). “Package ‘lme4,’” https://CRAN.R-project.org/package=lme4 (Last viewed July 22, 2020).
4. Bossy, E., Talmant, M., and Laugier, P. (2004). “Three-dimensional simulations of ultrasonic axial transmission velocity measurement on cortical bone models,” J. Acoust. Soc. Am. 115, 2314–2324.
5. DeLong, C. M. (2016). “Human listening experiments provide insight into cetacean auditory perception,” Proc. Mtgs. Acoust. 29, 010001.
6. DeLong, C. M., Au, W. W., and Stamper, S. A. (2007). “Echo features used by human listeners to discriminate among objects that vary in material or wall thickness: Implications for echolocating dolphins,” J. Acoust. Soc. Am. 121, 605–617.
7. Gelman, A., Su, Y.-S., Yajima, M., Hill, J., Pittau, M. G., Kerman, J., Zheng, T., and Dorie, V. (2020). “Package ‘arm,’” https://CRAN.R-project.org/package=arm (Last viewed July 22, 2020).
8. Hervé, M. (2020). “Package ‘RVAideMemoire,’” https://CRAN.R-project.org/package=RVAideMemoire (Last viewed July 22, 2020).
9. Kolarik, A. J., Cirstea, S., Pardhan, S., and Moore, B. C. (2014). “A summary of research investigating echolocation abilities of blind and sighted humans,” Hear. Res. 310, 60–68.
10. Milne, J. L., Goodale, M. A., and Thaler, L. (2014). “The role of head movements in the discrimination of 2-D shape by blind echolocation experts,” Atten. Percept. Psychophys. 76, 1828–1837.
11. Mizuno, K., Nagatani, Y., Yamashita, K., and Matsukawa, M. (2011). “Propagation of two longitudinal waves in a cancellous bone with the closed pore boundary,” J. Acoust. Soc. Am. 130, EL122–EL127.
12. Moss, C. F., and Surlykke, A. (2010). “Probing the natural scene by echolocation in bats,” Front. Behav. Neurosci. 4, 33.
13. Nagatani, Y., Guipieri, S., Nguyen, V.-H., Chappard, C., Geiger, D., Naili, S., and Haïat, G. (2017). “Three-dimensional simulation of quantitative ultrasound in cancellous bone using the echographic response of a metallic pin,” Ultrason. Imag. 39, 295–312.
14. Nagatani, Y., Mizuno, K., Saeki, T., Matsukawa, M., Sakaguchi, T., and Hosoi, H. (2008). “Numerical and experimental study on the wave attenuation in bone – FDTD simulation of ultrasound propagation in cancellous bone,” Ultrasonics 48, 607–612.
15. Nagatani, Y., Mizuno, K., Saeki, T., Matsukawa, M., Sakaguchi, T., and Hosoi, H. (2009). “Propagation of fast and slow waves in cancellous bone: Comparative study of simulation and experiment,” Acoust. Sci. Technol. 30, 257–264.
16. Padilla, F., Bossy, E., Haiat, G., Jenson, F., and Laugier, P. (2006). “Numerical simulation of wave propagation in cancellous bone,” Ultrasonics 44, e239–e243.
17. Schnitzler, H.-U., and Kalko, E. K. V. (2001). “Echolocation by insect-eating bats,” Bioscience 51, 557–569.
18. Sumiya, M., Ashihara, K., Yoshino, K., Gogami, M., Nagatani, Y., Kobayasi, K. I., Watanabe, Y., and Hiryu, S. (2019). “Bat-inspired signal design for target discrimination in human echolocation,” J. Acoust. Soc. Am. 145, 2221–2236.
19. Thaler, L., and Goodale, M. A. (2016). “Echolocation in humans: An overview,” WIREs Cogn. Sci. 7, 382–393.
20. Uchibori, S., Sarumaru, Y., Ashihara, K., Ohta, T., and Hiryu, S. (2015). “Experimental evaluation of binaural recording system using a miniature dummy head,” Acoust. Sci. Technol. 36, 42–45.
