Recalibration is a learning process in which perceptual boundaries between speech-sounds adjust through exposure to a supplementary source of information. Using a dichotic-listening methodology, the experiments reported here establish interaural recalibration—in which an ambiguous speech sound in one ear is recalibrated on the basis of a clear sound presented to the other ear. This demonstrates a previously unknown form of recalibration and shows that location-specific recalibration occurs even when people are unaware of location differences between the sounds involved.

Speech is a highly variable signal: the pronunciation of the sounds of a language varies from region to region and from person to person. As a result, it is necessary for the perceptual system to be somewhat elastic in its categorizing of speech sounds. This elasticity can be demonstrated by means of recalibration. Recalibration consists of an adjustment of the perceptual boundaries between speech sounds because of exposure to supplementary information. For example, in the initial demonstration of recalibration (Bertelson et al., 2003), it was shown that if a sound whose identity is ambiguous between /b/ and /d/ is paired with video of a face pronouncing /b/, people perceive the sound as /b/ (and contrariwise for a video of /d/); after multiple exposures to this supplementary visual information, people's perception adjusts and they continue to hear the ambiguous sound in line with the exposure even after the visual information is removed. The key to recalibration is that some source of supplementary information allows the perceiver to learn into which category the ambiguous speech sound should be categorized. It is not just visual information that can provide the necessary disambiguating information; several different sources of information have been shown to induce this effect: visual (Bertelson et al., 2003), lexical (Norris et al., 2003), reading (Keetels et al., 2016a), and speech imagery (Scott, 2016). The experiments reported below are a proof of effect demonstrating a new inducer of recalibration, interaural recalibration, in which information from the left ear guides recalibration of ambiguous sounds presented to the right ear and vice versa.

There is a discussion within the literature as to the level(s) of processing at which recalibration operates: that is, whether recalibration demonstrates perceptual learning about phonemes, allophones, some other subphonemic unit, or even lower-level non-linguistic aspects of sound (Bowers et al., 2016; Mitterer and Reinisch, 2013; Reinisch et al., 2014). One piece of evidence to suggest a peripheral locus is that recalibration (at least when induced through visual information) can be ear-specific (Keetels et al., 2015, 2016b). That is, an ambiguous sound can be recalibrated to be heard as one speech category in the left ear, but simultaneously heard as a different category in the right ear. In these studies, sounds ambiguous between /b/ and /d/ were presented to either the left or right ear while participants viewed a face pronouncing either /b/ or /d/. After exposure to this audiovisual training, participants showed an ear-specific recalibration, continuing to hear the ambiguous sound as /b/ in whichever ear had been trained to hear it as /b/ during audiovisual exposure, and contrariwise for /d/. That location-coding is a part of audiovisual recalibration suggests that the aspects of the sound being learned about are pre-linguistic (or "acoustic"), rather than phonological, as it is assumed that factors such as location are abstracted away at linguistic levels of analysis (Keetels et al., 2016b).

The current experiment extends this ear-specific recalibration to an extreme level in which not only is recalibration ear-specific, but participants are not even consciously aware of which ear is hearing which sound (this lack of awareness is established in experiment two). Participants were presented with a sound ambiguous between /ibih/ and /idih/ in one ear while simultaneously hearing a clear token of /ibih/ or /idih/ in the other ear. These sounds shared identical prosodic properties and so were perceived as a single fused sound whose identity matched that of the clear token (this is also confirmed in experiment two). After multiple exposures to such fused sounds, in a test phase participants categorized tokens of the ambiguous sound presented to either their left or right ear. Participants showed recalibration in the ear that heard the ambiguous sound during training, but no effect in the other ear. This is thus ear-specific recalibration in which the information necessary to induce recalibration in the left ear comes from the right ear and vice versa. That ear-specific recalibration can be this extreme supports the view that recalibration, at least in experiment designs such as this, targets low-level, possibly pre-linguistic, sound representations.

The experiment was a typical psychophysical recalibration experiment using the same materials and general procedure as in Scott (2016), in which recalibration was induced via speech imagery. The current experiment consisted of multiple exposure phases, in which participants heard the recalibration-inducing fused sounds, interleaved with test-phases in which participants were tested only on monaural ambiguous sounds.

2.1.1 Participants

There were 32 participants. All were Arabic-speaking and female. The average age was 21.56 years [standard deviation (sd) 2.03]. Participants were paid or given course credit for their participation.

2.1.2 Stimuli

Fused dichotic stimuli were created in which one audio channel was a sound ambiguous between the nonsense disyllables /ibih/ and /idih/, while the other channel contained a clear /ibih/ or /idih/ token. The sounds were taken from a 10 000-step /ibih/-/idih/ continuum created with STRAIGHT (Kawahara et al., 1999), using naturally recorded tokens from a native speaker of Arabic. The prosody of all steps of the continuum was identical, so the two sounds fused into a single unambiguous percept whose perceived identity was that of the clear token, whichever ear heard it. Furthermore, participants were unable to distinguish which ear received the clear token and which the ambiguous sound. Both of these facts were established by a separate control experiment (experiment two). The structure of these dichotic stimuli is represented in Fig. 1.¹
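As a concrete illustration of how such a stimulus can be assembled, the sketch below pairs an ambiguous continuum step with a clear token in opposite channels of a stereo file. It is a minimal sketch rather than the authors' actual pipeline: the continuum resynthesis (done here with STRAIGHT) is assumed to have already produced mono tokens of matching length, and the file names and the soundfile dependency are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): place an ambiguous continuum step and a
# clear token in opposite channels of a stereo file. Assumes the mono tokens have
# already been resynthesized (e.g., with STRAIGHT) and share identical prosody/duration.
import numpy as np
import soundfile as sf  # assumed dependency for WAV I/O

def make_dichotic(ambiguous_wav, clear_wav, clear_ear, out_wav):
    amb, sr1 = sf.read(ambiguous_wav)
    clr, sr2 = sf.read(clear_wav)
    assert sr1 == sr2 and len(amb) == len(clr), "tokens must match in rate and length"
    # Column 0 = left channel, column 1 = right channel.
    if clear_ear == "right":
        stereo = np.column_stack([amb, clr])
    else:
        stereo = np.column_stack([clr, amb])
    sf.write(out_wav, stereo, sr1)

# Example: clear /idih/ in the right ear, ambiguous token in the left ear.
# make_dichotic("ambig_step_5000.wav", "clear_idih.wav", "right", "fused_idih_R.wav")
```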

Fig. 1. (Color online) Fused dichotic stimuli.

The maximally ambiguous token of the continuum (the point on the continuum where the participant is estimated to hear the sound 50% of the time as /idih/) was calculated for each participant via a staircase procedure (Cornsweet, 1962). The procedure consisted of two interleaved staircases with random switching, one starting at point 2400 on the continuum, the other at point 7600. There were 11 reversals with decreasing step sizes of 1250, 700, 400, 250, 100, 50, 25, 10, 5, 2, and 1. The estimated perceptual boundary for each participant was calculated using a logistic regression over all of the data from her two staircases.
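A minimal sketch of this boundary-estimation logic is given below. It is not the authors' code: the response-collection function is a placeholder, the two staircases are run to completion rather than randomly interleaved trial by trial, and the assumption that higher continuum steps sound more /idih/-like is illustrative.

```python
# Sketch of the boundary-estimation procedure (placeholder functions, not the authors' code).
# A 1-up/1-down staircase converges on the 50% point; the step size shrinks after each of
# 11 reversals; a logistic fit over all trials gives the participant's boundary.
import random
import numpy as np
from sklearn.linear_model import LogisticRegression

STEP_SIZES = [1250, 700, 400, 250, 100, 50, 25, 10, 5, 2, 1]

def run_staircase(start, get_response, trials):
    """get_response(step) -> 1 if the listener reports /idih/, else 0 (placeholder)."""
    step, reversals, last_resp = start, 0, None
    while reversals < len(STEP_SIZES):
        resp = get_response(step)
        trials.append((step, resp))
        if last_resp is not None and resp != last_resp:
            reversals += 1                      # each reversal moves to the next step size
            if reversals == len(STEP_SIZES):
                break
        # Assumes higher continuum steps sound more /idih/-like, so /idih/ responses
        # move the presented step downward and /ibih/ responses move it upward.
        step += -STEP_SIZES[reversals] if resp == 1 else STEP_SIZES[reversals]
        step = int(np.clip(step, 1, 10000))
        last_resp = resp

def estimate_boundary(get_response):
    trials = []
    for start in random.sample([2400, 7600], 2):   # two staircases, order randomized
        run_staircase(start, get_response, trials)
    x = np.array([[s] for s, _ in trials])
    y = np.array([r for _, r in trials])
    fit = LogisticRegression(C=1e6).fit(x, y)      # large C ~ unregularized logistic fit
    return -fit.intercept_[0] / fit.coef_[0][0]    # continuum step where P(/idih/) = 0.5
```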

2.1.3 Procedure

There were four conditions: 2 clear sounds (/ibih/ or /idih/) by 2 ears (left or right). Each of these four conditions [(1) Clear /ibih/ Left, (2) Clear /idih/ Left, (3) Clear /ibih/ Right, (4) Clear /idih/ Right] was presented to each participant 8 times, for a total of 32 blocks (presented in random order). Each block consisted of an exposure phase followed by a test phase. In the exposure phase, participants were presented with 10 repetitions of one of the fused dichotic sounds [1.05 s interstimulus interval (ISI)]. There was then a 2.25 s pause, after which participants categorized the three ambiguous sounds (the 48%, 50%, and 52% points on the continuum), presented once each to each ear in random order (so 6 categorizations per block). The ear that received the clear sound was consistent within an exposure block but counterbalanced across exposure blocks. A diagram of an example block from the procedure is shown in Fig. 2 (this shows only one of the four conditions: Clear /idih/ Right).
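To make the block structure explicit, the sketch below lays out one exposure-plus-test block under the timing given above. The play, categorize, and pause routines and the file-naming scheme are placeholders, not the authors' PsychoPy script.

```python
# Schematic of one block of the procedure (placeholder functions, not the authors' script).
# Exposure: 10 repetitions of one fused dichotic sound (1.05 s ISI), then a 2.25 s pause.
# Test: the 48%, 50%, and 52% continuum points, once per ear, in random order.
import random

def run_block(clear_sound, clear_ear, play, categorize, pause):
    fused = "fused_%s_clear-%s.wav" % (clear_sound, clear_ear)  # hypothetical filename
    for _ in range(10):                          # exposure phase
        play(fused)
        pause(1.05)
    pause(2.25)
    test_trials = [(point, ear) for point in (48, 50, 52) for ear in ("left", "right")]
    random.shuffle(test_trials)                  # 6 categorizations per block
    return [(point, ear, categorize(point, ear)) for point, ear in test_trials]

# The 32 blocks cross clear sound (ibih/idih) and exposure ear (left/right), 8 times each:
# blocks = [(s, e) for s in ("ibih", "idih") for e in ("left", "right")] * 8
# random.shuffle(blocks)
```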

Fig. 2. (Color online) Experiment one design.

The experiment was run on the Psychopy platform (Peirce, 2007) in the phonetics laboratory at Qatar University. Audio stimuli were presented at a comfortable listening level over Extreme Isolation closed-back headphones.

A repeated measures analysis of variance (ANOVA), using the “ez” package (Lawrence, 2013) within R (R Core Team, 2014), found an interaction of Exposure Ear (ear which heard the clear sound during Exposure) by Target Ear (ear which heard the target sounds during test) by Clear Sound (which sound—/ibih/ or /idih/—was being induced through training in the exposure phase) [F(1,31) = 18.24, p < 0.001]. The data were split by Exposure Ear. Follow-up ANOVAs found an interaction of Target Ear by Clear Sound for both Exposure Ears [Right Ear Exposure, F(1,31) = 6.73, p = 0.014], [Left Ear Exposure, F(1,31) = 5.679, p = 0.023]. Planned t-tests showed participants perceived more /idih/ in the left ear when a clear /idih/ had been presented to the right ear during exposure, in comparison to when the clear sound had been /ibih/ [t = 2.813, df = 31, p = 0.008]. The same significant effect was found for testing the right ear when /idih/ had been presented to the left ear in the exposure phase [t = 2.757, df = 31, p = 0.009]. There was no such recalibration for either ear when the target ear in the test-phase had heard the clear sound during exposure (Right Ear [t = 0.115, df = 31, p = 0.909]; Left Ear [t = 0.742, df = 31, p = 0.464]). These results are shown in Fig. 3 and are summarized in Table 1.
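The original analysis was run with the ez package in R; purely as an illustration of the analysis structure, a rough Python equivalent is sketched below using statsmodels' repeated-measures ANOVA and a paired t-test for one of the planned comparisons. The data-frame layout and column names are assumptions, not the authors' data format.

```python
# Illustrative re-analysis structure in Python (the original analysis used the ez package in R).
# Assumes a long-format DataFrame `df` with one row per participant x cell, and columns
# (hypothetical names): subject, exposure_ear, target_ear, clear_sound, pct_idih.
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def analyze(df):
    # 2 x 2 x 2 repeated-measures ANOVA: Exposure Ear x Target Ear x Clear Sound.
    anova = AnovaRM(df, depvar="pct_idih", subject="subject",
                    within=["exposure_ear", "target_ear", "clear_sound"]).fit()
    print(anova)

    # Planned comparison: left-ear test trials after right-ear exposure,
    # clear /idih/ vs. clear /ibih/, paired by participant.
    cell = df[(df.exposure_ear == "right") & (df.target_ear == "left")]
    wide = cell.pivot(index="subject", columns="clear_sound", values="pct_idih")
    t, p = stats.ttest_rel(wide["idih"], wide["ibih"])
    print("t = %.3f, p = %.3f" % (t, p))
```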

Fig. 3. Experiment one results: The data are scored as % of targets categorized as /idih/. Standard error bars are shown. These error bars were calculated using the Cousineau (2005) and Morey (2008) methods to adjust for repeated-measures design. N.B. The y axis is truncated.
Table 1. Summary of results: numbers represent % of test trials heard as /idih/, with sd in brackets. Pairs of values with a significant difference indicating recalibration are marked with an asterisk.

                        Exposure Sound Left Ear             Exposure Sound Right Ear
Exposure Sound:         /ibih/            /idih/            /ibih/            /idih/
Test Sound Left Ear     55.21% (28.49)    57.23% (26.38)    51.89% (26.26)*   58.98% (24.43)*
Test Sound Right Ear    49.67% (30.9)*    57.94% (26.74)*   55.73% (28.7)     55.47% (29.96)

While the percentage-based data were sufficiently normally distributed for ANOVA to give accurate results, these data were derived from binary-choice trials (whether the participant heard /ibih/ or /idih/). Thus, out of an abundance of caution, a generalized linear mixed-effects model was built using the lme4 package (Bates et al., 2015) in R. The model used a logit link function and tested the three-way interaction of Clear Sound by Exposure Ear by Target Ear along with all two-way interactions and main effects. The two effects of interest (Clear Sound and the three-way interaction of Clear Sound by Exposure Ear by Target Ear) were included in the random-effects structure along with participant (with random slopes and intercepts). Both the effect of Clear Sound and the three-way interaction of Clear Sound by Exposure Ear by Target Ear were significant (p < 0.01). The two-way interactions of Clear Sound by Exposure Ear and Clear Sound by Target Ear were also significant (p < 0.001).
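The model itself was fitted with lme4 in R. As a rough, illustrative stand-in rather than the authors' analysis, the Python sketch below uses a two-stage approach: a per-participant logistic regression on the binary responses, followed by a test of the key three-way interaction coefficient across participants. The column names and the ±0.5 predictor coding are assumptions.

```python
# Two-stage approximation to the mixed logistic model (not the authors' lme4 analysis).
# Assumes a trial-level DataFrame with hypothetical columns: subject, clear_sound,
# exposure_ear, target_ear (each coded +/-0.5), and heard_idih (0/1).
import numpy as np
import statsmodels.formula.api as smf
from scipy import stats

def three_way_interaction_test(trials):
    coefs = []
    for _, sub in trials.groupby("subject"):
        # Per-participant logistic regression with all main effects and interactions.
        fit = smf.logit("heard_idih ~ clear_sound * exposure_ear * target_ear",
                        data=sub).fit(disp=0)
        coefs.append(fit.params["clear_sound:exposure_ear:target_ear"])
    # One-sample t-test: is the three-way interaction reliably non-zero across participants?
    # (In practice the original GLMM handles sparse cells more gracefully than this sketch.)
    return stats.ttest_1samp(np.array(coefs), 0.0)
```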

When the ambiguous sounds of the test continuum were presented to the same ear that had been exposed to the ambiguous sound during exposure, recalibration occurred; that is, participants continued to categorize the sound in line with the category they had perceived during the exposure phase. When the ambiguous sound was presented to the ear that had heard the clear sound during exposure, no such recalibration was found. This demonstrates that the right ear can have its speech boundaries recalibrated by information coming from the left ear and vice versa, and constitutes a new form of recalibration. It should be noted that the shift in perception in this study was quite small, smaller than the recalibration produced by other inducers such as vision and lexical information. It should also be noted that the recalibration found here appears to be carried by the /idih/ exposure sound, since exposure to /ibih/ does not lower the percentage of /idih/ reported at test below 50%. However, not too much weight can be placed on this observation: the shifts are small, and the staircase method provides only an estimate of the perceptual boundary, so if the estimate were off by a small amount this observation would be moot.

Multiple repetitions of an unambiguous sound typically induce adaptation: ambiguous tokens presented afterward are less likely to be categorized as the same category as the clear sound (Eimas and Corbit, 1973). The fact that there was no significant adaptation in the ear which heard the clear sound during the exposure phase suggests that some information from the ambiguous sound presented to the opposite ear mitigates the adaptation that would normally occur.

This new form of recalibration is important for two reasons. First, it establishes another inducer of recalibration and thus extends our knowledge of what sources of information are sufficient to induce recalibration. Second, this recalibration takes the ear-specific recalibration demonstrated by Keetels et al. (2015) to an extreme, and suggests that perceivers do not even need to be aware of a difference in sounds between the two ears to show differential learning in the two ears. Such an extreme form of ear-specific recalibration supports the claim that at least some forms of recalibration are targeting very low-level aspects of sound rather than phonological coding. This issue is taken up in Sec. 4.

The experiment above suggests that participants can experience location-specific recalibration without being aware of the location. However, participants' unawareness of which ear heard the clear versus the ambiguous sound was not tested in experiment one. To ensure that participants genuinely could not distinguish the sounds presented to their left and right ears during the exposure phase of experiment one, a second experiment tested participants' ability to report which ear heard the clear token, as well as confirming that the fused stimuli were perceived as the same category as the clear token (as assumed in experiment one). Establishing this not only shows that location-specific learning does not require conscious awareness of location, it also rules out any possibility of strategic responding in experiment one.

3.1.1 Participants

There were 12 female participants. All were Arabic speakers. The average age was 20.33 years (sd 1.88). All participants were students at Qatar University and were either paid or given course credit for their participation.

3.1.2 Procedure

The same staircase procedure as in experiment one was used to determine the ambiguous sounds, and so to create the same participant-specific dichotic stimuli as in experiment one. After this, participants completed 28 blocks of categorizing the fused dichotic stimuli. On half of the blocks, participants categorized the stimuli as either /ibih/ or /idih/; on the other half they indicated whether the left or right ear heard the clear token of the dichotic stimuli. Blocks were presented in random order. In each block participants categorized 12 tokens: the three ambiguous sounds (the 48%, 50%, and 52% points on the continuum) presented twice each to each ear in random order.

Participants perceived 79.3% of the fused sounds as /idih/ when the clear channel of the audio was /idih/, and 11% as /idih/ when the clear channel was /ibih/. This difference was significant [t = 10.984, p < 0.001]. Participants were unable to distinguish which ear heard the clear token; accuracy was in fact slightly (and non-significantly) below chance (48.8% accurate) [t = −0.995, p = 0.341]. These results are shown in Fig. 4.

Fig. 4. Experiment two results: The graph on the left shows % of targets categorized as /idih/. The graph on the right shows % correct on categorizing which ear heard the clear sound. Standard error bars are shown. These error bars were calculated using the Cousineau (2005) and Morey (2008) methods to adjust for repeated-measures design.

Experiment two serves as a control, demonstrating that participants perceived the dichotic stimuli in line with the clear sound and that participants were unable to distinguish which ear heard the clear versus ambiguous tokens making up the fused dichotic sounds.

These experiments show that even when participants are unable to distinguish the sounds presented to their left and right ears, they can still learn separate patterns of categorization in each ear. This constitutes a novel form of recalibration in which the disambiguating information used to train categorization of the ambiguous sound in one ear is an unambiguous sound in the other ear. Location-tagging (at least at the conscious level) cannot explain these results as participants could not determine which ear heard the clear/ambiguous sound.

Reinisch and Mitterer (2016) argue for a dichotomy between recalibration studies using lexically-induced recalibration, which tend to show more generalization of recalibration to novel stimuli, and visually-induced recalibration where recalibration appears to be more tied to specific stimuli [using a novel design, Reinisch and Mitterer (2016) were, however, able to demonstrate some generalization for visually-induced recalibration]. The design of the current study is very much like that of a typical visual recalibration study, in which participants hear multiple repetitions of the target sounds in the same context. The results of the current study are also in line with visual-recalibration studies in that learning was to a very specific context (in this experiment, location).

These experiments represent a surprisingly extreme form of ear-specific learning, since the participants were not able to consciously distinguish which ear heard which sound. This would support the claim (at least for experiment setups like this one) that recalibration represents learning about low-level, possibly non-linguistic, aspects of sound. This is because location is unlikely to be used in speech processing and is likely to be abstracted away at linguistic levels of speech perception (Keetels et al., 2016b).

This informs a central issue in recalibration and indeed in speech perception—what are the units of speech processing/learning? It is assumed that in speech perception the acoustic signal is converted into abstract sublexical units and that these units are what are involved in recalibration (Mitterer and Reinisch, 2017). If correct, this means that recalibration studies can be used to probe the units of perception involved in speech perception in general. There is a debate about exactly what these abstract units of perception are: are they phonemes, allophones, articulatory features, or acoustic features? This issue has been examined by testing which aspects of the speech signal are generalized to novel stimuli in recalibration. While it has been shown that recalibration generalizes to the same sounds in different syllable positions (Jesse and McQueen, 2011), suggesting that recalibration involves a degree of linguistic abstraction across syllable position variants, other studies have found that recalibration is very context-sensitive, with recalibration failing to generalize to articulatorily different allophones of the same phoneme (Mitterer et al., 2013) or even to the same phoneme in very slightly different contexts (Reinisch et al., 2014). Thus, recalibration appears to be fairly closely tied to the specific stimuli involved (Reinisch and Mitterer, 2016), and this suggests that the units that are being learned about are not very abstract, as greater abstraction should lead to differences between variants being increasingly irrelevant.

Indeed, the units learned about in recalibration may not be specific to speech perception, but merely a part of general sound processing. That is, in experimental designs such as this (which would include most visually-induced examples of recalibration), the participant may not be adjusting units within their speech perception system, but rather performing associative learning in which low-level aspects of the acoustic signal (such as, for example, pitch or intensity profile) are associated with a phonetic category. If this is the case it would mean that the value of recalibration studies, at least of this kind, for probing the structure of speech perception is very limited. Future research is planned which will explore the level of phonetic similarity between exposure and test sounds needed to induce interaural recalibration, and this may help narrow down the level of processing involved.

An obvious issue that remains to be determined is exactly where this recalibration is occurring. A highly speculative possibility is that the learning is occurring independently in the left and right hemispheres of the brain. This possibility is suggested by the dichotic setup of this experiment in the exposure phase—dichotic presentation tends to inhibit the ipsilateral auditory pathway (Brancucci et al., 2004) and thus the hemispheres work with information coming primarily from the contralateral ear (with a small amount of information being shared across the corpus callosum). Thus, in this dichotic setup, the processing of low-level acoustic information would possibly be done independently in each hemisphere. Recalibration by each brain hemisphere separately would seem very surprising given that the right hemisphere is generally believed to have no capacity for phonological segmental processing (Gazzaniga, 2000; Lindell, 2006). It is possible to reconcile this difficulty if we accept that in recalibration experiments such as this, what is being learned is not phonological information but simply an association between a particular sound (or aspect of a sound) and a phonetic category label, so no strictly phonological processing is needed. Under this interpretation, the right-hemisphere would not be remapping spectral information to phonetic categories, but rather would be doing a simple form of conditioning, in which it learns to associate low-level (e.g., prosodic) information with a phonetic category. An alternative explanation is that the left hemisphere is performing the necessary learning (the hemisphere that is generally believed to do all phonological processing (Lindell, 2006)). If it is the left hemisphere that is learning different categorization patterns for each ear, then this experiment represents a surprisingly extreme form of location-specific learning, since the participants were not able to consciously distinguish which ear heard which sound. However, perhaps location-tagging does not require conscious awareness of the differences between the sounds presented to the left and right ears.

A final possibility is that information from both ears is being integrated in the left-hemisphere (with left-ear information being routed through the corpus callosum) and the different levels of recalibration are due to the competing forces of recalibration and adaptation; with ear-specific peripheral adaptation (Sawusch, 1977) canceling out the effects of recalibration in the ear that was exposed to the clear sound during exposure, while the contralateral ear experiences no peripheral adaptation and so recalibration dominates.

In summary, the experiments reported here are a proof of effect of a new inducer of recalibration and also provide support for the claim that the information being recalibrated in this type of experiment is very low-level.

Participants were run by research assistants Jawaher Alkahlout, Maryam Aref, and Rofida Ibrahim. This research was funded by Qatar University (Grant No. QUUG-CAS-ELL-17-18-7).

¹The head in this and Fig. 2 is from "Emoji One" with a Creative Commons 4.0 license, and the headphones are from "KDE open software" with a General Public license.

1. Bates, D., Mächler, M., Bolker, B., and Walker, S. (2015). "Fitting linear mixed-effects models using lme4," J. Stat. Software 67(1), 1–48.
2. Bertelson, P., Vroomen, J., and de Gelder, B. (2003). "Visual recalibration of auditory speech identification: A McGurk aftereffect," Psychol. Sci. 14(6), 592–597.
3. Bowers, J. S., Kazanina, N., and Andermane, N. (2016). "Spoken word identification involves accessing position invariant phoneme representations," J. Memory Lang. 87, 71–83.
4. Brancucci, A., Babiloni, C., Babiloni, F., Galderisi, S., Mucci, A., Tecchio, F., Zappasodi, F., Pizzella, V., Romani, G. L., and Rossini, P. M. (2004). "Inhibition of auditory cortical responses to ipsilateral stimuli during dichotic listening: Evidence from magnetoencephalography," Euro. J. Neurosci. 19(8), 2329–2336.
5. Cornsweet, T. N. (1962). "The staircase-method in psychophysics," Am. J. Psychol. 75(3), 485–491.
6. Cousineau, D. (2005). "Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson's method," Tutorials Quan. Methods Psychol. 1(1), 42–45.
7. Eimas, P. D., and Corbit, J. D. (1973). "Selective adaptation of linguistic feature detectors," Cognitive Psychol. 4(1), 99–109.
8. Gazzaniga, M. S. (2000). "Cerebral specialization and interhemispheric communication: Does the corpus callosum enable the human condition?," Brain 123(7), 1293–1326.
9. Jesse, A., and McQueen, J. M. (2011). "Positional effects in the lexical retuning of speech perception," Psychonomic Bull. Rev. 18(5), 943–950.
10. Kawahara, H., Masuda-Katsuse, I., and de Cheveigne, A. (1999). "Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: Possible role of a repetitive structure in sounds," Speech Commun. 27, 187–207.
11. Keetels, M., Pecoraro, M., and Vroomen, J. (2015). "Recalibration of auditory phonemes by lipread speech is ear-specific," Cognition 141, 121–126.
12. Keetels, M., Schakel, L., Bonte, M., and Vroomen, J. (2016a). "Phonetic recalibration of speech by text," Attn., Percept., Psychophys. 78(3), 938–945.
13. Keetels, M., Stekelenburg, J. J., and Vroomen, J. (2016b). "A spatial gradient in phonetic recalibration by lipread speech," J. Phonetics 56, 124–130.
14. Lawrence, M. A. (2013). ez: Easy analysis and visualization of factorial experiments, http://CRAN.R-project.org/package=ez, R package version 4.4 (Last viewed 10/30/2016).
15. Lindell, A. K. (2006). "In your right mind: Right hemisphere contributions to language processing and production," Neuropsychol. Rev. 16(3), 131–148.
16. Mitterer, H., and Reinisch, E. (2013). "No delays in application of perceptual learning in speech recognition: Evidence from eye tracking," J. Memory Lang. 69(4), 527–545.
17. Mitterer, H., and Reinisch, E. (2017). "Surface forms trump underlying representations in functional generalisations in speech perception: The case of German devoiced stops," Lang., Cognit. Neurosci. 32(9), 1133–1147.
18. Mitterer, H., Scharenborg, O., and McQueen, J. M. (2013). "Phonological abstraction without phonemes in speech perception," Cognition 129(2), 356–361.
19. Morey, R. D. (2008). "Confidence intervals from normalized data: A correction to Cousineau (2005)," Tutorials Quan. Methods Psychol. 4(2), 61–64.
20. Norris, D., McQueen, J. M., and Cutler, A. (2003). "Perceptual learning in speech," Cognitive Psychol. 47(2), 204–238.
21. Peirce, J. W. (2007). "PsychoPy—Psychophysics software in Python," J. Neurosci. Methods 162(1-2), 8–13.
22. R Core Team (2014). R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria), http://www.R-project.org/ (Last viewed 4/23/2018).
23. Reinisch, E., and Mitterer, H. (2016). "Exposure modality, input variability and the categories of perceptual recalibration," J. Phonetics 55, 96–108.
24. Reinisch, E., Wozny, D. R., Mitterer, H., and Holt, L. L. (2014). "Phonetic category recalibration: What are the categories?," J. Phonetics 45, 91–105.
25. Sawusch, J. R. (1977). "Peripheral and central processes in selective adaptation of place of articulation in stop consonants," J. Acoust. Soc. Am. 62(3), 738–750.
26. Scott, M. (2016). "Speech imagery recalibrates speech-perception boundaries," Attn., Percept. Psychophys. 78(5), 1496–1511.