Age-related changes in auditory processing may reduce physiological coding of acoustic cues, contributing to older adults' difficulty perceiving speech in background noise. This study investigated whether older adults differed from young adults in patterns of acoustic cue weighting for categorizing vowels in quiet and in noise. All participants relied primarily on spectral quality to categorize /ɛ/ and /æ/ sounds under both listening conditions. However, relative to young adults, older adults exhibited greater reliance on duration and less reliance on spectral quality. These results suggest that aging alters patterns of perceptual cue weights that may influence speech recognition abilities.

Following a conversation in background noise can be a challenging task even for young, normal-hearing individuals. Perception of the acoustic cues that contrast speech sounds is vital for their accurate identification (Stevens, 1980), but background noise often masks these cues. Target speech and competing sounds often overlap in their spectral energy and interfere with one another in the auditory periphery. Similarly, noise can fill in the silent gaps within speech sounds that are important for their recognition. Research has shown that broadband noise interferes with certain aspects of speech sound acoustics, reducing identification of consonants (Miller and Nicely, 1955; Wang and Bilger, 1973; Woods et al., 2010) and vowels (Parikh and Loizou, 2005; Swanepoel et al., 2012) and producing confusions with similar phonemes.

Perceiving speech in noise is particularly difficult for older adults. This population experiences age-related hearing loss, neural demyelination, and auditory neuropathy that decrease coding fidelity of auditory signals (Parthasarathy and Kujawa, 2018; Tremblay et al., 2002) and contribute to challenges hearing speech in noise (e.g., Anderson et al., 2011). Age-related changes in the auditory system affect acoustic cue perception; many prior studies have found that older adults require larger acoustic cue differences relative to young adults to differentiate between sounds in quiet (e.g., Fitzgibbons and Gordon-Salant, 1995; Vongpaisal and Pichora-Fuller, 2007). Reductions in auditory coding fidelity with age, and the resulting challenges with acoustic cue perception, are likely exacerbated when coding multiple simultaneous signals, such as listening to target speech in background noise. As acoustic-phonetic cues are the building blocks of speech sounds, age-related changes in perception of those cues in noisy environments may play a role in older adults' challenges perceiving speech in noise.

Perceptual cue weighting tasks are a method to investigate functional use of acoustic cues for speech categorization. In these tasks, multiple cues that are used to identify a speech contrast are manipulated to determine an individual's relative weighting of those cues during categorization of that contrast. Perceptual cue weighting paradigms provide information about basic sensory perception (DiNino et al., 2020; Winn et al., 2012), are sensitive to listening condition (Winn et al., 2013), and are less likely than traditional speech-in-noise perception tasks to be affected by age-related declines in cognitive factors such as working memory (Vermeire et al., 2019).

The results of prior perceptual cue weighting tasks in older adults indicate that perceptual reliance on individual acoustic cues changes with age (Toscano and Lansing, 2019) and with age-related hearing loss (Scharenborg et al., 2015; Souza et al., 2015; Souza et al., 2018). These findings suggest that the aging process alters the way in which individuals use certain acoustic cues to identify speech sounds under quiet listening conditions. Still, it is not yet known how older adults' perceptual weighting of acoustic-phonetic cues may differ from that of young adults when noise is added. Older adults may be unable to effectively perceive the acoustic cues that have been shown to be important for young adults' speech categorization, particularly in noise. Impaired use of acoustic cues may be the link between reduced physiological processing and challenges perceiving speech in background noise.

The current study utilized a perceptual cue weighting paradigm with an /ɛ/-/æ/ contrast, in which spectral content is the primary cue for categorization by young adults in quiet and duration is a secondary cue (Hillenbrand et al., 2000). This contrast was chosen because age-related declines in periodicity coding alter neural coding of spectral cues. In addition, this contrast enabled investigation of the effects of masking noise on trading relations between primary and secondary cues for vowel identification. Young and older adults categorized “SET” and “SAT” tokens during an online listening task. The effects of masking noise and participant age group on perceptual cue weights for /ɛ/-/æ/ categorization were examined.

Fifty-four young adults aged 18–30 years (mean age = 25.5 years) and 50 older adults aged 55+ years (mean age = 62.0 years) participated in this study. An online experiment was used to obtain a participant sample that better represented the broader population than would a participant group from the local area. The perceptual cue weighting paradigm was created in the Gorilla Experiment Builder (Gorilla, 2024; Anwyl-Irvine et al., 2020) and hosted on Prolific (2024). The study was only visible to individuals with Prolific accounts who were of the appropriate ages, self-reported no hearing loss, were located in the United States, and whose native language was English. Participants additionally checked boxes on the online consent form to confirm that they had no hearing loss and that they were native speakers of American English (no other languages spoken before age 2 years). All participants gave informed consent by checking boxes to indicate that they read the consent form and agreed to participate in the study. Participants were compensated after completion of the study. Study procedures were approved by the Carnegie Mellon University Institutional Review Board.

Stimuli consisted of “SET” and “SAT” words originally used by Liu and Holt (2015). The sounds were naturally spoken by a female talker with slightly elongated, but equivalent, vowel durations. These recordings served as end point stimuli for the “long” duration end of the cue continuum. The steady-state portions of the vowels were removed at the zero waveform crossings and the first four formant trajectories were extracted using Praat (Boersma and Weenink, 2022). Formant trajectory values at seven equal steps between /ɛ/ and /æ/ across all frequencies were calculated in R (R Core Team, 2021) and then used in Praat to generate stimuli on a seven-step continuum of vowel spectra. This dimension will be referred to as “spectral quality” because formant frequencies across the entire spectrum were gradually shifted from /ɛ/ to /æ/. Figure 1 shows spectrograms and formant values of the end point and midpoint stimuli.
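As a rough illustration, the seven-step spectral continuum can be sketched as linear interpolation of each formant between the /ɛ/ and /æ/ endpoints. The endpoint formant values below are placeholders for illustration only (the measured values appear in Fig. 1), and the function name is hypothetical, not taken from the study's R or Praat scripts.

```python
# Sketch of the seven-step spectral interpolation described above.
# Endpoint formant values are placeholders, NOT the study's measured values.

def formant_continuum(f_set, f_sat, n_steps=7):
    """Linearly interpolate each formant (F1-F4) between /ɛ/ and /æ/ endpoints."""
    steps = []
    for i in range(n_steps):
        frac = i / (n_steps - 1)  # 0.0 at the /ɛ/ endpoint, 1.0 at /æ/
        steps.append([round(a + frac * (b - a)) for a, b in zip(f_set, f_sat)])
    return steps

# Hypothetical F1-F4 endpoints in Hz, for illustration only
continuum = formant_continuum([600, 1900, 2800, 4000], [750, 1750, 2700, 3900])
print(continuum[0])  # step 1: the /ɛ/ endpoint
print(continuum[3])  # step 4: the continuum midpoint
print(continuum[6])  # step 7: the /æ/ endpoint
```

Each inner list gives the four formant targets for one continuum step; step 4 falls exactly halfway between the two endpoint vowels.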

Fig. 1.

Spectrograms of the vowel portion for the spectral quality cue end point and midpoint stimuli. Stimuli shown are for the “long” duration stimuli, duration step 7. In each spectrogram, frequency in hertz is on the y axis, time in seconds is on the x axis, and darker colors indicate greater energy. The first four formant values (F1–F4) for each stimulus were measured at the midpoint of the vowel portion and are listed below the respective spectrogram.


Each stimulus was then reduced in seven steps of vowel duration from 475 to 175 ms in steps of 50 ms to generate 49 vowel sounds varying on continua of both spectral quality and vowel duration. The /s/ and /t/ segments from the original stimulus recordings were extracted and concatenated with the vowel sounds in phase alignment at the zero-crossing to recreate “SET” and “SAT” tokens.
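The two-dimensional stimulus set described above can be sketched as a simple grid: seven spectral-quality steps crossed with seven vowel durations from 475 down to 175 ms in 50-ms decrements.

```python
# Sketch of the 7 x 7 stimulus grid described in the text.

durations_ms = [475 - 50 * step for step in range(7)]  # 475, 425, ..., 175 ms
spectral_steps = range(1, 8)                           # continuum steps 1-7

# One (spectral step, duration) pair per stimulus token
stimuli = [(s, d) for s in spectral_steps for d in durations_ms]
print(len(stimuli))  # 49 unique tokens, as in the experiment
```

Crossing the two continua orthogonally is what allows the later analysis to estimate a separate perceptual weight for each cue.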

A publicly available Praat script (Winn, 2024) was used to generate broadband speech-shaped noise to provide steady-state energetic masking of the stimuli. The long-term average spectrum of the noise matched that of a talk radio segment rather than the stimuli themselves for greater ecological validity. A random sample of broadband noise was combined with each speech token in MATLAB (MathWorks, Inc., 2021) to create stimuli for the noise condition. The noise level was set relative to the root-mean-square value of each token to yield a 0 dB signal-to-noise ratio (SNR), an SNR that is challenging but realistic (e.g., Wu et al., 2018).
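A 0 dB SNR means the noise is scaled so that its root-mean-square (RMS) level equals the RMS of the speech token before the two are combined. A minimal sketch of that scaling step (function and signal names are illustrative, not from the study's MATLAB code):

```python
import math
import random

# Sketch of 0 dB SNR mixing: scale noise to the speech token's RMS, then add.
# Signals here are synthetic stand-ins for the actual speech and noise.

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def mix_at_0db(speech, noise):
    """Scale noise so its RMS matches the speech RMS (0 dB SNR), then sum."""
    scale = rms(speech) / rms(noise)
    return [s + scale * n for s, n in zip(speech, noise)]

random.seed(1)  # reproducible illustration
speech = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
noise = [random.uniform(-1, 1) for _ in range(8000)]
mixed = mix_at_0db(speech, noise)
```

For a positive SNR of k dB, the scale factor would be divided by 10**(k / 20), attenuating the noise relative to the speech.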

Participants completed the experiment online using their personal computers and were asked to wear headphones and perform the task in a quiet listening environment. They first completed a headphone screening to ensure that they met these requirements. This headphone screener relies on the Huggins pitch phenomenon: phase-shifted white noise is presented in one ear but not the other, giving the illusion of a faint tone when the inputs to the two ears are combined, as occurs when wearing headphones. The illusion does not occur when listeners hear the sounds over computer speakers or when too much background noise is present for the listener to perceive the individual sounds (Milne et al., 2021). Listeners who failed the headphone screening were given the opportunity to try again. Listeners who failed a second time did not continue with the study.

Following completion of the headphone check, participants performed the /ɛ/-/æ/ categorization task. On each trial, one stimulus was played and participants clicked boxes on the computer screen labelled “SET” or “SAT” to indicate what they heard. There was no time limit for responding, and the task did not advance to the next trial until the participant clicked on either “SET” or “SAT.” Participants performed the task in noise first, followed by the task in quiet, to avoid any potential learning effects of hearing the clear sounds before the sounds in competing noise. The noise and quiet conditions were each divided into six blocks, with each block consisting of all 49 stimuli presented in random order. Participants were encouraged to take breaks between blocks.

Prior to beginning the test blocks, participants set their computer volume to a comfortable listening level, while random stimuli from the task were played to ensure audibility of the sounds. They then performed a practice block in which each end point stimulus was presented five times in quiet so that they were familiarized with the task procedure. Data from the practice block were not included in the analyses.

Data were analyzed using R (R Core Team, 2021). Logistic generalized linear mixed-effects regression models (GLMMs) were fit using the lme4 package (Bates et al., 2015). Response (“SET” or “SAT,” coded as 0 and 1) was the dependent variable, and age group (young or older adult, default = young adult) and condition (quiet or noise, default = quiet) were independent variables. Two models were run to test the separate hypotheses that age group and noise would affect perceptual cue weights of (1) duration and (2) spectral quality. The correlated relationship between perception of these cues also necessitated separating the analyses: if perceptual weight on one cue decreases between quiet and noise, then perceptual weight on the other cue will increase between quiet and noise. The centered continuum steps of vowel duration and spectral quality were respectively included as independent variables in each model. Each model also contained interaction terms between independent variables as well as random slope and intercept effects of cue weights per participant to account for the repeated measure of participants performing the same task under two listening conditions.
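Schematically, each of the two models just described corresponds to a logistic mixed-effects regression of the following form (the coefficient and variable names here are illustrative, not the study's code):

```latex
\operatorname{logit} P(\text{SAT}_{ij}) = \beta_0
  + \beta_1 \mathrm{Cue}_{ij}
  + \beta_2 \mathrm{Cond}_{ij}
  + \beta_3 \mathrm{Group}_{j}
  + \beta_4 (\mathrm{Cue} \times \mathrm{Cond})_{ij}
  + \beta_5 (\mathrm{Cue} \times \mathrm{Group})_{ij}
  + \beta_6 (\mathrm{Cond} \times \mathrm{Group})_{ij}
  + \beta_7 (\mathrm{Cue} \times \mathrm{Cond} \times \mathrm{Group})_{ij}
  + u_{0j} + u_{1j} \mathrm{Cue}_{ij}
```

where Cue is the centered continuum step of the manipulated cue (duration in one model, spectral quality in the other) on trial i for participant j, and u0 and u1 are the by-participant random intercept and cue slope.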

3.1.1 Model intercept effects

Results of the GLMM analyses indicated significant intercept effects for duration cue step (β = 0.14, z = 4.52, P < 0.001), spectral quality cue step (β = 1.30, z = 18.71, P < 0.001), and listening condition (duration model, β = 0.10, z = 4.18, P < 0.001; spectral quality model, β = 0.16, z = 4.92, P < 0.001). These results indicate that participants' tendency to respond “SET” or “SAT” was significantly influenced by the duration and spectral quality steps of each presented stimulus, as well as by whether the stimuli were played in quiet or in noise. No significant intercept effect of group was found, nor an interaction between group and listening condition for either cue, demonstrating that young and older listeners did not differ in their overall tendency to respond “SET” or “SAT” and that listening condition did not differentially affect response bias between young and older adults.

3.1.2 Spectral quality and duration cue slope effects

Figures 2 and 3 show the psychometric functions for both age groups for spectral quality (Fig. 2) and duration (Fig. 3), with cue continuum step on the x axes and vowel perception on the y axes. Both groups of participants relied primarily on spectral quality for vowel categorization in quiet and in noise, indicated by steeper psychometric functions for spectral quality (Fig. 2) than for duration (Fig. 3). However, the GLMMs revealed significant interactions between cue slopes and condition, indicating that masking noise significantly affected perception of both cues. Relative to the quiet condition, spectral quality cue weight slopes became shallower (β = −0.23, z = −10.99, P < 0.001, indicating less reliance on the cue [see right panel of Fig. 2]) and duration cue weight slopes became steeper (β = 0.13, z = 10.70, P < 0.001, indicating higher reliance on the cue [see right panel of Fig. 3]) when noise was added.
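The link between the fitted slope coefficients and the psychometric functions can be illustrated with the logistic curve itself: a larger slope yields a steeper category boundary, i.e., greater perceptual weight on that cue. The slope values below reuse the reported duration (0.14) and spectral quality (1.30) coefficients purely as illustrative magnitudes.

```python
import math

# Illustration: how a logistic slope coefficient maps onto psychometric
# steepness. Intercept and centering choices here are illustrative.

def p_sat(step, slope, intercept=0.0, center=4):
    """Logistic probability of a 'SAT' response at a 1-7 continuum step."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * (step - center))))

steep = [p_sat(s, slope=1.30) for s in range(1, 8)]    # spectral-quality-like
shallow = [p_sat(s, slope=0.14) for s in range(1, 8)]  # duration-like

# The steep function traverses nearly the full 0-1 range across the
# continuum, while the shallow one stays near 0.5: little weight on that cue.
print(round(steep[0], 3), round(steep[-1], 3))
print(round(shallow[0], 3), round(shallow[-1], 3))
```

A shallower fitted slope in the noise condition therefore corresponds directly to the flatter spectral-quality functions in the right panel of Fig. 2.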

Fig. 2.

Psychometric functions of spectral quality perceptual cue weights for each age group in quiet (left panel) and in noise (right panel). Proportions of /æ/ responses (y axis) are shown as a function of the spectral quality cue continuum step (x axis). Steeper psychometric functions indicate greater perceptual weighting of the cue. Data from young adults are represented by light blue triangles, and data from older adults are represented by dark blue circles. Error bars represent ±1 standard error of the mean.

Fig. 3.

Psychometric functions of duration perceptual cue weights for each age group in quiet (left panel) and in noise (right panel). Proportions of /æ/ responses (y axis) are shown as a function of the duration cue continuum step (x axis). Steeper psychometric functions indicate greater perceptual weighting of the cue. Data from young adults are represented by light blue triangles, and data from older adults are represented by dark blue circles. Error bars represent ±1 standard error of the mean.


A significant interaction was found between age group and duration cue slopes (β = 0.18, z = 3.9, P < 0.001), and the interaction between age group and spectral quality cue slope trended toward significance (β = −0.17, z = −1.70, P = 0.09). Psychometric functions show larger differences between age groups for duration (Fig. 3) than for spectral quality (Fig. 2), consistent with the GLMM results. These findings indicate that older adults exhibited greater reliance on duration, and slightly less reliance on spectral quality, relative to young adults to categorize /ɛ/ and /æ/ in both quiet and in noise.

The GLMMs also revealed significant three-way interactions between cue slopes, age group, and listening condition (spectral quality, β = 0.08, z = 2.88, P = 0.004; duration, β = −0.04, z = −2.23, P = 0.03), indicating that the effect of listening condition on cue slopes depended on the age group. A greater change in cue slopes between the quiet and noise conditions was observed for young adults than for older adults. Figure 2 illustrates that the addition of masking noise reduced the steepness of young adults' spectral quality cue slopes to a greater degree than those of older adults. Conversely, Fig. 3 illustrates that the addition of noise increased the steepness of young adults' duration cue slopes more than those of older adults. Young adults thus increased reliance on spectral quality, and decreased reliance on duration, between the quiet and noise conditions to a greater extent than did older adults. This finding suggests that young adults were better able to alter reliance on both cues in response to the listening situation.

In this study, young and older adults performed a perceptual cue weighting task in which spectral quality and duration were orthogonally manipulated to assess participants' reliance on each cue to categorize /ɛ/ and /æ/ sounds in quiet and with noise added. The analyses indicated that participants relied primarily on spectral quality to categorize /ɛ/ and /æ/ under quiet listening conditions, consistent with prior studies in young adults that utilized these stimuli in quiet (Liu and Holt, 2015; Wu and Holt, 2022) and a previous investigation of /i/-/ɪ/ categorization in English-speaking young adults in which vowel duration and spectral quality were also orthogonally manipulated (Kondaurova and Francis, 2008). The current study extended these findings to older adults, who also utilized spectral quality as the primary dimension for categorizing /ɛ/ and /æ/ sounds.

The addition of masking noise in the current study degraded listeners' perception of spectral quality and increased perceptual reliance on duration. However, unlike methods such as vocoding, in which spectral information is particularly degraded (e.g., Winn and Litovsky, 2015; Wu and Holt, 2022), the addition of broadband noise did not produce a complete switch in reliance from the formant cue to the duration cue for categorizing the vowel sounds. This study utilized an SNR of 0 dB, a difficult level that is nonetheless representative of many real-world listening scenarios (Wu et al., 2018). These results thus suggest that background noise in everyday listening situations may decrease one's ability to use spectral cues for speech categorization, but those cues remain usable for both young and older adults, even in challenging listening scenarios.

Relative to young adults, older adults exhibited lower perceptual weight on spectral quality (see Fig. 2), particularly for stimuli most like “SAT,” and significantly greater perceptual weight on duration (see Fig. 3) in quiet and in noise, indicated by the interactions between age group and cue slope in the regression models. This finding is in line with the prediction that age-related changes in physiological processing impair auditory periodicity coding. Similarly, Dorman et al. (1985) found that older listeners with and without hearing loss experienced difficulty identifying sounds contrasted by formant onset frequency relative to young adults. They observed no differences between age groups in identifying stimuli in which frication noise duration or formant transition duration varied. In addition, compared to young adults, Shen et al. (2016) observed poorer tracking of dynamic pitch in vowels in older adults and Souza et al. (2015) found that older listeners with hearing loss relied more on temporal cues than on static spectral cues. Age-related changes in the auditory system thus seem to particularly decrease perception of spectral cues for speech categorization. In addition, the use of a cue-trading paradigm in the current study enabled observation of older adults' ability to place greater reliance on an alternative cue, duration, when perception of the spectral cue may be decreased.

Although it is well-known that older adults exhibit auditory processing deficits in the temporal domain, neural timing also encodes spectral information (e.g., Anderson , 2011). Fine spectral cues such as formant frequency differences between vowels require more precise temporal coding than do cues such as duration differences between vowels. Therefore, while temporal cues are indeed affected by neural encoding deficits, age-related loss of temporal coding fidelity likely affects dynamic spectral cue perception to a greater extent than cues involving duration. The results of the current study add support to this theory.

The findings from this research are somewhat in contrast with those from Toscano and Lansing (2019), who found greater perceptual weight with age for a spectral cue, fundamental frequency (F0), to categorize /b/-/p/ contrasts. However, the “older” group in that study primarily consisted of adults 40–50 years old. The current study tested adults aged 55 years and older, with an average age of 62 years. It is possible that the aging process does not impact spectral cue perception until after the age of 50 years. As the current study tested only young (aged 18–30 years) and older (aged 55+ years) adults, testing middle-aged participants in future work would provide further insight into the trajectory of changes in acoustic-phonetic cue perception with age.

Finally, although participants in the current study reported no hearing loss, it is likely that the older listeners had at least mild loss. A limitation of online studies is that participants' hearing thresholds cannot be verified and/or compared to performance on the task. It is therefore unknown to what extent hearing loss, neural changes, or other physiological factors drove the results of this study. Future research should examine the separate and interactive influences of neural aging and hearing loss on the ability to use acoustic-phonetic cues in noise. More work is needed to investigate age-related changes in processes that support accurate speech perception, particularly in challenging listening environments.

This work was funded by an Emerging Research Grant from the Hearing Health Foundation. The author would like to thank Barbara Shinn-Cunningham and Lori Holt for their guidance, support, and use of lab resources during this study.

The author has no conflicts to disclose.

This research was approved by the Carnegie Mellon University Institutional Review Board. Informed consent was obtained for all participants prior to their involvement in the study.

The data from this study are openly available in the Dryad repository at https://doi.org/10.5061/dryad.brv15dvh0.

1. Anderson, S., Parbery-Clark, A., Yi, H.-G., and Kraus, N. (2011). “A neural basis of speech-in-noise perception in older adults,” Ear Hear. 32(6), 750–757.
2. Anwyl-Irvine, A. L., Massonnié, J., Flitton, A., Kirkham, N., and Evershed, J. K. (2020). “Gorilla in our midst: An online behavioral experiment builder,” Behav. Res. 52(1), 388–407.
3. Bates, D., Mächler, M., Bolker, B., and Walker, S. (2015). “Fitting linear mixed-effects models using lme4,” J. Stat. Softw. 67, 1–48.
4. Boersma, P., and Weenink, D. (2022). “Praat: Doing phonetics by computer (version 6.2.05) [computer program],” http://www.praat.org/ (Last viewed January 5, 2022).
5. DiNino, M., Arenberg, J. G., Duchen, A. L. R., and Winn, M. B. (2020). “Effects of age and cochlear implantation on spectrally cued speech categorization,” J. Speech Lang. Hear. Res. 63(7), 2425–2440.
6. Dorman, M. F., Marton, K., Hannley, M. T., and Lindholm, J. M. (1985). “Phonetic identification by elderly normal and hearing-impaired listeners,” J. Acoust. Soc. Am. 77(2), 664–670.
7. Fitzgibbons, P. J., and Gordon-Salant, S. (1995). “Age effects on duration discrimination with simple and complex stimuli,” J. Acoust. Soc. Am. 98(6), 3140–3145.
8. Gorilla (2024). “Gorilla Experiment Builder [computer program],” www.gorilla.sc (Last viewed February 2024).
9. Hillenbrand, J. M., Clark, M. J., and Houde, R. A. (2000). “Some effects of duration on vowel recognition,” J. Acoust. Soc. Am. 108(6), 3013–3022.
10. Kondaurova, M. V., and Francis, A. L. (2008). “The relationship between native allophonic experience with vowel duration and perception of the English tense/lax vowel contrast by Spanish and Russian listeners,” J. Acoust. Soc. Am. 124(6), 3959–3971.
11. Liu, R., and Holt, L. L. (2015). “Dimension-based statistical learning of vowels,” J. Exp. Psychol. Hum. Percept. Perform. 41(6), 1783–1798.
12. MathWorks, Inc. (2021). “MATLAB version: 9.11.0 (R2021b) [computer program],” https://www.mathworks.com (Last viewed June 2022).
13. Miller, G. A., and Nicely, P. E. (1955). “An analysis of perceptual confusions among some English consonants,” J. Acoust. Soc. Am. 27(2), 338–352.
14. Milne, A. E., Bianco, R., Poole, K. C., Zhao, S., Oxenham, A. J., Billig, A. J., and Chait, M. (2021). “An online headphone screening test based on dichotic pitch,” Behav. Res. 53(4), 1551–1562.
15. Parikh, G., and Loizou, P. C. (2005). “The influence of noise on vowel and consonant cues,” J. Acoust. Soc. Am. 118(6), 3874–3888.
16. Parthasarathy, A., and Kujawa, S. G. (2018). “Synaptopathy in the aging cochlea: Characterizing early-neural deficits in auditory temporal envelope processing,” J. Neurosci. 38(32), 7108–7119.
17. Prolific (2024). “Prolific,” www.prolific.com (Last viewed September 2022).
18. R Core Team (2021). “R: A language and environment for statistical computing [computer program],” https://www.R-project.org/ (Last viewed February 2024).
19. Scharenborg, O., Weber, A., and Janse, E. (2015). “Age and hearing loss and the use of acoustic cues in fricative categorization,” J. Acoust. Soc. Am. 138(3), 1408–1417.
20. Shen, J., Wright, R., and Souza, P. E. (2016). “On older listeners' ability to perceive dynamic pitch,” J. Speech Lang. Hear. Res. 59(3), 572–582.
21. Souza, P. E., Wright, R. A., Blackburn, M. C., Tatman, R., and Gallun, F. J. (2015). “Individual sensitivity to spectral and temporal cues in listeners with hearing impairment,” J. Speech Lang. Hear. Res. 58(2), 520–534.
22. Souza, P., Wright, R., Gallun, F., and Reinhart, P. (2018). “Reliability and repeatability of the speech cue profile,” J. Speech Lang. Hear. Res. 61(8), 2126–2137.
23. Stevens, K. N. (1980). “Acoustic correlates of some phonetic categories,” J. Acoust. Soc. Am. 68(3), 836–842.
24. Swanepoel, R., Oosthuizen, D. J. J., and Hanekom, J. J. (2012). “The relative importance of spectral cues for vowel recognition in severe noise,” J. Acoust. Soc. Am. 132(4), 2652–2662.
25. Toscano, J. C., and Lansing, C. R. (2019). “Age-related changes in temporal and spectral cue weights in speech,” Lang. Speech 62(1), 61–79.
26. Tremblay, K. L., Piskosz, M., and Souza, P. (2002). “Aging alters the neural representation of speech cues,” Neuroreport 13(15), 1865–1870.
27. Vermeire, K., Knoop, A., De Sloovere, M., Bosch, P., and van den Noort, M. (2019). “Relationship between working memory and speech-in-noise recognition in young and older adult listeners with age-appropriate hearing,” J. Speech Lang. Hear. Res. 62(9), 3545–3553.
28. Vongpaisal, T., and Pichora-Fuller, M. K. (2007). “Effect of age on F0 difference limen and concurrent vowel identification,” J. Speech Lang. Hear. Res. 50(5), 1139–1156.
29. Wang, M. D., and Bilger, R. C. (1973). “Consonant confusions in noise: A study of perceptual features,” J. Acoust. Soc. Am. 54(5), 1248–1266.
30. Winn, M. (2024). “Praat scripts [computer program],” http://www.mattwinn.com/praat.html#makeSSN (Last viewed June 2022).
31. Winn, M. B., Chatterjee, M., and Idsardi, W. J. (2012). “The use of acoustic cues for phonetic identification: Effects of spectral degradation and electric hearing,” J. Acoust. Soc. Am. 131(2), 1465–1479.
32. Winn, M. B., Chatterjee, M., and Idsardi, W. J. (2013). “Roles of voice onset time and F0 in stop consonant voicing perception: Effects of masking noise and low-pass filtering,” J. Speech Lang. Hear. Res. 56(4), 1097–1107.
33. Winn, M. B., and Litovsky, R. Y. (2015). “Using speech sounds to test functional spectral resolution in listeners with cochlear implants,” J. Acoust. Soc. Am. 137(3), 1430–1442.
34. Woods, D. L., Yund, E. W., Herron, T. J., and Ua Cruadhlaoich, M. A. I. (2010). “Consonant identification in consonant-vowel-consonant syllables in speech-spectrum noise,” J. Acoust. Soc. Am. 127(3), 1609–1623.
35. Wu, Y. C., and Holt, L. L. (2022). “Phonetic category activation predicts the direction and magnitude of perceptual adaptation to accented speech,” J. Exp. Psychol. Hum. Percept. Perform. 48(9), 913–925.
36. Wu, Y.-H., Stangl, E., Chipara, O., Hasan, S. S., Welhaven, A., and Oleson, J. (2018). “Characteristics of real-world signal-to-noise ratios and speech listening situations of older adults with mild-to-moderate hearing loss,” Ear Hear. 39(2), 293–304.