Children with sensorineural hearing loss show considerable variability in spoken language outcomes. The present study tested whether specific deficits in supra-threshold auditory perception might contribute to this variability. In a previous study by Halliday, Rosen, Tuomainen, and Calcus [(2019). J. Acoust. Soc. Am. 146, 4299], children with mild-to-moderate sensorineural hearing loss (MMHL) were shown to perform more poorly than those with normal hearing (NH) on measures designed to assess sensitivity to the temporal fine structure (TFS; the rapid oscillations in the amplitude of narrowband signals over short time intervals). However, they performed within normal limits on measures assessing sensitivity to the envelope (E; the slow fluctuations in the overall amplitude). Here, individual differences in unaided sensitivity to the TFS accounted for significant variance in the spoken language abilities of children with MMHL after controlling for nonverbal intelligence quotient, family history of language difficulties, and hearing loss severity. Aided sensitivity to the TFS and E cues was equally important for children with MMHL, whereas for children with NH, E cues were more important. These findings suggest that deficits in TFS perception may contribute to the variability in spoken language outcomes in children with sensorineural hearing loss.

Auditory perception plays a fundamental role in language development. The acoustic components of speech are known to convey important linguistic information. Like any complex auditory signal, speech is decomposed by the auditory system into an array of overlapping frequency bands. The resulting narrowband signals are decomposed further into at least two temporal fluctuation rates (Poeppel et al., 2008; Rosen, 1992). The envelope (E) comprises the slow oscillations (2–50 Hz) in the overall amplitude of a narrowband auditory signal and is evident in the acoustic properties of intensity, amplitude modulation (AM), and the rise (onset) and fall (offset) times of sounds (Rosen, 1992). In contrast, temporal fine structure (TFS) comprises the rapid oscillations (0.6–10 kHz) in the amplitude of a narrowband signal over short time intervals (<1 s) and carries information about the frequency content of a sound, including the formant spectra of speech (Rosen, 1992; Smith et al., 2002). For those with normal hearing (NH), the E has been argued to play a crucial role in the comprehension of speech in quiet (Drullman, 1995; Shannon et al., 1995; Smith et al., 2002; Xu et al., 2017; Zeng et al., 2004). In turn, sensitivity to E cues has been proposed to contribute to language development in children with NH (Goswami, 2019). Indeed, such is the importance of E cues that children with severe-to-profound sensorineural hearing loss who wear cochlear implants—which provide poor access to TFS cues—can still acquire oral language (Tomblin et al., 1999). However, for children with mild-to-moderate sensorineural hearing loss (MMHL), who typically wear hearing aids rather than cochlear implants, the perception of the acoustic cues of speech is also likely to be degraded, albeit to a lesser extent. The current study asked whether the auditory perception of TFS and E cues was associated with language development in children with MMHL compared to those with NH.
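To make the E/TFS distinction concrete: for a narrowband signal, both quantities can be extracted from the analytic signal (Hilbert transform), with the E given by its magnitude and the TFS by the cosine of its instantaneous phase (cf. Smith et al., 2002). The following sketch is purely illustrative and is not code from any of the studies cited here; the function name and example parameters are assumptions.

```r
# Illustrative decomposition of a narrowband signal into E and TFS using an
# FFT-based Hilbert transform (base R only).
decompose_band <- function(x) {
  n <- length(x)
  X <- fft(x)
  h <- rep(0, n)                        # analytic-signal frequency weights
  if (n %% 2 == 0) {
    h[1] <- 1; h[n / 2 + 1] <- 1; h[2:(n / 2)] <- 2
  } else {
    h[1] <- 1; h[2:((n + 1) / 2)] <- 2
  }
  z <- fft(X * h, inverse = TRUE) / n   # analytic signal
  list(envelope = Mod(z),               # E: slow overall amplitude fluctuations
       tfs      = cos(Arg(z)))          # TFS: rapid oscillations near the carrier
}

# Example: a 1-kHz tone amplitude modulated at 4 Hz, sampled at 16 kHz.
# `envelope` recovers the 4-Hz modulator; `tfs` recovers the 1-kHz carrier.
fs <- 16000
t  <- seq(0, 0.5 - 1 / fs, by = 1 / fs)
x  <- (1 + 0.5 * sin(2 * pi * 4 * t)) * sin(2 * pi * 1000 * t)
parts <- decompose_band(x)
```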

The role of E cues in the acquisition of phonological representations and in learning to read has long been argued for children with NH (e.g., Goswami et al., 2002). For example, children with dyslexia have been shown to perform more poorly than normal readers on tasks assessing sensitivity to the sound E, including AM detection, rise time (RT) discrimination, and rhythm perception (for a review, see Goswami, 2011), as well as on neural correlates of E encoding (De Vos et al., 2017; Hämäläinen et al., 2008; Power et al., 2016). Moreover, individual differences in sensitivity to these acoustic features have been shown to be predictive of concurrent and longitudinal reading abilities (Goswami et al., 2002; cf. Rosen, 2003). However, more recently it has been argued that sensitivity to E cues may also play a role in the acquisition of spoken language (for a review, see Goswami, 2019). Consistent with this view, deficits in sensitivity to RT, sound duration, and rhythm perception have been found in children with specific language impairment (SLI; now known as developmental language disorder or DLD; Corriveau et al., 2007; Corriveau and Goswami, 2009). Recently, sensitivity to RT at 7 and 10 months of age was shown to predict vocabulary, but not phonological processing skills, at 3 years of age (Kalashnikova et al., 2019).

In contrast to the literature on children with NH, the role of auditory perception in the language development of children with sensorineural hearing loss has received somewhat less attention. This is perhaps surprising because we have known for many years that sensorineural hearing loss is associated with abnormal performance on psychoacoustic tasks (for a review, see Moore, 2007). For example, individuals with sensorineural hearing loss have been shown to exhibit poorer frequency selectivity (i.e., a reduced ability to resolve the spectral components of a complex sound) as a result of a broadening of auditory filters (Peters and Moore, 1992; Rance et al., 2004). In addition, sensorineural hearing loss has been linked to reduced sensitivity to TFS, evidenced by the poorer performance of both adults and children with MMHL on tasks such as frequency discrimination (FD), fundamental frequency (F0) discrimination, and frequency modulation detection (Halliday and Bishop, 2006; Henry and Heinz, 2013; Moore, 2014; Rance et al., 2004). However, sensorineural hearing loss appears to leave E processing relatively intact as demonstrated by the normal or enhanced performance of adults and children with MMHL on tasks such as AM detection (e.g., Rance et al., 2004; Wallaert et al., 2017).

There is increasing evidence that these changes in auditory perception may contribute to the poorer speech discrimination abilities of individuals with sensorineural hearing loss. In hearing-aid users, positive correlations between frequency selectivity and speech perception have been found (Davies-Venn et al., 2015; Dreschler and Plomp, 1985; Henry et al., 2005), although not consistently (Hopkins and Moore, 2011; Rance et al., 2004; Summers et al., 2013; Ter Keurs et al., 1993). More consistent have been reports of correlations between measures of TFS perception and speech perception in quiet and noise, which have been demonstrated in both children and adults with MMHL (adults, Hopkins and Moore, 2011; Johannesen et al., 2016; Mehraei et al., 2014; Papakonstantinou et al., 2011; Summers et al., 2013; children, Rance et al., 2004). Importantly, impaired sensitivity to TFS has been argued to play a critical role in the speech-in-noise perception difficulties of adults with sensorineural hearing loss by interfering with their ability to “listen in the dips” of the background noise (Hopkins et al., 2008; Lorenzi et al., 2006; Swaminathan and Heinz, 2012). Given the role of speech perception in the acquisition of spoken language (Tsao et al., 2004), individual variability in TFS processing may contribute to the variable language outcomes seen in children with sensorineural hearing loss.

Several large-scale studies have assessed the speech and language development of children with sensorineural hearing loss in recent years. A consistent finding from these studies is that of a large degree of variability in the spoken language outcomes of these children. A number of demographic factors have been identified that appear to contribute to this variability, including severity of hearing loss (Ching et al., 2013; Tomblin et al., 2015; Wake et al., 2004; Wake et al., 2005), age of detection and/or age of first fitting of cochlear implants or hearing aids (Ching et al., 2013; Wake et al., 2005; Yoshinaga-Itano et al., 1998), and hearing device audibility, quality, and use (McCreery et al., 2015; Tomblin et al., 2014; Tomblin et al., 2015). In addition, some studies have suggested a possible role for genetic predisposition to co-occurring language disorders in those children with sensorineural hearing loss who show particular weaknesses in language acquisition (Gilbertson and Kamhi, 1995; Halliday et al., 2017a). However, a key finding is that these factors do not appear to fully account for the extent of variability in language outcomes experienced by this group. To our knowledge, the possibility that specific deficits in auditory perception might contribute to this variability has not yet been examined.

A series of previous studies assessed the auditory perceptual and language abilities of 46 8- to 16-year-old children with MMHL and 44 age-matched NH controls (Halliday et al., 2019; Halliday et al., 2017a, 2017b). Auditory psychophysical thresholds were obtained on a battery of tasks, including those designed to assess sensitivity to the TFS (FD and detection of modulations in the F0) and the E (RT discrimination and AM detection) of simple and complex sounds. To assess the mediating role of amplification on auditory perception, children with MMHL were tested both while they were wearing their hearing aids and while they were not. For both hearing-aid conditions, the MMHL group performed more poorly than the NH controls on the two psychophysical tasks designed to measure sensitivity to TFS (Halliday et al., 2019). However, performance on the two measures of E processing did not differ between groups. The same children with MMHL also showed poorer and more variable performance than the controls on a variety of measures of spoken language but not reading (Halliday et al., 2017a). However, to date, the relationship between sensitivity to E and TFS cues and individual differences in language abilities, both spoken and reading, has not been assessed.

The current study examined whether performance on these behavioural measures of TFS and E processing was linked to the spoken or written language abilities of these same groups of children with MMHL and NH controls. Based on previous findings for children (Rance et al., 2004) and adults (e.g., Lorenzi et al., 2006) with sensorineural hearing loss, it was predicted that unaided sensitivity to TFS would correlate with and significantly account for a proportion of the variance in the spoken language (but not reading) abilities of children with MMHL. Based on evidence from children with NH (Goswami, 2019), it was hypothesised that sensitivity to E cues would play a greater role in the spoken language and reading abilities of the controls. Finally, this study also examined whether aided sensitivity to TFS or E cues would be more important in accounting for individual differences in the language abilities of children with MMHL. Because hearing aids increase the audibility of important components of speech, one possibility was that the relationship between aided thresholds and language would be similar to that of the NH controls. Alternatively, because the MMHL group still showed deficits in sensitivity to TFS cues even when they were wearing their hearing aids (Halliday et al., 2019), it was possible that the relationship between aided thresholds and language would be the same as for the unaided condition.

Audiometric, psychophysical, and psychometric testing took place at University College London (UCL) over two sessions, each lasting around 90 minutes, and separated by at least a week. Each child was tested by a single experimenter. Audiometric and psychophysical testing was conducted in a sound-attenuated booth, whereas psychometric testing was conducted in an adjacent quiet room. The parents/guardians of all participants completed an in-house questionnaire concerning their child's demographic, developmental, and medical background. The project received ethical approval from the UCL Research Ethics Committee and informed written consent was obtained from the parent/guardian of each child.

Forty-six children with MMHL [27 boys, 19 girls; mild-to-moderate hearing loss (MM) group] and 44 age-matched NH controls (19 boys, 25 girls; NH group) participated in this study (see Table I). Children were aged 8–16 years old at the time of testing, and children in the NH group were age-matched to within 6 months to at least one child in the MM group. All children were from monolingual English-speaking backgrounds and all communicated solely via the oral/aural modality (i.e., they did not use sign language, as is typical for children with MMHL). A nonverbal intelligence quotient (IQ) was measured for all participants using the block design subtest of the Wechsler Abbreviated Scale of Intelligence (WASI; Wechsler, 1999). All had nonverbal IQ scores within the normal range (IQ-equivalent standard scores of ≥85, equivalent to T-scores ≥40), although scores were significantly higher for the NH group than for the MM group (see Table I). Maternal education level (age in years at which the mother left full-time education) was used as a proxy for socioeconomic status and did not differ significantly between groups. Finally, family history of language difficulties was scored dichotomously as either having or not having a first-degree relative (parent or sibling) with a childhood history of spoken or written language difficulties unrelated to a hearing loss. Family history of language difficulties did not differ between groups.

TABLE I.

Mean (SD) and ratio participant characteristics for the NH and MM groups and between-groups comparisons. NH, normally hearing group; MM, mild-to-moderate hearing loss group; OR, odds ratio; CI, confidence interval; BEPTA, better-ear pure-tone average. Parametric tests were two-sample Welch t-tests; non-parametric tests were Fisher's exact test. Significant parameters (p < 0.05) appear in bold.

Variable                                   NH (N = 44)    MM (N = 46)     t        df   p        r/OR   95% CI
Age (yr)                                   11.54 (2.05)   11.44 (2.16)    0.23     88   0.821    0.02   −0.78, 0.98
BEPTA thresholds [dB hearing level (HL)]   7.33 (3.95)    43.37 (12.01)   −19.28   55   <0.001   0.93   −39.79, −32.30
Maternal education (yr)                    20.47 (2.89)   19.33 (2.65)    1.88     83   0.063    0.20   −0.06, 2.33
Nonverbal IQ (T-score)                     60.64 (8.48)   55.63 (8.71)    2.76     88   0.007    0.28   1.40, 8.61
Family history (0:1)                       35:9           35:11           —        —    0.802    1.22   0.45, 3.32

Unaided pure-tone air-conduction thresholds were obtained for both ears for all children using an Interacoustics AC33 audiometer with Telephonics TDH-39 headphones (see Fig. 1). For the MM group, 19 children were identified as having mild hearing loss and 27 were identified as having moderate hearing loss, where mild was defined as a better-ear pure-tone-average (BEPTA) audiometric threshold of 21–40 dB hearing level (HL) across octave frequencies 0.25–4 kHz, and moderate was defined as a BEPTA threshold of 41–70 dB HL (British Society of Audiology, 2011). Children with NH had mean audiometric thresholds of ≤20 dB HL across the octave frequencies for both ears and thresholds of ≤25 dB HL at any particular frequency. For the MM group, age of detection of hearing loss ranged from 2 months old to 14 years old (median = 57 months old), although in all cases, the hearing loss was thought to be congenital and could not be attributed to a syndrome or neurological impairment (including auditory neuropathy spectrum disorder) or any known postnatal event (e.g., measles). Forty-three children in the MM group had been fitted with bilateral prescription hearing aids, although one child refused to wear them. The age of first hearing aid fitting ranged from 3 months old to 15 years old (median = 65 months old).

FIG. 1.

(Color online) Individual (thin blue lines) and mean (thick blue lines) air-conduction pure-tone audiometric thresholds for the MM group for the left and right ears. Mean thresholds for the NH group are also shown (thick grey line), along with the range for the NH group (shaded grey area).


Auditory processing was assessed using four psychophysical tasks. TFS is thought to carry information about both the frequency of sinusoidal stimuli and the F0 of complex stimuli for carriers below 4–5 kHz (Hopkins et al., 2008; Moore and Ernst, 2012). Therefore, sensitivity to the TFS was assessed using an FD task for a 1-kHz sinusoid and an F0 modulation detection task for a complex harmonic sound (Moore and Ernst, 2012; Moore and Gockel, 2011). In contrast, the E carries information about the slow fluctuations (between 2 and 50 Hz) in the amplitude of an auditory signal. Thus, sensitivity to E cues was assessed using an RT discrimination task for a 1-kHz sinusoid and a slow-rate (2-Hz) amplitude modulation detection (AMD) task for a complex harmonic sound.

1. Stimuli

For each task, a continuum of stimuli was created, ranging from a fixed, repeated standard sound to a maximum, variable, deviant sound. All stimuli were 500 ms in duration and root-mean-square (rms)-normalised for intensity. All were ramped on and off with a 15-ms linear ramp, apart from the RT task (see below).

For the FD task, the target sounds were generated with frequency differences spaced in the ratio of 1/2 downward from a starting point of 1.5 kHz. The detection of modulation in the F0 (F0 task) was assessed using a complex harmonic carrier generated by passing a waveform containing 50 equal-amplitude harmonics (at an F0 of 100 Hz) through three simple resonators. The resonators were centred at 500, 1500, and 2500 Hz, each with a 100-Hz bandwidth. The F0 was modulated at 4 Hz. For target stimuli, the depth of modulation varied from ±0.16 to ±16 Hz in logarithmic steps.

For the RT task, the on-ramp of the target sounds ranged logarithmically from 15 ms (the standard) to 435 ms (the maximal deviant) across 100 stimuli, whereas off-ramps were fixed at 50 ms. For the AMD task, the standard stimulus was unmodulated and identical to that used in the F0 task. Deviant stimuli for this task were amplitude modulated at a rate of 2 Hz with the modulation depth ranging from 80% to 5% across 100 stimuli in logarithmic steps.
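As an illustration of these parameters, the sketch below synthesises an AMD-task stimulus in R. It is a hypothetical reconstruction, not the study's stimulus-generation code: the parameter names are invented, and the resonator filtering applied to the complex carrier (described above for the F0 task) is omitted for brevity.

```r
# Illustrative AMD stimulus: a 500-ms harmonic complex (50 equal-amplitude
# harmonics, F0 = 100 Hz), sinusoidally amplitude modulated at 2 Hz,
# rms-normalised, with 15-ms linear onset/offset ramps. The resonator
# filtering stage used in the study is omitted here.
make_amd_stimulus <- function(depth = 0.8, f0 = 100, n_harm = 50,
                              am_rate = 2, dur = 0.5, fs = 44100,
                              ramp = 0.015) {
  t <- seq(0, dur - 1 / fs, by = 1 / fs)
  carrier <- rowSums(sapply(1:n_harm, function(k) sin(2 * pi * k * f0 * t)))
  x <- (1 + depth * sin(2 * pi * am_rate * t)) * carrier  # sinusoidal AM
  x <- x / sqrt(mean(x^2))                                # rms-normalise
  nr  <- round(ramp * fs)
  env <- c(seq(0, 1, length.out = nr),                    # linear on-ramp
           rep(1, length(t) - 2 * nr),
           seq(1, 0, length.out = nr))                    # linear off-ramp
  x * env
}

std <- make_amd_stimulus(depth = 0)     # unmodulated standard
dev <- make_amd_stimulus(depth = 0.05)  # shallowest (5%) deviant
```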

Stimuli were presented free-field in a sound-attenuating booth at a fixed level of 70 dB sound pressure level (SPL) via a single loudspeaker positioned facing the child, approximately 1 m from their head.

2. Psychophysical procedure

The auditory processing tasks were delivered in a computer-game format and responses were recorded via a touch screen. A three-interval, three-alternative forced-choice (3I-3AFC) procedure was used. On each trial, participants were presented with three sounds, each represented on the screen by a different cartoon character and separated by a silent 500-ms inter-stimulus interval. Two of the sounds were the same (standard) sound and one sound was a different (deviant) sound. Children were instructed to select the “odd-one-out” by pressing the character that “made the different sound.” For all tasks, an initial one-down, one-up rule was used to adapt the task difficulty until the first reversal. Subsequently, a three-down one-up procedure was used, targeting 79.4% correct on the psychometric function (Levitt, 1971). The step size decreased over the first three reversals and then remained constant.
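The sketch below illustrates this adaptive rule: one-down, one-up until the first reversal, then three-down, one-up, with the step size drawn from a decreasing schedule. It is a hypothetical reconstruction rather than the study's testing software; `present_trial`, the level variable (an index along the stimulus continuum), and the step-size handling are assumptions.

```r
# Illustrative adaptive track. `present_trial(level)` is a placeholder that
# must return TRUE for a correct 3I-3AFC response at the given level.
run_track <- function(present_trial, level, step_sizes,
                      max_trials = 50, max_reversals = 4) {
  n_correct <- 0; reversals <- 0; last_dir <- 0
  rev_levels <- numeric(0)
  for (trial in seq_len(max_trials)) {
    correct <- present_trial(level)
    rule <- if (reversals == 0) 1 else 3      # 1-down/1-up, then 3-down/1-up
    if (correct) {
      n_correct <- n_correct + 1
      dir <- if (n_correct == rule) -1 else 0 # harder after `rule` correct
      if (dir == -1) n_correct <- 0
    } else {
      dir <- 1                                # easier after any error
      n_correct <- 0
    }
    if (dir != 0) {
      if (last_dir != 0 && dir != last_dir) { # direction change = reversal
        reversals  <- reversals + 1
        rev_levels <- c(rev_levels, level)
        if (reversals >= max_reversals) break
      }
      step  <- step_sizes[min(reversals + 1, length(step_sizes))]
      level <- max(level + dir * step, 0)
      last_dir <- dir
    }
  }
  mean(tail(rev_levels, 4))  # threshold: mean level at the last four reversals
}

# Example with a dummy listener who responds correctly 79% of the time:
# thr <- run_track(function(lv) runif(1) < 0.79, level = 100, step_sizes = c(12, 4))
```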

For the FD task, the frequency difference between the standard and the deviant was initially 50% (i.e., 1 kHz vs 1.5 kHz). The initial step size was equivalent to a factor of 0.5, reduced to 1/2 after the first reversal. For the F0 task, the difference in modulation depth of the F0 between the standard and the deviant was initially ±16 Hz. The step size was initially 12 steps along the continuum, which reduced to 4 steps after the first reversal. For the RT task, the difference in RT between the standard and deviant was initially 420 ms. The initial step size was 12 steps along the continuum, reducing to 6 steps after the first reversal. Finally, for the AMD task, the initial difference in AM depth was 80%. The initial step size was 21 stimulus steps along the continuum, reducing to 7 stimulus steps after the first reversal.

For all tasks, tracks terminated after 50 trials or after four reversals had been achieved at the final step size (whichever came first). Children were required to repeat a run if their threshold was at the ceiling (0.3% of runs for the NH group, 2.1% for the MM group) or if they had achieved fewer than four reversals at the final step size (1.1% of runs for the NH group, 0.9% for the MM group). In these cases, the repeated run was used to estimate the threshold. Participants were given unlimited time to respond and visual feedback was provided after each response. Participants undertook a minimum of five practice trials for each task, in which they were asked to discriminate between the endpoints of each continuum (i.e., the easiest discrimination). Participants were required to get at least four of five practice trials correct before testing began, with a maximum of 15 practice trials per task.

Each child completed two runs per task, separated across two sessions. For the children with MMHL who wore hearing aids, one run was completed while they were wearing their hearing aids (aided condition) and another run was completed when they were not (unaided condition). Hearing aids were set to the children's usual settings for aided testing. The order of the tasks and conditions was counterbalanced between children.

3. Threshold calculations and auditory composite thresholds

For each task, thresholds were calculated as the mean value of the target stimulus at the last four reversals of each adaptive track, computed in logarithmic units and, thus, equivalent to the geometric mean. Psychophysical thresholds were log-transformed (base 10) to normalise the data. Normalised thresholds for children with MMHL were then age-transformed against the thresholds of the NH group to provide an age-standardised threshold [M = 0; standard deviation (SD) = 1]. Sensitivity to TFS and E was calculated separately for the MM and NH groups as the arithmetic mean of the age-standardised thresholds for the FD and F0 tasks (TFS composite) and for the RT and AMD tasks (E composite), respectively. Composite thresholds were calculated for both the aided and unaided conditions for children with MMHL who wore hearing aids (n = 42). For each composite threshold, a higher number corresponded to poorer performance.
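The exact form of the age transformation is not specified here, so the following sketch shows one plausible implementation: log-transformed thresholds are regressed on age within the NH group, and each child with MMHL is expressed as a z-scored deviation from that norm before tasks are averaged into composites. The data frames `nh` and `mm` (one row per child, with columns `age`, `fd`, `f0`, `rt`, and `amd`) are assumptions.

```r
# One plausible reading of "age-standardised": z-score each MM child's log
# threshold against an NH age norm (higher = poorer than NH peers of that age).
age_standardise <- function(task, nh, mm) {
  fit <- lm(log10(nh[[task]]) ~ age, data = nh)   # NH norm: log threshold vs age
  (log10(mm[[task]]) - predict(fit, newdata = mm)) / sd(residuals(fit))
}

mm$tfs_comp <- rowMeans(cbind(age_standardise("fd",  nh, mm),
                              age_standardise("f0",  nh, mm)))
mm$e_comp   <- rowMeans(cbind(age_standardise("rt",  nh, mm),
                              age_standardise("amd", nh, mm)),
                        na.rm = TRUE)  # falls back to AMD alone if RT is missing
```

The language composites described below are formed analogously, as means of test-wise z scores.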

Language abilities were assessed using a battery of seven standardised psychometric tests, the majority of which had been recently standardised using United Kingdom (UK) norms (the exception being repetition of nonsense words; see below). Children with MMHL who normally wore hearing aids did so during psychometric testing using their standard hearing aid settings. For all tests except repetition of nonsense words (see below), scores were converted to z scores (M = 0, SD = 1) based on the age-normed standardised scores of each individual test. Spoken language skills were assessed using receptive and expressive vocabulary tests, receptive and expressive grammar tests, as well as a test evaluating phonological processing and memory. Reading skills were assessed using word reading and pseudoword decoding tests.

1. Standardised language tests

Receptive vocabulary was assessed using the British Picture Vocabulary Scale (BPVS, third edition; Dunn and Dunn, 2009). For this test, children were presented with four pictures on each trial and required to select the one that best illustrated the meaning of a word said by the experimenter. Expressive vocabulary was assessed using the expressive vocabulary (for children aged 8–9 years old) and word definitions (for children aged ≥10 years old) subtests of the Clinical Evaluation of Language Fundamentals (CELF), 4th UK edition (Semel et al., 2006). For the expressive vocabulary subtest, children were shown a series of pictures and, for each one, were asked to say the word that best corresponded to the picture. For the word definitions subtest, the experimenter would say a word and then use that word in a sentence. Children were required to define each target word.

Receptive grammar was assessed using a computerized version of the Test for the Reception of Grammar (TROG; Bishop, 2003), which assesses the understanding of 20 different grammatical contrasts. On each trial, children were presented with four pictures and a sentence that was spoken by a female native Southern British English speaker via the speaker of a laptop. The task was to select, from the four pictures, the one that best depicted the spoken target sentence; the three foil pictures represented sentences that were altered in grammatical/lexical structure. Expressive grammar was assessed using the recalling sentences subtest of the CELF (Semel et al., 2006). For this test, sentences of increasing length and complexity were spoken by a different female native Southern British English speaker and presented via the laptop speaker. Children were asked to repeat back each sentence verbatim.

Phonological processing and memory were assessed using the repetition of nonsense words subtest from the neuropsychological assessment NEPSY (Korkman et al., 1998). The 13 original nonword items from this subtest were re-recorded by a female native speaker of Southern British English and presented via a computer at a comfortable listening level. Nonwords ranged from two to five syllables in length, and the child's task was to repeat each nonword out loud. Responses were recorded and marked offline. Because the norms for the NEPSY only go up to 12 years, 11 months of age, z scores were calculated for this test from the age-normed scores for the NH group.

Reading abilities were assessed using the word reading and pseudoword decoding subtests of the Wechsler Individual Achievement Test (WIAT, Wechsler, 2005). For both tests, children were presented with a series of written words or pseudowords and asked to read them out loud as accurately as possible in their own time.

2. Language composite scores

Scores on the individual spoken language and reading tests were combined to form two composite language measures. The spoken language composite was calculated as the mean of the z scores obtained on the five spoken language tests (receptive and expressive vocabulary, receptive and expressive grammar, and phonological processing and memory); the reading composite was calculated as the mean of the z scores obtained on the two reading tests. Each composite was, thus, equivalent to a mean age-standardised score for each child, expressed as a z score (M = 0; SD = 1).

It was not possible to obtain a pure-tone average threshold for one child in the NH group as a result of poor compliance with the test protocol. For this child, a screening procedure confirmed normal hearing, and the child's audiometric thresholds were not included in the study. One child with MMHL was unable to complete the auditory processing tasks in the unaided condition. Thresholds for this child were, therefore, included for the aided condition only. Thresholds on the RT task were not obtained for six children with MMHL in the unaided condition and one child in the aided condition due to failure to pass the practice trials and/or fewer than four reversals being achieved at the final step size. RT thresholds for these children were, for that reason, not included, and composite E thresholds were calculated from the AMD task only. Questionnaire data recording the age at which the mother left full-time education were missing for five participants (four MM, one NH). All missing data were examined and it was deemed unlikely that the data were missing at random. Therefore, missing data were not replaced.

Data were analysed using linear mixed models because of missing data in some conditions. Analyses were conducted using RStudio version 1.2.1578 (RStudio Team, 2019) and R version 3.6.1 (R Core Team, 2019), with the lme4 (Bates et al., 2012) and ggplot2 (Wickham et al., 2016) packages.
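For concreteness, the generic form of the models reported below can be written in lme4 as in the following sketch. The variable names and the long data layout (one row per child per composite) are assumptions rather than the authors' code; lmerTest is included because the fractional degrees of freedom reported below are consistent with its Satterthwaite approximation, although the text does not state how degrees of freedom were computed.

```r
library(lme4)
library(lmerTest)  # p-values and Satterthwaite degrees of freedom for lmer fits

# `aud` is assumed to hold one row per child per composite, with columns:
# threshold (age-standardised composite), processing ("TFS" or "E"),
# group ("MM" or "NH"), and a participant identifier id.
m_unaided <- lmer(threshold ~ processing * group + (1 | id), data = aud)
summary(m_unaided)  # fixed effects: estimates, t-values, and p-values
```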

Composite TFS and E thresholds for the NH and MM groups (unaided and aided conditions) are shown in Fig. 2. To assess whether the groups differed in their auditory processing thresholds, two linear mixed models were run: one comparing the unaided thresholds of the MM group with those of the NH group (unaided condition) and one comparing the aided thresholds of the MM group with the thresholds of the NH group (aided condition). For each condition, auditory processing (TFS vs E) and group (MM vs NH), along with their interaction, were included as fixed factors and participants were included as random effects. For the unaided condition, the effects of group and auditory processing were not significant [β = 0.29, t(125.60) = 1.27, p = 0.206; and β = 1.60e-15, t(87) = 0, p > 0.999, respectively]. However, there was a significant group × auditory processing interaction [β = 1.24, t(87) = 6.20, p < 0.001]. For the aided condition, while the effect of group was not significant [β = −0.28, t(124.61) = −1.25, p = 0.212], the effect of auditory processing was [β = 0.77, t(84) = 5.37, p < 0.001], as was the group × auditory processing interaction [β = −0.77, t(84) = −3.84, p < 0.001]. In both the unaided and aided conditions, independent-samples t-tests (Welch) confirmed that the interactions were due to the MM group obtaining higher (poorer) thresholds than the controls on the TFS composite {unaided, t(70.20) = −6.46, 95% confidence interval (CI) [−2.0,−1.1], p < 0.001, r = 0.61; aided, t(66.24) = −4.46, 95% CI [−1.52,−0.58], p < 0.001, r = 0.48} but not on the E composite {unaided, t(82.43) = −1.33, 95% CI [−0.73,0.14], p = 0.188, r = 0.14; aided, t(80.60) = −1.32, 95% CI [−0.70,0.14], p = 0.191, r = 0.15}.

FIG. 2.

(Color online) Performance on the TFS and E composite measures for the NH group in grey (TFS: M = 0, SD = 0.78; E: M = 0, SD = 0.89), MM aided group/condition in orange (TFS: M = 1.53, SD = 1.37; E: M = 0.29, SD = 1.16), and MM unaided group/condition in blue (TFS: M = 1.05, SD = 1.32; E: M = 0.28, SD = 1.05). Higher thresholds correspond to poorer performances. Boxplots represent the 25th, 50th, and 75th percentiles for each group/condition, whereas the violin plots illustrate the kernel probability density, i.e., the width of the violin area represents the proportion of the data located there.


To assess whether the performance of the children in the MM group differed between the unaided and aided conditions, a linear mixed effects model was run with auditory processing (TFS vs E) and condition (aided vs unaided), along with their interaction, as fixed factors and participants as random effects. The effect of auditory processing was significant [β = 0.77, t(124.35) = 4.04, p < 0.001], but the effect of condition was not [β = 0.02, t(126.45) = 0.11, p = 0.914], and the condition × auditory processing interaction just missed significance [β = 0.47, t(124.35) = 1.76, p = 0.081]. Post hoc exploration (paired-samples t-tests) of the marginally nonsignificant interaction indicated that, for children with MMHL who wore hearing aids, thresholds were lower (better) in the aided than in the unaided condition for TFS {t(40) = 2.92, 95% CI [0.16, 0.89], p = 0.006, r = 0.42} but not for E {t(40) = −0.03, 95% CI [−0.39,0.38], p = 0.977, r = 0.00}.

Composite spoken language and reading scores for the NH and MM groups are shown in Fig. 3. A linear mixed model with language modality (spoken vs reading) and group (NH vs MM) plus their interaction as fixed factors and participants as random effects revealed significant effects of both language modality and group [β = −0.24, t(88) = 2.81, p = 0.006, and β = −1.12, t(120.42) = −7.55, p < 0.001, respectively], as well as a significant modality × group interaction [β = 0.70, t(88) = 5.87, p < 0.001]. Welch two-sample t-tests showed that the MM group performed more poorly than the NH group on both the spoken language and reading measures {difference for spoken scores = 1.12, 95% CI [0.82,1.43], t(80) = 7.34, p < 0.001, r = 0.63; difference for reading scores = 0.42, 95% CI [0.14,0.71], t(87) = 2.96, p = 0.004, r = 0.30}. However, paired-sample t-tests showed that whereas the NH group exhibited significantly lower scores for reading than for spoken language {difference = 0.24, 95% CI [0.08,0.40], t(43)= 2.95, p = 0.005, r = 0.41}, the MM group showed the opposite pattern {difference = −0.46, 95% CI [−0.64, −0.29], t(45) = −5.31, p < 0.001, r = 0.62}.

FIG. 3.

(Color online) Performance on the spoken language and reading composite measures for the NH group in grey (spoken: M = 0.56, SD = 0.59; reading: M = 0.32, SD = 0.63) and MM group in orange (spoken: M = −0.56, SD = 0.85; reading: M = −0.1, SD = 0.72). Higher scores correspond to better performance. Boxplots represent the 25th, 50th, and 75th percentiles for each group/condition, whereas the violin plots illustrate the kernel probability density, i.e., the width of the violin area represents the proportion of the data located there. The circles indicate outliers that were ±1.5 times the inter-quartile range (difference between the 25th and 75th percentiles).


To explore the relationship between the auditory processing and language measures, two-tailed Pearson's correlations were conducted between the TFS and E composite thresholds and the spoken language and reading composite scores (see Fig. 4). Correlations were examined separately for the NH and MM groups and for the unaided and aided conditions for the MM group. Relationships with other known audiological (unaided BEPTA thresholds, a measure of severity of hearing loss), demographic (maternal education, a measure of socioeconomic status), and cognitive (nonverbal IQ) predictors of language were also examined. Significance levels were adjusted to control for multiple comparisons, with Bonferroni corrections applied at a family-wise level (i.e., for comparisons between auditory and language scores and between the other known predictors and language scores; for both, α = 0.004).
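In R, each such test might look as follows. This is a sketch with assumed variable names; the family size of 12 is an inference chosen to reproduce the quoted α (0.05/12 ≈ 0.004) and is not stated in the text.

```r
# Two-tailed Pearson correlation with a family-wise Bonferroni-corrected alpha.
alpha_fw <- 0.05 / 12   # assumed family size, giving the quoted alpha = 0.004

ct <- cor.test(mm$tfs_unaided, mm$spoken, method = "pearson")
ct$estimate             # r
ct$conf.int             # 95% CI
ct$p.value < alpha_fw   # significant after correction?
```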

FIG. 4.

(Color online) Correlograms representing the correlation coefficients between the auditory processing, language, BEPTA, demographic, and cognitive variables for the MMHL group (unaided and aided conditions) and the NH group. Positive correlations are displayed in blue and negative correlations are displayed in red. Color intensity and the size of the circle are proportional to the correlation coefficients. p values are shown as ***p < 0.001, **p < 0.004, *p < 0.05.


For the MM group, there was a significant correlation between unaided TFS composite thresholds and spoken language composite scores {r(45) = −0.46, 95% CI [−0.66,−0.19], p = 0.002}. Lower (better) unaided TFS thresholds were associated with higher (better) spoken language scores. In addition, there was a marginally significant correlation between aided E composite thresholds and spoken language scores {r(42) = −0.43, 95% CI [−0.65,−0.15], p = 0.004} with better E thresholds being associated with better spoken language. Finally, for the MM group, a higher nonverbal IQ was associated with higher spoken language and reading scores {r(46) = 0.54, 95% CI [0.29,0.72], p < 0.001 and r(46) = 0.54, 95% CI [0.29,0.72], p < 0.001, respectively}. None of the other correlations between the auditory processing and language composite scores or between the other known predictors and language scores reached significance for the MM group after correcting for multiple comparisons.

For the NH group, a slightly different pattern was observed. After controlling for multiple comparisons, both E and TFS composite thresholds were significantly correlated with spoken language composite scores {r(44) = −0.50, 95% CI [−0.69,−0.24], p < 0.001, and r(44) = −0.43, 95% CI [−0.65,−0.16], p = 0.003, respectively}. Lower (better) auditory processing thresholds were associated with higher (better) spoken language scores. In addition, higher maternal education was significantly associated with better spoken language scores {r(43) = 0.52, 95% CI [0.26,0.71], p < 0.001}. None of the other correlations between language (spoken or reading) and auditory processing or other known predictors reached significance for the NH group after controlling for multiple comparisons.

To assess whether sensitivity to TFS or E cues contributed to the variance in spoken language and/or reading abilities over and above other known predictors of language, a series of multilevel linear models was run for the MM group (unaided and aided conditions) and NH group separately. Four generic models were used. In model 1, BEPTA thresholds, nonverbal IQ, maternal education levels, and family history of language/reading difficulties were entered into the model as fixed effects with participants as random effects. In model 2, TFS composite thresholds were added to model 1 to investigate whether TFS processing made an independent contribution to the dependent variables. In model 3, E composite thresholds were added to model 1 to investigate whether E processing made an independent contribution to the dependent variables. Finally, in model 4, both TFS and E composite thresholds were added to model 1. Analysis of variance (ANOVA) was used to determine the best fitting model for each group (MM and NH), condition (unaided and aided), and dependent variable (spoken language and reading). For each analysis, see the supplementary material1 for Table IV, summarizing model comparisons, and Figs. 5–7, representing the effect of each independent variable on spoken language scores for the best models.
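A sketch of this four-model hierarchy in lme4 is shown below. The variable names are assumptions, as is the long data layout (individual test z scores nested within children), which is one way to make the by-participant random intercept identifiable; with a single composite score per child, these models would reduce to ordinary lm() fits.

```r
library(lme4)

# `spoken_long` is assumed to hold one row per child per spoken language test,
# with the test z score plus child-level predictors repeated across rows.
# ML fits (REML = FALSE) are required for likelihood-ratio tests of fixed effects.
m1 <- lmer(z ~ bepta + nviq + mat_edu + fam_hist + (1 | id),
           data = spoken_long, REML = FALSE)
m2 <- update(m1, . ~ . + tfs)       # model 1 + TFS composite
m3 <- update(m1, . ~ . + e)         # model 1 + E composite
m4 <- update(m1, . ~ . + tfs + e)   # model 1 + both composites

anova(m1, m2)                  # LRT: does TFS improve on the base predictors?
anova(m1, m3)                  # LRT: does E?
anova(m2, m4); anova(m3, m4)   # does adding the second cue help further?
```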

Table II shows the estimates of the best fitting models for each group and condition for the spoken language composite measure. For the MM group in the unaided condition, adding TFS composite thresholds (model 2) significantly improved model 1 [likelihood-ratio test (LRT) = 10.08, p = 0.002], whereas adding E composite thresholds failed to improve either model 1 (model 3; LRT = 3.67, p = 0.056) or model 2 (model 4; LRT = 0.001, p = 0.970). As shown in Table II, for the MM group for the unaided condition, a significant amount of the variance in spoken language scores was accounted for by individual variance in nonverbal IQ, family history of language difficulties, and unaided TFS composite thresholds but not by BEPTA thresholds, maternal education levels, or E thresholds.

TABLE II.

Best fitting multilevel linear models for spoken language composite scores for the MM group for the unaided and aided conditions and the NH group. Significant parameters (p < 0.05) appear in bold.

Model/predictors          Estimate   Standard error (SE)   df   t       p
MM group-unaided
  Intercept               −3.06      1.04                  35   −2.94   0.006
  BEPTA                    0.02      0.01                  35    1.54   0.132
  Maternal education       0.03      0.04                  35    0.69   0.494
  Nonverbal IQ             0.03      0.01                  35    2.66   0.012
  Family history          −0.56      0.25                  35   −2.27   0.030
  TFS unaided (model 2)   −0.28      0.09                  35   −3.12   0.004
MM group-aided
  Intercept               −3.05      1.16                  32   −2.65   0.013
  BEPTA                    0.01      0.01                  32    1.07   0.293
  Maternal education       0.00      0.05                  32    0.07   0.948
  Nonverbal IQ             0.04      0.01                  32    3.05   0.005
  Family history          −0.65      0.28                  32   −2.37   0.024
  TFS aided (model 2)a    −0.25      0.10                  32   −2.41   0.022
  E aided (model 3)a      −0.32      0.12                  32   −2.60   0.014
NH group
  Intercept               −1.06      0.68                  36   −1.55   0.130
  BEPTA                   −0.02      0.02                  36   −0.99   0.329
  Maternal education       0.08      0.02                  36    3.13   0.003
  Nonverbal IQ             0.00      0.01                  36    0.47   0.639
  Family history          −0.38      0.17                  36   −2.18   0.036
  E (model 3)             −0.25      0.08                  36   −3.01   0.005
a Models 2 and 3 both fit the data better than model 1 for the MM group in the aided condition but could not be distinguished from one another. For simplicity, we report the full model for model 2 (aided TFS) and the specific additional contribution made by aided E for model 3.

For the MM group for the aided condition, a slightly different pattern of results was observed for spoken language. Aided TFS thresholds (model 2) also significantly improved model 1 (LRT = 6.36, p = 0.012) but so did aided E thresholds (model 3, LRT = 7.27, p = 0.007). However, adding both aided TFS and aided E thresholds (model 4) did not significantly improve model 2 (LRT = 3.55, p = 0.059) or model 3 (LRT = 2.64, p = 0.104). For this condition, therefore, variance in spoken language scores was significantly and independently accounted for by nonverbal IQ, family history of language difficulties, and either aided TFS or aided E thresholds (but not both; see Table II).

For the NH group, the best fitting model for spoken language was model 3. Adding the TFS (model 2) did not improve the fit of model 1 (LRT = 2.77, p = 0.096), whereas adding E (model 3) did improve the fit of model 1 (LRT = 9.40, p = 0.002). Adding E to model 2 also significantly improved the fit (model 4; LRT = 7.11, p = 0.008), but adding TFS to model 3 did not improve the fit (LRT = 0.48, p = 0.487), suggesting that only E thresholds made a significant contribution to the model fit. The estimates of the final best model are shown in Table II and suggest that maternal education levels and E composite thresholds both made significant, independent contributions to the variability in spoken language scores for the NH group, whereas BEPTA thresholds, nonverbal IQ, and family history of language difficulties did not.

Finally, the estimates of the best fitting models for the reading composite measure are shown in Table III. For the MM group, adding TFS or E thresholds failed to improve model 1 for either the unaided or aided conditions. The same was true for the NH group. The final models indicated that nonverbal IQ and family history of language difficulties contributed significantly to reading scores for the MM group, whereas maternal education only contributed to reading scores in children with NH.

TABLE III.

Summary of model 1 for reading scores for the MM group for the unaided condition and the NH group. Significant parameters (p < 0.05) appear in bold.

Model/predictors       Estimate   SE     df   t       p
MM group-unaideda
  Intercept            −2.64      0.94   36   −2.82   0.008
  BEPTA                 0.00      0.01   36    0.21   0.833
  Maternal education    0.02      0.04   36    0.43   0.669
  Nonverbal IQ          0.04      0.01   36    3.61   0.001
  Family history       −0.59      0.23   36   −2.54   0.016
NH group
  Intercept            −0.69      0.94   37   −0.73   0.472
  BEPTA                −0.01      0.02   37   −0.45   0.659
  Maternal education    0.08      0.03   37    2.33   0.025
  Nonverbal IQ         −0.01      0.01   37   −0.68   0.503
  Family history       −0.11      0.24   37   −0.45   0.659
a The best fitting models for the MM group were similar for the unaided and aided conditions; therefore, only the final unaided model is shown here.

The primary goal of the present study was to examine whether sensitivity to the TFS or E of sounds was associated with language outcomes in children with sensorineural hearing loss. In addition, the study examined whether these relationships were the same for children with NH and children with hearing loss while they were wearing their hearing aids and while they were not. As sensorineural hearing loss is associated with reduced sensitivity to TFS but not E cues (Buss et al., 2004; Hopkins and Moore, 2011; Lorenzi et al., 2006), it was hypothesised that TFS but not E sensitivity would be associated with the spoken language (but less so reading) abilities of children with MMHL. For children with NH, it was hypothesised that sensitivity to E (but not TFS) cues would relate to both spoken language and reading abilities (Goswami, 2019; Kalashnikova et al., 2019).

Our first hypothesis was supported by data from the unaided condition, in which sensitivity to TFS and E cues was measured for children with MMHL while they were not wearing their hearing aids. It is important to note that unaided BEPTA thresholds were significantly correlated with TFS thresholds, suggesting that elevated TFS thresholds were associated with greater cochlear damage. However, the models showed that unaided TFS thresholds significantly contributed to the variance in spoken language (but not reading) scores for children with hearing loss even after BEPTA thresholds and other predictors of language had been controlled for. In contrast, unaided sensitivity to E cues did not improve the model fit for spoken language scores in this condition. Our findings therefore suggest that deficits in TFS processing may relate to poorer spoken language outcomes for children with MMHL over and above conventional measures such as unaided BEPTA thresholds. This is consistent with previous work in adults with hearing loss showing significant correlations between speech recognition scores and frequency modulation detection at 1000 Hz after audibility (BEPTA) was statistically controlled for (Buss et al., 2004).

The direction and nature of this relationship remains to be determined. One possibility is that the unaided TFS thresholds were reflective of the extent of cochlear damage experienced by the children with MMHL. However, it is also possible that these findings demonstrate a relationship between TFS perception and language development per se in children with sensorineural hearing loss. This relationship may be direct, with reduced sensitivity to TFS leading to poorer perception of both the F0 and formants of speech, with subsequent consequences for spoken language acquisition. Indeed, speech perception is a known predictor of spoken language development both in children with NH (Tsao et al., 2004; Ziegler et al., 2005) and those with hearing loss (Blamey et al., 2001; Davidson et al., 2011). Alternatively, the relationship may be more indirect via impaired speech in noise perception. To that end, previous research in adults has shown that sensorineural hearing loss-induced deficits in sensitivity to TFS cues may limit the ability to use periods of quiet (“dips”) in background noise for accurate speech perception (Ardoint and Lorenzi, 2010; Hopkins et al., 2008; Hopkins and Moore, 2010; Lorenzi et al., 2006; Summers et al., 2013). For children with hearing loss, it is plausible that this decreased ability to listen to speech in background noise plays a specific role in hindering the acquisition of spoken language. Consistent with this idea, speech perception in noise has been shown to be particularly problematic for children with sensorineural hearing loss (Goldsworthy and Markle, 2019) and associated with vocabulary development in this group (Klein et al., 2017; McCreery et al., 2019; Walker et al., 2019). Given that much spoken language learning occurs in suboptimal, noisy environments (Dockrell and Shield, 2006), it may be that deficits in TFS perception negatively impact this process for children with hearing loss by impairing their ability to perceive speech under such conditions.

The present analyses showed a slightly different pattern of results when children with MMHL wore their hearing aids for the auditory tasks. In this aided condition, either sensitivity to the TFS or sensitivity to the E—but not both—significantly improved the model for spoken language scores after controlling for the other predictors. A possible explanation for these findings is that our results may simply reflect an improvement in the audibility of stimuli in the aided condition compared to the unaided condition. Indeed, while hearing aids would not have provided additional TFS cues, the increased sensation level is likely to have contributed to the improvement in aided TFS thresholds relative to unaided TFS thresholds in the current study (see also Wier et al., 1977). Aided audibility has been shown to significantly contribute to the speech and language outcomes of children with sensorineural hearing loss over and above other known predictors for this group (McCreery et al., 2015; McCreery et al., 2019; Tomblin et al., 2015). For instance, a recent, large cohort study indicated that variability in spoken language abilities for 8-10-year-old children with mild-to-severe sensorineural hearing loss was moderated by an interaction between BEPTA thresholds and aided HLs (Tomblin et al., 2020). Moreover, higher daily use of hearing aids has been associated with better listening comprehension but not vocabulary, reading, or morphological awareness in children with mild hearing loss aged between 9 and 11 years (Walker et al., 2020). Aided audibility was not measured in the present study so its possible relations with language for children with hearing loss cannot be assessed here. However, a relationship between aided audibility and speech perception has not consistently been found in children with sensorineural hearing loss (Klein et al., 2017), raising the possibility that other factors may also play a role.

One such factor may be that specific aspects of aided auditory perception also impact upon the spoken language development of children with sensorineural hearing loss who wear hearing aids. In this respect, the wearing of hearing aids appeared to make the results of children with MMHL more similar to those of the NH controls. For children with NH, E composite thresholds significantly contributed to the variance in spoken language abilities, whereas TFS thresholds did not. In contrast, the pattern of results for children with MMHL in the aided condition resembled both that of the children with NH and their own pattern in the unaided condition. Thus, it is possible that where TFS sensitivity is normal (as for children with NH), sensitivity to E cues may be related to spoken language abilities by contributing to the syllabic and prosodic (stress) representation of the speech signal (see Kalashnikova et al., 2019). However, where TFS is degraded, as is the case for children with hearing loss, this may place an upper limit on the utility of E cues in contributing to spoken language outcomes. Nevertheless, E thresholds did contribute to the variance in spoken language outcomes in the aided condition for children with hearing loss, suggesting that these cues may still play a role when TFS cues are more audible. Alternatively, it may be that those children who showed greater deficits in unaided TFS perception were able to benefit more from the enhancement of E cues in the aided condition. Further research is needed to determine whether improvements in the aided perception of TFS and E cues contribute to the better language outcomes of children with hearing loss who wear hearing aids and whether this relationship is mediated by aided audibility (see Tomblin et al., 2014; Tomblin et al., 2015; Tomblin et al., 2020).

While auditory processing skills significantly improved the models for spoken language for the different groups and conditions, this was not the case for reading, contrary to our hypothesis for the NH group. Previous studies have reported a relationship between sensitivity to E cues and reading in children with NH, particularly for those with dyslexia (Goswami, 2019; Goswami et al., 2002). The current results, for children with NH who showed no reading difficulties, did not reveal such a relationship. It is possible that the two tests used to assess reading skills in this study were not sufficiently fine-grained to observe a link between auditory perception and reading in children with NH, or that such a relationship is stronger for children with dyslexia. Alternatively, it is possible that reading abilities are not directly related to the E and TFS tasks used here or that other mechanisms mediate this relationship (Rosen, 2003). Last, it may be that the children in the current study were too old for such a relationship to be observed: any such relationship might be expected to lessen as children get older and the reciprocal relationship between spoken language and reading acquisition takes hold (Ricketts et al., 2020). Whatever the reason, it is of interest that the children with MMHL in the current study showed both normal E processing and, generally, normal reading abilities. Therefore, it appears that for children with MMHL at least, sensitivity to TFS may better relate to spoken language development than it does to learning to read (see also Halliday and Bishop, 2005, for similar results regarding a lack of relationship between FD and reading for children with MMHL).

The current study had a number of limitations that should be considered. First, although the auditory tasks were designed to be predominantly reliant upon sensitivity to TFS and E cues (Halliday et al., 2019), it remains possible that other auditory processes were involved. For instance, for the TFS tasks, it is difficult to rule out the possible impact of reduced frequency selectivity due to broader auditory filters in the hearing loss group (Oxenham et al., 2009). It is therefore possible that the findings reflect an added effect of both TFS and frequency selectivity on language outcomes in children with sensorineural hearing loss. Second, owing to equipment failure, it was not possible to measure hearing aid fit or aided audibility for the children with MMHL. It is therefore possible that the hearing aids of the hearing loss group were not optimally fitted or were not functioning optimally on the day of testing, and so did not provide sufficient auditory input during the aided tasks. Hence, further research is needed to investigate the role of aided audibility on the abilities of children with sensorineural hearing loss who wear hearing aids to process the auditory temporal modulations of speech. Third, the present study included a single sample of children with MMHL. Future research is needed to replicate these findings. Finally, the current study employed a cross-sectional design, which limits the ability to infer causal relationships between auditory perception and language outcomes. Longitudinal designs are needed to investigate the causal direction of the relationship between auditory perception and language in children with sensorineural hearing loss.

Children with MMHL present with deficits in the processing of the fastest temporal modulations of sounds, the TFS, and show generally poorer language outcomes than their NH peers. The present study indicated that the auditory processing of temporal modulations may play a role in the spoken language development of children with MMHL as well as those with NH. We found that unaided sensitivity to the TFS of sounds contributed to variance in the spoken language abilities of children with MMHL, and that measures of TFS sensitivity were more strongly related to spoken language than was pure-tone audiometry in this group. When children with MMHL used their hearing aids for the auditory tasks, aided sensitivity to either the TFS or E of sounds (but not both) contributed to the spoken language variability of the same group of children. Finally, for children with NH, sensitivity to E cues, but not to the TFS, predicted spoken language abilities. We suggest that the poorer spoken language abilities of children with sensorineural hearing loss may be, in part, a consequence of their reduced sensitivity to TFS, which may lead to poorer speech perception, particularly in noise. In contrast, for children with NH, or for those with hearing loss who are wearing their hearing aids, sensitivity to E cues may play a more important role. Thus, children with sensorineural hearing loss who show greater deficits in TFS perception may be at greater risk of spoken language difficulties than those with better TFS perception. TFS sensitivity may therefore be a useful measure for investigating individual variability in spoken language outcomes in children with sensorineural hearing loss. Further research is needed to better understand the potential role of aided audibility in mediating this relationship.

The authors would like to thank Stuart Rosen for constructing the stimuli and assisting with extracting psychophysical thresholds, Steve Nevard for his assistance with setting up the laboratory facilities, Michael Coleman for the development of the psychophysical testing software, Páraic Scanlon and Outi Tuomainen for participant testing, and Axelle Calcus for help with Fig. 1. The authors are especially grateful to all the children who participated, along with their parents, as well as the Local Educational Authorities and schools who assisted with recruitment. This work was supported by an Economic and Social Research Council (ESRC) First Grants Award (Grant No. RES-061-25-0440) and a Medical Research Council (MRC) Senior Fellowship in Hearing Research (Grant No. MR/S002464/1) to L.F.H. and by the FP7 people programme (Marie Curie Actions) Grant No. FP7-607139 (improving Children's Auditory REhabilitation, iCARE). L.C. was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant No. 659204 and is now supported by the Agence Nationale de la Recherche (ANR), France, under Grant No. ANR-17-CE28-0008 DESIN. The data that support the findings of this study, as well as the R code used for the analyses, are available from L.F.H. via email (lorna.halliday@mrc-cbu.cam.ac.uk) upon reasonable request.

1. See supplementary material at https://www.scitation.org/doi/suppl/10.1121/10.0002669 for Table IV, summarizing model comparisons, and Figs. 5–7, representing the effect of each independent variable on spoken language scores for the best models.

1. Ardoint, M., and Lorenzi, C. (2010). “Effects of lowpass and highpass filtering on the intelligibility of speech based on temporal fine structure or envelope cues,” Hear. Res. 260, 89–95.
2. Bates, D., Maechler, M., Bolker, B., Walker, S., Christensen, R. H. B., Singmann, H., and Scheipl, F. (2012). “Package ‘lme4’,” CRAN. R Foundation for Statistical Computing, Vienna, Austria.
3. Bishop, D. V. (2003). “Test for reception of grammar: TROG-2 version 2,” Pearson Assessment.
4. Blamey, P. J., Sarant, J. Z., Paatsch, L. E., Barry, J. G., Bow, C. P., Wales, R. J., Wright, M., Psarros, C., Rattigan, K., and Tooher, R. (2001). “Relationships among speech perception, production, language, hearing loss, and age in children with impaired hearing,” J. Speech Lang. Hear. Res. 44(2), 264–285.
5. British Society of Audiology (2011). “Recommended procedure: Pure-tone air-conduction and bone-conduction threshold audiometry with and without masking,” Read. Br. Soc. Audiol.
6. Buss, E., Hall, J. W., III, and Grose, J. H. (2004). “Temporal fine-structure cues to speech and pure tone modulation in observers with sensorineural hearing loss,” Ear Hear. 25, 242–250.
7. Ching, T. Y., Leigh, G., and Dillon, H. (2013). “Introduction to the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study: Background, design, sample characteristics,” Int. J. Audiol. 52, S4–S9.
8. Corriveau, K., Pasquini, E., and Goswami, U. (2007). “Basic auditory processing skills and specific language impairment: A new look at an old hypothesis,” J. Speech Lang. Hear. Res. 50(3), 647–666.
9. Corriveau, K. H., and Goswami, U. (2009). “Rhythmic motor entrainment in children with speech and language impairments: Tapping to the beat,” Cortex 45, 119–130.
10. Davidson, L. S., Geers, A. E., Blamey, P. J., Tobey, E., and Brenner, C. (2011). “Factors contributing to speech perception scores in long-term pediatric CI users,” Ear Hear. 32, 19S–26S.
11. Davies-Venn, E., Nelson, P., and Souza, P. (2015). “Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing,” J. Acoust. Soc. Am. 138, 492–503.
12. De Vos, A., Vanvooren, S., Vanderauwera, J., Ghesquière, P., and Wouters, J. (2017). “Atypical neural synchronization to speech envelope modulations in dyslexia,” Brain Lang. 164, 106–117.
13. Dockrell, J. E., and Shield, B. M. (2006). “Acoustical barriers in classrooms: The impact of noise on performance in the classroom,” Br. Educ. Res. J. 32, 509–525.
14. Dreschler, W. A., and Plomp, R. (1985). “Relations between psychophysical data and speech perception for hearing-impaired subjects. II,” J. Acoust. Soc. Am. 78, 1261–1270.
15. Drullman, R. (1995). “Temporal envelope and fine structure cues for speech intelligibility,” J. Acoust. Soc. Am. 97, 585–592.
16. Dunn, L. M., Dunn, L. M., Styles, B., and Sewell, J. (2009). British Picture Vocabulary Scale–Third Edition (BPVS-III) (GL Assessment Limited, London).
17. Gilbertson, M., and Kamhi, A. G. (1995). “Novel word learning in children with hearing impairment,” J. Speech Lang. Hear. Res. 38, 630–642.
18. Goldsworthy, R. L., and Markle, K. L. (2019). “Pediatric hearing loss and speech recognition in quiet and in different types of background noise,” J. Speech Lang. Hear. Res. 62, 758–767.
19. Goswami, U. (2011). “A temporal sampling framework for developmental dyslexia,” Trends Cogn. Sci. 15, 3–10.
20. Goswami, U. (2019). “Speech rhythm and language acquisition: An amplitude modulation phase hierarchy perspective,” Ann. N. Y. Acad. Sci. 1453, 67–78.
21. Goswami, U., Thomson, J., Richardson, U., Stainthorp, R., Hughes, D., Rosen, S., and Scott, S. K. (2002). “Amplitude envelope onsets and developmental dyslexia: A new hypothesis,” Proc. Natl. Acad. Sci. U.S.A. 99, 10911–10916.
22. Halliday, L., Rosen, S., Tuomainen, O., and Calcus, A. (2019). “Impaired sensitivity to temporal fine structure but not the envelope for children with mild-to-moderate sensorineural hearing loss,” J. Acoust. Soc. Am. 146, 4299–4314.
23. Halliday, L. F., and Bishop, D. V. M. (2005). “Frequency discrimination and literacy skills in children with mild to moderate sensorineural hearing loss,” J. Speech Lang. Hear. Res. 48(5), 1187–1203.
24. Halliday, L. F., and Bishop, D. V. M. (2006). “Is poor frequency modulation detection linked to literacy problems? A comparison of specific reading disability and mild to moderate sensorineural hearing loss,” Brain Lang. 97, 200–213.
25. Halliday, L. F., Tuomainen, O., and Rosen, S. (2017a). “Language development and impairment in children with mild to moderate sensorineural hearing loss,” J. Speech Lang. Hear. Res. 60, 1551–1567.
26. Halliday, L. F., Tuomainen, O., and Rosen, S. (2017b). “Auditory processing deficits are sometimes necessary and sometimes sufficient for language difficulties in children: Evidence from mild to moderate sensorineural hearing loss,” Cognition 166, 139–151.
27. Hämäläinen, J. A., Leppänen, P. H. T., Guttorm, T. K., and Lyytinen, H. (2008). “Event-related potentials to pitch and rise time change in children with reading disabilities and typically reading children,” Clin. Neurophysiol. 119, 100–115.
28. Henry, B. A., Turner, C. W., and Behrens, A. (2005). “Spectral peak resolution and speech recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners,” J. Acoust. Soc. Am. 118, 1111–1121.
29. Henry, K. S., and Heinz, M. G. (2013). “Effects of sensorineural hearing loss on temporal coding of narrowband and broadband signals in the auditory periphery,” Hear. Res. 303, 39–47.
30. Hopkins, K., and Moore, B. C. (2010). “The importance of temporal fine structure information in speech at different spectral regions for normal-hearing and hearing-impaired subjects,” J. Acoust. Soc. Am. 127, 1595–1608.
31. Hopkins, K., and Moore, B. C. J. (2011). “The effects of age and cochlear hearing loss on temporal fine structure sensitivity, frequency selectivity, and speech reception in noise,” J. Acoust. Soc. Am. 130, 334–349.
32. Hopkins, K., Moore, B. C., and Stone, M. A. (2008). “Effects of moderate cochlear hearing loss on the ability to benefit from temporal fine structure information in speech,” J. Acoust. Soc. Am. 123, 1140–1153.
33. Johannesen, P. T., Pérez-González, P., Kalluri, S., Blanco, J. L., and Lopez-Poveda, E. A. (2016). “The influence of cochlear mechanical dysfunction, temporal processing deficits, and age on the intelligibility of audible speech in noise for hearing-impaired listeners,” Trends Hear. 20, 2331216516641055.
34. Kalashnikova, M., Goswami, U., and Burnham, D. (2019). “Sensitivity to amplitude envelope rise time in infancy and vocabulary development at 3 years: A significant relationship,” Dev. Sci. 22(6), e12836.
35. Klein, K. E., Walker, E. A., Kirby, B., and McCreery, R. W. (2017). “Vocabulary facilitates speech perception in children with hearing aids,” J. Speech Lang. Hear. Res. 60, 2281–2296.
36. Korkman, M., Kirk, U., and Kemp, S. (1998). A Developmental NEuroPSYchological Assessment (NEPSY) (Psychol. Corp., New York).
37. Levitt, H. (1971). “Transformed up-down methods in psychoacoustics,” J. Acoust. Soc. Am. 49, 467–477.
38. Lorenzi, C., Gilbert, G., Carn, H., Garnier, S., and Moore, B. C. J. (2006). “Speech perception problems of the hearing impaired reflect inability to use temporal fine structure,” Proc. Natl. Acad. Sci. U.S.A. 103, 18866–18869.
39. McCreery, R. W., Walker, E., Spratford, M., Lewis, D., and Brennan, M. (2019). “Auditory, cognitive, and linguistic factors predict speech recognition in adverse listening conditions for children with hearing loss,” Front. Neurosci. 13, 1093-1–1093-12.
40. McCreery, R. W., Walker, E. A., Spratford, M., Oleson, J., Bentler, R., Holte, L., and Roush, P. (2015). “Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing,” Ear Hear. 36, 60S–75S.
41. Mehraei, G., Gallun, F. J., Leek, M. R., and Bernstein, J. G. (2014). “Spectrotemporal modulation sensitivity for hearing-impaired listeners: Dependence on carrier center frequency and the relationship to speech intelligibility,” J. Acoust. Soc. Am. 136, 301–316.
42. Moore, B. C. (2014). Auditory Processing of Temporal Fine Structure: Effects of Age and Hearing Loss (World Scientific, Singapore).
43. Moore, B. C., and Ernst, S. M. (2012). “Frequency difference limens at high frequencies: Evidence for a transition from a temporal to a place code,” J. Acoust. Soc. Am. 132, 1542–1547.
44. Moore, B. C., and Gockel, H. E. (2011). “Resolvability of components in complex tones and implications for theories of pitch perception,” Hear. Res. 276, 88–97.
45. Moore, B. C. J. (2007). Cochlear Hearing Loss: Physiological, Psychological and Technical Issues (Wiley, Chichester, United Kingdom).
46. Oxenham, A. J., Micheyl, C., and Keebler, M. V. (2009). “Can temporal fine structure represent the fundamental frequency of unresolved harmonics?,” J. Acoust. Soc. Am. 125, 2189–2199.
47. Papakonstantinou, A., Strelcyk, O., and Dau, T. (2011). “Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise,” Hear. Res. 280, 30–37.
48. Peters, R. W., and Moore, B. C. J. (1992). “Auditory filter shapes at low center frequencies in young and elderly hearing-impaired subjects,” J. Acoust. Soc. Am. 91, 256–266.
49. Poeppel, D., Idsardi, W. J., and van Wassenhove, V. (2008). “Speech perception at the interface of neurobiology and linguistics,” Philos. Trans. R. Soc. Lond. B. Biol. Sci. 363, 1071–1086.
50. Power, A. J., Colling, L. J., Mead, N., Barnes, L., and Goswami, U. (2016). “Neural encoding of the speech envelope by children with developmental dyslexia,” Brain Lang. 160, 1–10.
51. R Core Team (2019). “R: A language and environment for statistical computing,” R Found. Stat. Comput., available at https://www.R-project.org/ (Last viewed 29 October 2020).
52. Rance, G., McKay, C., and Grayden, D. (2004). “Perceptual characterization of children with auditory neuropathy,” Ear Hear. 25, 34–46.
53. Ricketts, J., Lervåg, A., Dawson, N., Taylor, L. A., and Hulme, C. (2020). “Reading and oral vocabulary development in early adolescence,” Sci. Stud. Read. 24, 380–396.
54. Rosen, S. (1992). “Temporal information in speech: Acoustic, auditory and linguistic aspects,” Philos. Trans. R. Soc. Lond. B. Biol. Sci. 336, 367–373.
55. Rosen, S. (2003). “Auditory processing in dyslexia and specific language impairment: Is there a deficit? What is its nature? Does it explain anything?,” J. Phon. 31, 509–527.
56. RStudio Team (2019). “RStudio: Integrated Development Environment for R,” RStudio Inc., available at http://www.rstudio.com/ (Last viewed 29 October 2020).
57. Semel, E., Wiig, E. H., and Secord, W. A. (2006). Clinical Evaluation of Language Fundamentals, 4th ed., UK Standardisation (CELF-4 UK) (Psychol. Corp., San Antonio, TX; Harcourt Assess. Co.).
58. Shannon, R. V., Zeng, F. G., Kamath, V., Wygonski, J., and Ekelid, M. (1995). “Speech recognition with primarily temporal cues,” Science 270, 303–304.
59. Smith, Z. M., Delgutte, B., and Oxenham, A. J. (2002). “Chimaeric sounds reveal dichotomies in auditory perception,” Nature 416, 87–90.
60. Summers, V., Makashay, M. J., Theodoroff, S. M., and Leek, M. R. (2013). “Suprathreshold auditory processing and speech perception in noise: Hearing-impaired and normal-hearing listeners,” J. Am. Acad. Audiol. 24, 274–292.
61. Swaminathan, J., and Heinz, M. G. (2012). “Psychophysiological analyses demonstrate the importance of neural envelope coding for speech perception in noise,” J. Neurosci. 32, 1747–1756.
62. Ter Keurs, M., Festen, J. M., and Plomp, R. (1993). “Limited resolution of spectral contrast and hearing loss for speech in noise,” J. Acoust. Soc. Am. 94, 1307–1314.
63. Tomblin, J. B., Harrison, M., Ambrose, S. E., Walker, E. A., Oleson, J. J., and Moeller, M. P. (2015). “Language outcomes in young children with mild to severe hearing loss,” Ear Hear. 36, 76S–91S.
64. Tomblin, J. B., Oleson, J., Ambrose, S. E., Walker, E. A., McCreery, R. W., and Moeller, M. P. (2020). “Aided hearing moderates the academic outcomes of children with mild to severe hearing loss,” Ear Hear. 41, 775–789.
65. Tomblin, J. B., Oleson, J. J., Ambrose, S. E., Walker, E., and Moeller, M. P. (2014). “The influence of hearing aids on the speech and language development of children with hearing loss,” JAMA Otolaryngol. Head Neck Surg. 140, 403–409.
66. Tomblin, J. B., Spencer, L., Flock, S., Tyler, R., and Gantz, B. (1999). “A comparison of language achievement in children with cochlear implants and children using hearing aids,” J. Speech Lang. Hear. Res. 42, 497–511.
67. Tsao, F.-M., Liu, H.-M., and Kuhl, P. K. (2004). “Speech perception in infancy predicts language development in the second year of life: A longitudinal study,” Child Dev. 75, 1067–1084.
68. Wake, M., Hughes, E. K., Collins, C. M., and Poulakis, Z. (2004). “Parent-reported health-related quality of life in children with congenital hearing loss: A population study,” Ambul. Pediatr. 4, 411–417.
69. Wake, M., Poulakis, Z., Hughes, E. K., Carey-Sargeant, C., and Rickards, F. W. (2005). “Hearing impairment: A population study of age at diagnosis, severity, and language outcomes at 7–8 years,” Arch. Dis. Child. 90, 238–244.
70. Walker, E., Sapp, C., Oleson, J., and McCreery, R. W. (2019). “Longitudinal speech recognition in noise in children: Effects of hearing status and vocabulary,” Front. Psychol. 10, 2421-1–2421-12.
71. Walker, E. A., Sapp, C., Dallapiazza, M., Spratford, M., McCreery, R. W., and Oleson, J. J. (2020). “Language and reading outcomes in fourth-grade children with mild hearing loss compared to age-matched hearing peers,” Lang. Speech Hear. Serv. Sch. 51, 17–28.
72. Wallaert, N., Moore, B. C., Ewert, S. D., and Lorenzi, C. (2017). “Sensorineural hearing loss enhances auditory sensitivity and temporal integration for amplitude modulation,” J. Acoust. Soc. Am. 141, 971–980.
73. Wechsler, D. (1999). Abbreviated Scale of Intelligence (Psychol. Corp., San Antonio, TX).
74. Wechsler, D. (2005). Wechsler Individual Achievement Test (WIAT-II UK) (Pearson Assessment, Harcourt Assessment, London, UK).
75. Wickham, H., Chang, W., Henry, L., Pedersen, T. L., Takahashi, K., Wilke, C., Woo, K., Yutani, H., Dunnington, D., and RStudio (2019). “ggplot2: Create elegant data visualisations using the grammar of graphics,” R package version 3.2.1, available at https://cran.r-project.org/web/packages/ggplot2/index.html (Last viewed 22 October 2020).
76. Wier, C. C., Jesteadt, W., and Green, D. M. (1977). “Frequency discrimination as a function of frequency and sensation level,” J. Acoust. Soc. Am. 61, 178–184.
77. Xu, Y., Chen, M., LaFaire, P., Tan, X., and Richter, C.-P. (2017). “Distorting temporal fine structure by phase shifting and its effects on speech intelligibility and neural phase locking,” Sci. Rep. 7, 1–9.
78. Yoshinaga-Itano, C., Sedey, A. L., Coulter, D. K., and Mehl, A. L. (1998). “Language of early- and later-identified children with hearing loss,” Pediatrics 102, 1161–1171.
79. Zeng, F.-G., Nie, K., Liu, S., Stickney, G., Del Rio, E., Kong, Y.-Y., and Chen, H. (2004). “On the dichotomy in auditory perception between temporal envelope and fine structure cues,” J. Acoust. Soc. Am. 116, 1351–1354.
80. Ziegler, J. C., Pech-Georgel, C., George, F., Alario, F. X., and Lorenzi, C. (2005). “Deficits in speech perception predict language learning impairment,” Proc. Natl. Acad. Sci. U.S.A. 102, 14110–14115.
