Until recently, most hearing conservation programs, including those in the military, have used permanent shifts in the pure-tone audiometric threshold as the gold standard for measuring hearing impairment in noise-exposed populations. However, recent results from animal studies suggest that high-level noise exposures can cause the permanent destruction of synapses between the inner hair cells and auditory nerve fibers, even in cases where pure-tone audiometric thresholds eventually return to their normal pre-exposure baselines. This has created a dilemma for researchers, who are now increasingly interested in studying the long-term effects that temporary hearing shifts might have on hearing function, but are also concerned about the ethics of exposing human listeners to high levels of noise for research purposes. One method that remains viable for studying the effects of high-level noise exposures on human listeners, or for evaluating the efficacy of interventions designed to prevent noise-related inner ear damage, is to identify individuals in occupations with unavoidable noise exposures and measure hearing before and as soon as possible after exposure. This paper discusses some of the important factors to be considered in studies that attempt to measure acute hearing changes in noise-exposed military populations.
I. INTRODUCTION
Noise-induced hearing loss is a significant public health concern in all modern industrialized societies, but there are few occupations in which hazardous noise exposures are more numerous, or more likely to result in injury, than those commonly encountered in military service. Indeed, noise-induced hearing loss and tinnitus continue to be the most prevalent service-connected disabilities experienced by veterans of military service in the United States (U.S. Department of Veterans Affairs, 2018). As has been the case in other areas of hearing conservation, the gold standard measure for evaluating noise-induced hearing damage in the military has historically been an annual air conduction audiogram used to identify and track permanent threshold shifts (PTSs) caused by repeated or extreme exposure to noise. Individuals with moderate to severe PTSs generally have difficulty understanding quiet speech, and they may require hearing aids to partially restore their hearing ability in everyday listening environments. PTSs are also commonly associated with increased difficulty understanding speech in noisy environments; even when these shifts are mild to moderate, they can have a negative impact on the operational effectiveness of military personnel (Sheffield et al., 2016). When PTSs become more severe, they can trigger an auditory fitness-for-duty examination that may potentially lead to reassignment or medical retirement.
Current military hearing conservation programs are not well structured to track temporary threshold shifts (TTSs), which are short-term changes in hearing thresholds that occur immediately after exposure to continuous or impulse noise and recover over a period of minutes, hours, or, in the most extreme cases, days (Ryan et al., 2016). The Department of Defense (DoD) defines a significant threshold shift (STS) as a change in hearing threshold, relative to the current reference audiogram, averaging 10 dB or more across 2000, 3000, and 4000 Hz in either ear, as measured by pure-tone air conduction audiometry. Current military regulations require service members whose audiograms demonstrate an STS to be re-tested after 14 noise-free hours to determine whether the measured threshold shift is temporary (TTS) or permanent (PTS; Department of Defense, 2010; U.S. Department of the Army, 2015; U.S. Air Force, 2016). Hearing thresholds that recover to normal are documented but are not officially considered to be negative events in an individual's hearing history. Because TTSs typically recover fairly quickly, the sparsely sampled nature of the annual audiogram makes it unlikely that TTSs will be detected in individuals who experience them infrequently. Over the course of their careers, service members might experience numerous TTSs that are never detected. As a result, there may be no opportunity to identify the responsible hazard(s), track TTS pattern or severity, counsel or assist service members with better protection strategies, or apply therapeutic measures within the theorized window of opportunity that closes when a temporary change in hearing function becomes permanent (Campbell and Hammill, 2018). Note that the TTS, STS, and PTS metrics used in current military hearing conservation tests only identify changes in pure-tone thresholds that total 30 dB or more (an average of 10 dB) across the three test frequencies (2, 3, and 4 kHz) in either ear. This is an unfortunate limitation because hearing assessment technology is capable of monitoring and measuring much smaller increments of pure-tone threshold change. Other audiologic tests, such as otoacoustic emissions (OAEs) and speech-in-noise tests, could also be used to examine the effects of noise and blast exposure.
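To make the STS criterion concrete, the short Python sketch below compares a current audiogram with a reference audiogram and flags a shift when the average change at 2000, 3000, and 4000 Hz reaches 10 dB in either ear. The function and data-structure names are illustrative assumptions, not part of any DoD system.

```python
# Minimal sketch of the DoD significant threshold shift (STS) rule described
# above: an average shift of 10 dB or more across 2000, 3000, and 4000 Hz in
# either ear, relative to the reference audiogram. Names are illustrative.

STS_FREQS_HZ = (2000, 3000, 4000)
STS_CRITERION_DB = 10.0


def average_shift(reference: dict, current: dict, freqs=STS_FREQS_HZ) -> float:
    """Mean threshold shift (dB) across the STS frequencies for one ear.

    `reference` and `current` map frequency (Hz) to threshold (dB HL);
    positive shifts indicate poorer hearing than the reference audiogram.
    """
    return sum(current[f] - reference[f] for f in freqs) / len(freqs)


def has_sts(reference_by_ear: dict, current_by_ear: dict) -> bool:
    """True if either ear meets the 10-dB average-shift criterion."""
    return any(
        average_shift(reference_by_ear[ear], current_by_ear[ear]) >= STS_CRITERION_DB
        for ear in ("left", "right")
    )


# Example: shifts of 15, 10, and 10 dB at 2, 3, and 4 kHz in the right ear
# average ~11.7 dB, so this hypothetical audiogram pair is flagged as an STS.
reference = {"left": {2000: 5, 3000: 10, 4000: 10},
             "right": {2000: 5, 3000: 10, 4000: 15}}
current = {"left": {2000: 5, 3000: 10, 4000: 15},
           "right": {2000: 20, 3000: 20, 4000: 25}}
print(has_sts(reference, current))  # True
```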
Although logic dictates that there is some relationship between TTS and PTS, the link is not well understood. TTS appears to be associated with edema, metabolic fatigue, and biochemical reactions, while PTS follows from structural damage to the hair cells or even complete loss of outer hair cells, detachment of portions of the organ of Corti, and other mechanical damage (Clifford and Rogers, 2009; St Onge et al., 2011; Cho et al., 2013; Okpala, 2011). TTS has been studied fairly extensively in human listeners, at least in part because TTS studies can presumably be conducted safely and prospectively in a limited time period under controlled laboratory conditions. By contrast, PTS studies are limited to large-scale observational longitudinal studies in which the noise exposures are those found in uncontrolled occupational environments. Major reasons for studying TTS include the following:
- TTS may be a proxy for predicting the probability of PTS. Two of the basic assumptions used by Kryter (1966) to develop the original National Research Council Committee on Hearing, Bioacoustics, and Biomechanics (CHABA) damage risk criteria for noise were that (a) all exposures that produce the same TTS at 2 min postexposure (TTS2) are equally hazardous, and (b) the PTS produced by 10 yr of daily exposure to the same pattern of noise is approximately equal to the TTS2 produced after an 8-h exposure (Melnick, 1991). More recent studies have shown that the TTS experienced by an individual listener for a given noise exposure is not a reliable predictor of that individual's susceptibility to PTS (Ward, 1973), but that groups of listeners who are, on average, more susceptible to TTS may also be more susceptible to PTS. Until recently, this relationship between TTS and PTS within groups of listeners was used to justify the use of TTS studies to assess noise risk criteria, particularly for impulse noise (Melnick, 1991; Chan et al., 2016). TTS studies have also been used to evaluate possible drug interventions in human populations on the premise that a drug agent that reduces TTS might also have an effect on PTS (Kil et al., 2017).
- TTS is an operational risk in military environments. In industrial or recreational environments, it is usually possible to manage a TTS by removing the individual from the noisy environment until hearing sensitivity recovers. However, in military environments, a TTS often occurs at the beginning of an engagement, for example, when a service member comes under enemy fire (e.g., mortar, mine, rocket, etc.) or first starts firing a weapon. This can interfere with communication and situational awareness during the combat engagement. Removal from continuing noise hazards is often not immediately possible, if it is possible at all. This situation potentially exacerbates the noise insult and prevents timely assessment.
- TTS may be an indicator of cochlear synaptopathy. Historically, TTSs that fully returned to the pre-noise-exposure baseline were assumed to have no permanent impact on long-term hearing ability in human listeners. However, laboratory results in multiple animal species have shown that permanent loss of cochlear synapses can occur as a result of exposure to continuous (Kujawa and Liberman, 2009) and impulsive (Hickman et al., 2018) noise that induces TTS (for recent reviews, see Tepe et al., 2017; Le Prell, 2018). Subsequent studies of synaptopathy-related hearing performance deficits in civilian subjects have had mixed results (Liberman et al., 2016; Grinn et al., 2017; Guest et al., 2018). However, many studies have identified functional hearing deficits in blast-exposed service members and veterans with normal audiometric thresholds (Gallun et al., 2012; Kubli et al., 2018). This could be the result of some form of synaptopathy specific to military noise exposure such as blast; relatively few civilians are exposed to comparable blasts, e.g., those related to terrorist attacks (Remenschneider et al., 2014).
Although no definitive link has yet been established between TTS in humans and long-term reductions in functional hearing ability as predicted by findings of cochlear synaptopathy in animal models, the potential for such a link has had a chilling effect on human studies involving exposure to high-level noise. Indeed, the possible existence of TTS-related cochlear synaptopathy in humans has resulted in a dilemma for scientists who study hearing in humans. On the one hand, it has greatly increased interest in the study of noise-related changes in human hearing. On the other hand, it has greatly increased the caution that scientists and institutional review boards (IRBs) must exercise when they evaluate the potential risks of such research.
As a consequence of this dilemma, one of the few ethical pathways remaining for the study of high-level noise effects in human subjects is to identify subpopulations that are unavoidably exposed by occupation or choice (recreational activity), and assess for changes as soon as possible after unavoidable exposure. This can be particularly challenging when the hearing evaluation must be conducted without interfering in any way with occupational performance.
One obvious subpopulation of interest is military personnel who are unavoidably exposed to noise levels far beyond what would be tolerated in commercial or recreational settings. Military members can be exposed to hazardous noise (steady-state or blast/impulse) during their daily duties, training events, and deployments. Military aircraft and vehicles can produce continuous noise signatures as loud as 110–150 dB sound pressure level (SPL). In some military settings, such as aircraft carrier decks, it is impossible for ground crew members to move far enough away from hazardous noise sources to reach what would be considered a safe level of exposure (Yankaskas et al., 2017). Blasts from shoulder-fired missiles are estimated to reach peak levels of approximately 180 dB, comparable to the blast noise emitted by some improvised explosive devices (IEDs; Wells et al., 2015; Rajguru, 2013). Many weapons, ground vehicles, ships, and aircraft produce hazardous noise as a byproduct of the energy they must release to meet their objectives (power, speed, lethality). When noise control cannot be applied without reducing mission or device effectiveness, hearing protection devices (HPDs) are the only line of defense service members have against noise-related hearing damage.
While it may be theoretically possible to use the noise-exposed military population for research to address high-level noise exposure in humans, it is not easy. There are many factors to be considered and many challenges to be faced. This paper discusses some of these considerations and challenges, and describes the methods we have developed for the Characterization of Acute or Short-term-acquired Military Population Auditory Shifts (CHASMPAS) study. The primary aim of this study is to use advanced boothless audiometry and advanced noise/blast measurement methods to characterize hearing performance and hearing acuity in military populations before and after exposure to measured noise hazards. The results of this study will be used to (1) add to the knowledge base for the refinement of acoustic injury standards; (2) inform recommendations about hearing protection strategies for at-risk populations; (3) identify populations most likely to obtain measurable benefit from enhanced prevention strategies, including the use of prophylactic pharmaceuticals or rescue agents; and (4) support the development of improved methods for monitoring small changes in the hearing of at-risk populations.
II. POPULATION SELECTION
The study of noise hazards within active military training environments is a complex endeavor even when human subjects are not involved. One must first differentiate among, and select from, the types of noise involved. Many military specialties are exposed primarily to steady-state machinery noise, similar to what might be encountered in an industrial environment. These noise exposures are fairly easy to characterize through traditional noise dosimetry, although there is some evidence that attention should be given to the statistical properties of the noise (in particular, kurtosis), which might influence the hazard levels associated with different noises that have the same time-weighted average dBA levels (Davis et al., 2009). In other cases, the noise doses may be related to long-term exposures to repetitive impulse noise, as might be the case on a firing range where automatic weapons are used over an extended time period. In some of the most potentially dangerous cases, the noise dose might be dominated by a few discrete exposures to a high-impact weapon such as a mortar or a shoulder-fired missile. One also has to recognize that many service members might be exposed to a mixture of all three kinds of noise, considering that some missions could involve deployment in a helicopter followed by an exchange of small-arms fire and then the use of a shoulder-fired weapon. Because different outcomes are expected for each type of noise (Bielefeld et al., 2007), CHASMPAS aims to characterize as wide a variety of noise environments as possible, moving from one to the next as unique, at-risk populations are identified, granted access, and approved for study by the DoD IRB.
Another complication inherent in characterizing the noise doses experienced by service members who are exposed as part of their routine training or other duties is accounting for the use of hearing protection. Hearing protection is generally required on training ranges with loud noise exposures, but compliance is rarely 100%, and it can be hard to verify unless the use of hearing protection is actually observed during the exposure period. Double hearing protection can be even harder to verify because the use of plugs under earmuffs cannot be confirmed by visual inspection. Even in cases where hearing protection is properly fit, there are large variations in individual personal attenuation ratings that could introduce significant differences between the noise exposure measured by an external sound measurement device and the exposure actually experienced by the participant.
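To illustrate how much the spread in personal attenuation ratings can matter, the sketch below subtracts an assumed personal attenuation rating (PAR) from a measured free-field level and converts the result into an allowable exposure duration under an assumed 85-dBA, 8-h criterion with a 3-dB exchange rate. All numbers and names are illustrative assumptions, not CHASMPAS parameters.

```python
# Illustrative sketch only: effect of individual variation in personal
# attenuation ratings (PARs) on the exposure reaching the protected ear.
# Allowable duration assumes a criterion of 85 dBA for 8 h with a 3-dB
# exchange rate; these values are assumptions, not CHASMPAS parameters.

CRITERION_LEVEL_DBA = 85.0
CRITERION_DURATION_H = 8.0
EXCHANGE_RATE_DB = 3.0


def protected_level(free_field_dba: float, par_db: float) -> float:
    """Estimated level under the protector: free-field level minus PAR."""
    return free_field_dba - par_db


def allowable_hours(level_dba: float) -> float:
    """Allowable exposure duration at a constant level (3-dB exchange rate)."""
    return CRITERION_DURATION_H / 2 ** ((level_dba - CRITERION_LEVEL_DBA) / EXCHANGE_RATE_DB)


# Hypothetical 110-dBA environment with a plausible spread of individual PARs:
# a 20-dB difference in PAR changes the estimated allowable duration ~100-fold.
free_field = 110.0
for par in (10.0, 20.0, 30.0):
    level = protected_level(free_field, par)
    print(f"PAR {par:4.0f} dB -> {level:.0f} dBA at the ear, "
          f"allowable ~{allowable_hours(level):.2f} h")
```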
Of note, military population choices must be made in discussion with operational leaders. Access may be logistically impossible or denied to scientifically ideal populations. Study design and analytic planning may be further complicated in settings where it is impossible to isolate a single noise source or type, as is the case in many shipboard environments where multiple noise hazards (e.g., aircraft launches from 110 to 150 dB; Yankaskas et al., 2017) may be experienced in a single 24-h period by some, none, or all members of a study population. Further still, all cohort selections remain subject to changes in operational requirements. A study population may be identified, granted access, and logistically planned or underway only to be faced with unpredictable operational requirements that overtake research commitments and priorities; data collection may be interrupted or canceled with little or no warning. Additional considerations for noise hazard and therapeutic trial design are described in Campbell (2016).
III. HEARING PROTECTION
Military operations and equipment are generally subject to technical constraints that make it impossible to modify or reduce the noise dose experienced by service members without a significant reduction in operational capability. HPDs may therefore be the only line of defense against noise damage. HPDs are required in most noise-hazardous military environments, but regulations are not sufficient to prevent hearing loss. Two Finnish studies showed that even with changes to hearing protection regulations, there was less impact than expected on the incidence of hearing loss and tinnitus (Mrena et al., 2008; Mrena et al., 2009). In one civilian study, it was estimated that even when workers reported "always" wearing their hearing protection, their actual usage occurred only 33% of the time (Arezes and Miguel, 2005). Service members may feel that HPDs reduce their situational awareness and hamper their ability to detect danger (Dougherty et al., 2013; Killion et al., 2011). In a quiet environment, a simple application of the inverse square law suggests that a hearing protector with 20 dB of attenuation will reduce the distance at which an approaching sound source can first be detected by a factor of 10, which could be a significant operational risk for a service member at a listening post or on a foot patrol.
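The inverse-square-law argument can be made concrete with a few lines of Python. Under idealized spherical spreading, level falls by 20 dB per tenfold increase in distance, so raising the detection threshold by A dB shrinks the distance at which a source first becomes audible by a factor of 10^(A/20). The sketch below assumes free-field propagation with no other losses.

```python
# Idealized free-field sketch: under spherical spreading, level falls 20 dB per
# tenfold increase in distance, so raising the detection threshold by A dB
# shrinks the distance at which a source first becomes audible by 10**(A / 20).
# Real environments (reflections, terrain, background noise) will differ.

def detection_distance_ratio(attenuation_db: float) -> float:
    """Factor by which the maximum detection distance shrinks."""
    return 10 ** (attenuation_db / 20.0)


def protected_detection_distance(unprotected_distance_m: float,
                                 attenuation_db: float) -> float:
    return unprotected_distance_m / detection_distance_ratio(attenuation_db)


# Example: a source just detectable at 100 m with unoccluded ears would be
# detectable only within ~32 m with 10 dB of attenuation and ~10 m with 20 dB.
for attenuation in (10.0, 20.0):
    print(attenuation, round(protected_detection_distance(100.0, attenuation), 1))
```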
To be effective, most HPDs depend on proper placement, and they are not always used appropriately. Training on HPD usage can improve protection by 10 dB, but even with proper use of HPDs, noise exposure in combat situations can exceed current HPD protective capabilities (Rajguru, 2013). For example, the Combat Arms Earplugs (CAEPs) used by the U.S. Army have a mode specifically designed to protect against impulse noise; however, in this mode, the device has a noise reduction rating of 25 dB, which is inadequate to protect against an IED or other ordnance exposures upward of 180 dB. The pressure waves of a blast from an IED or military ordnance are strong enough that some tympanic membrane perforations occur even with HPD use (Dougherty et al., 2013). Unfortunately, research in this area is scarce; more data are needed to precisely characterize the effects of high impulse/blast exposures (as well as steady-state noise exposures) on auditory function in the presence of HPDs.
Auditory researchers in the military community might be aware of alternative HPD strategies that could protect service members better than their existing equipment and/or techniques. However, it is also extremely important for service members to "train as they fight," and access to military personnel is often contingent on researchers' agreement not to interfere with training or the operational mission. Gross negligence and extreme policy violations that place service members' hearing at risk obviously should be addressed. In general, however, the only feasible strategy is to carefully document the hearing protection being used by each participant without attempting to interfere with or modify operational procedures as part of the experiment. It is also worth noting that compliance with the use of hearing protection may be higher than normal when hearing researchers are present and performing measurements.
IV. NOISE AND BLAST MEASUREMENT
A key component of any study of the effects of noise exposure on humans is a high-fidelity measurement of the actual noise exposure experienced by the listener. Characterization of the noise hazard through dosimetry should be done with the highest possible level of fidelity given the operational constraints. Smalt et al. (2017) discussed the trade-off between ease of measurement and accuracy of the dose estimate, specifically in tactical and military environments. Though challenging to collect, in-ear measurements might provide the most accurate exposure estimate because they include the effects of hearing protection, as well as the distance from and orientation relative to the noise source. Conversely, free-field measurements with stationary microphones allow for easier setup and data collection but only characterize the noise environment in general terms (as opposed to subject-specific exposures).
As the preferred method of exposure monitoring, personal dosimetry should be employed whenever possible without disrupting operational tempo. Dosimeter selection will depend on compatibility with other personal protective equipment, expected noise levels, and types of noise. Figure 1 depicts SPLs along a continuum from continuous noise to blast noise, as well as the range of appropriate measurement devices. For environments that are dominated by continuous noise, subjects could be fitted with a commercial noise dosimeter, such as an Etymotic ER-200dw9 [Fig. 1(A); Etymotic Research Inc., Elk Grove Village, IL] or a Svantek SV 104 [Fig. 1(B); Svantek Sp. z o.o., Warsaw, Poland], which could be worn for the duration of the noise exposure each day as needed. These devices are capable of capturing readings multiple times per second and have a dynamic range covering approximately 50–140 dB SPL. For environments containing impulse noise (e.g., rifle ranges), it is possible to make use of a portable audio recorder paired with an external, body-worn high SPL microphone and an in-ear microphone integrated with a hearing protector [Fig. 1(E); Davis et al., 2018]. If the in-ear component does not meet the hearing protection needs of the subjects in the study setting, an external microphone could be used alone instead to measure exposure near the ear. Subjects with anticipated exposure to blasts with peak levels above 175 dB can be outfitted with blast gauges [Fig. 1(F), Blast Gauge® Generation 7 or comparable model, BlackBox Biometrics, Inc., Rochester, NY] placed on the helmet, chest, and/or back. Multiple blast gauges are generally recommended because recorded exposure levels can vary significantly across different gauge placements.
(Color online) Noise dosimetry spectrum. Different types of equipment are used for various SPL measurements ranging from industrial/continuous noise (A),(B), to impulse noise (C)–(E), to blast (F),(G).
If personal dosimeters cannot be fielded, stationary microphones can be used to characterize noise environments [Fig. 1(C)]. SPLs over the area of interest can be sampled by multiple, strategically located microphones, with portable GPS units recording their positions. Microphone specifications, e.g., bandwidth and dynamic range, will be driven by the expected noise characteristics. Whenever possible, it is preferable to collect these spatial data in addition to personal dosimetry, enabling potential generalizations to similar exposures where only traditional, routine industrial hygiene noise measurements have been collected.
The minimum requirement for noise exposure characterization can be met with previously recorded site-specific noise measurements, e.g., those conducted by local industrial hygiene personnel. For continuous noise, the dose can be estimated from the average SPL and the exposure time. For impulse noise, which is often generated by weapon systems, the dose can be computed from an exposure estimate for a single round and the number of rounds fired. In this case, the added exposure from other noise sources (e.g., a nearby weapon being fired) would not be included in the estimate, reducing the reliability of the resulting dose-response curve estimates.
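The two fallback dose estimates described above can be sketched as follows: the continuous case converts an average level and duration into a percentage of a criterion dose, and the impulse case scales a single-round exposure estimate by the number of rounds on an energy basis. The criterion level, exchange rate, and example values are assumptions for illustration; applicable military and occupational criteria differ.

```python
import math

# Hedged sketch of the two fallback dose estimates described above. The
# continuous-noise criterion (85 dBA for 8 h, 3-dB exchange rate) and all
# example values are assumptions for illustration; applicable criteria vary.

CRITERION_LEVEL_DBA = 85.0
CRITERION_DURATION_H = 8.0
EXCHANGE_RATE_DB = 3.0


def continuous_dose_percent(average_level_dba: float, duration_h: float) -> float:
    """Percent of the criterion dose for a steady average level held for duration_h."""
    allowable_h = CRITERION_DURATION_H / 2 ** (
        (average_level_dba - CRITERION_LEVEL_DBA) / EXCHANGE_RATE_DB)
    return 100.0 * duration_h / allowable_h


def impulse_exposure_sel_db(single_round_sel_db: float, rounds_fired: int) -> float:
    """Energy sum of N identical impulses: single-round SEL + 10*log10(N) dB.

    As noted in the text, this ignores added exposure from other nearby sources.
    """
    return single_round_sel_db + 10.0 * math.log10(rounds_fired)


# Examples (illustrative): 4 h at an average of 95 dBA is ~500% of the
# criterion dose; 100 identical rounds add 20 dB to the single-round exposure.
print(continuous_dose_percent(95.0, 4.0))
print(impulse_exposure_sel_db(140.0, 100) - 140.0)
```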
One example of a data collection in which multiple types of sensors (blast gauges and impulse noise recorders) were used is shown in Fig. 2 for the characterization of noise exposure from the M777 weapon system. In this particular data collection, on-body recordings were not possible, but approximate exposure levels could be extrapolated from the known position of the subject relative to the weapon system and the sensors.
(Color online) (Left) Recording of impulse noise from the M777 weapon system. Piezo-based blast gauges are used very near the source, while a 1/8 in. microphone is used further away. Measurements in this case are taken at multiple locations where individuals might be in the environment rather than on-body or as part of the hearing protection system. Photo, recording setup, and collection courtesy of Josh Duckworth and Richard Bauman of the Uniformed Services University of the Health Sciences (USUHS). (Right) An example of the M777 pressure waveform as recorded by the impulse noise recorder, where the peak pressure is 168.3 dB at a distance of nearly 20 ft.
V. TEMPORAL WINDOW FOR TESTING
One of the most critical issues for hearing protection in the military is the need for noise hazard criteria that identify the loudest noises to which service members can be exposed without incurring hearing damage. To develop such standards, it is necessary to gather pre- and postexposure data. This is a challenge because, except in the most extreme cases, short-term hearing changes caused by exposure to blast or impulse noise may resolve quickly. By the time an exposed service member exits the training environment, removes his or her protective equipment, puts on a pair of audiometric headphones, and completes an auditory threshold test, any temporary change in hearing thresholds may have already resolved.
The short postexposure time window imposed by the rate of TTS recovery is a problem for any study of the hazardous effects of noise. However, there is evidence to suggest that it may be a particularly challenging problem in cases where the noise exposure is caused by blast or impulse noise rather than steady-state noise. Figure 3, which is adapted from Mills (1979) and Chan et al. (2016), shows the growth and decay of TTSs in listeners exposed to continuous and impulsive (blast) noise. The continuous TTS curve (black line) shows that listeners who are exposed to continuous noise experience a TTS that grows exponentially up to an asymptotic threshold shift (ATS, which is determined by the level of the noise exposure); once the noise exposure ends, the TTS decays exponentially until thresholds return to their pre-exposure levels. By combining data from a number of prior studies, Mills was able to determine that the time constant for the onset of TTS was 7.1 h and the time constant for the postexposure decay of TTS was 2.1 h. Thus, for a long-term continuous noise exposure resulting in a 20 dB TTS, one would expect the TTS to decay to 19.6 dB in less than 1 min postexposure, 18.6 dB 30 min postexposure, and 8.6 dB 6 h postexposure.
(Color online) Exponential growth and decay of TTS to continuous noise (black line) and blast-related impulsive noise (blue, red, and yellow lines). Blast or impulsive TTS shows a much quicker initial decay than continuous noise. Continuous TTS decay model replotted from Mills (1979) and blast-decay model replotted from Chan et al. (2016).
The TTS decay model developed by Chan et al. (2016), based on human and animal data showing hearing recovery after exposure to impulse noise, shows several notable differences from Mills' continuous noise predictions. First, the rate of recovery for impulse noise was directly dependent on the amount of TTS. Larger post-impulse noise TTSs initially decay at a faster rate (slope in dB per minute) but ultimately take longer to resolve than smaller post-impulse noise TTSs. This underscores the importance of accurately tracking the time between the initial exposure and the threshold measurement. The second difference evident in Fig. 3 is that recovery proceeds much more quickly in the period immediately following impulse noise exposure than it does following continuous noise exposure. Chan's data suggest that a blast-related TTS of 20 dB at 2 min postexposure will decay to 15 dB after 10 min, to 12 dB after 30 min, and to less than 8 dB after 6 h. This rate of decay is 5–6 times faster than that for continuous noise exposure.
These data point to potentially significant experimental challenges in attempting to evaluate the magnitude of impulse- and blast-related TTS using traditional experimental procedures that require the exposed participant to leave the field, remove protective equipment, and undergo traditional audiometric testing. Even if an impulse noise-exposed listener can exit the range and complete a hearing test within 30 min, one would expect the impulse noise-related TTS to decay by almost 50% before testing. To reduce the impact of timing, impulse noise postexposure testing requires the use of boothless audiometric equipment that can be staged at or near the site of exposure.
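A practical corollary is that any field-measured shift must be interpreted in light of the delay between exposure and testing. The sketch below back-extrapolates a measured shift to an estimated value at 2 min postexposure under a simple single-exponential recovery assumption; the functional form and time constant are stated assumptions for illustration and are not the specific models of Mills (1979) or Chan et al. (2016).

```python
import math

# Hedged sketch: adjust a field-measured threshold shift for recovery between
# exposure and testing, assuming a single-exponential decay
#     TTS(t) = TTS2 * exp(-(t - 2 min) / tau).
# Both the functional form and the time constant are assumptions for
# illustration; published recovery models differ in detail.


def estimated_tts2(measured_shift_db: float,
                   minutes_since_exposure: float,
                   recovery_time_constant_min: float) -> float:
    """Back-extrapolate a measured shift to its estimated value at 2 min."""
    elapsed = max(minutes_since_exposure - 2.0, 0.0)
    return measured_shift_db * math.exp(elapsed / recovery_time_constant_min)


# Example: a 10-dB shift measured 30 min postexposure corresponds to an
# estimated ~14.9-dB shift at 2 min if the assumed time constant is 70 min.
print(round(estimated_tts2(10.0, 30.0, 70.0), 1))
```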
VI. AUDIOMETRIC TESTING
Audiometric testing is the cornerstone of any study whose purpose is to measure the acute effects of noise exposure. However, traditional single- or double-walled audiometric testing booths are generally not available in austere locations where high-level military noise exposures occur. In order to more quickly capture changes in hearing sensitivity and hearing performance without removing service members from their training environments, the CHASMPAS study will use a tablet-based system that has been designed for audiometric data collections in field conditions (described in Sec. VII). This boothless audiometry approach will be used to administer audiologic tests and questionnaires pre-exposure (baseline) and postexposure. The goal is to conduct audiologic tests as close in time as possible before and after noise exposure in order to capture any temporary changes in hearing before TTS recovery. Finally, subjects will be re-tested 14 or more days later to identify any permanent changes that may have occurred. (The authors note that >30 days postexposure measures are preferable and will be sought but are not always feasible given post-training dispersion with weapon trainee groups.) Location of testing, time of pre-exposure testing, time spent at the noise hazard site, time exposed to noise, and time elapsed between pre-noise exposure testing and follow-up testing will also be documented.
Exposure to blast or other high SPL noise may be associated with symptoms of hearing loss that are not evident as measurable threshold shifts. For this reason, the literature supports the use of an extended audiologic test battery with high-noise/blast-exposed persons (Remenschneider et al., 2014). In a study of Swiss Army soldiers exposed to acoustic trauma, extended high-frequency testing indicated that there were two frequency regions (3–6 kHz and 11–14 kHz) that were particularly vulnerable to noise exposure (Buchler et al., 2012). An expanded test battery may include extended high-frequency audiometry, OAEs, central auditory processing tests, and/or speech-in-noise tests (Karch et al., 2016; Remenschneider et al., 2014; de Souza Chelminski Barreto et al., 2011; Buchler et al., 2012). Any or all of these additional tests may be useful to better identify, forecast, characterize, and track noise-induced hearing damage associated with blast exposure.
To support an extended test battery, CHASMPAS will employ a variety of auditory tests (Table I). The amount of time available to test military populations depends upon the operational tempo of each unit, so multiple test battery approaches must be available in order to capture meaningful data in the time allotted. For unit operations that allow only minutes of testing, a minimum test battery is proposed (indicated with an "X" in Table I). For unit operations that can accommodate more testing time, a more comprehensive test battery will be administered. Ideally, OAEs and tympanometry will be included even in the minimum test battery because OAEs provide an objective site-of-lesion measurement, and tympanometric results support the interpretation of OAE findings. Additionally, in cases of extreme noise/blast exposure, tympanometry may reveal changes in the middle ear system. Auditory tests will be performed one at a time, consecutively, but not necessarily in the order listed in Table I.
CHASMPAS audiometric testing batteries (“X” denotes minimal test battery).
Test name | Platform/equipment | Time per subject (min)
---|---|---
X-Hughson-Westlake pure-tone air conduction audiometry (4 kHz) | WAHTS boothless device plus tablet | 1
X-Fixed-level frequency threshold (FLFT) | WAHTS boothless device plus tablet | 1
X-Triple-digit speech recognition test (quiet and noise) | WAHTS boothless device plus tablet | 2
X-Masking level difference (MLD)/NoSπ condition | WAHTS boothless device plus tablet | 2
Tympanometry | Portable/handheld device | 2
Distortion-product otoacoustic emissions (DPOAEs) | Portable/handheld device | 2
Hughson-Westlake pure-tone air conduction audiometry (500 Hz–16 kHz) | WAHTS boothless device plus tablet | 15
QuickSIN | WAHTS boothless device plus tablet | 2
Modified rhyme test (MRT) | WAHTS boothless device plus tablet | 8
Oddball paradigm | WAHTS boothless device plus tablet | 2
X-Acute auditory changes questionnaire I or II (AACQ; only one is administered per participant; I is administered before and after noise exposure, II only after noise exposure) | Tablet | 2–3
Abbreviated spatial and speech qualities test (SSQ; may be administered more than once but not required) | Tablet | 2–3
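Because the time available per subject drives which tests can be run, battery selection can be thought of as fitting tests to a time budget. The sketch below illustrates one such greedy selection using test names and durations patterned on Table I; the preferred ordering and the selection logic are illustrative assumptions, not part of the CHASMPAS protocol.

```python
# Illustrative sketch (not the CHASMPAS protocol): start from the minimum
# battery (the "X" tests in Table I), then add optional tests in a preferred
# order until the time budget per subject is exhausted. Durations approximate
# the Table I values; the ordering is an assumption.

MINIMUM_BATTERY = [  # (test, minutes)
    ("Pure-tone air conduction audiometry, 4 kHz", 1),
    ("Fixed-level frequency threshold (FLFT)", 1),
    ("Triple-digit speech recognition (quiet and noise)", 2),
    ("Masking level difference (MLD)", 2),
    ("Acute auditory changes questionnaire (AACQ)", 3),
]
OPTIONAL_TESTS = [
    ("Distortion-product otoacoustic emissions (DPOAEs)", 2),
    ("Tympanometry", 2),
    ("QuickSIN", 2),
    ("Oddball paradigm", 2),
    ("Abbreviated SSQ", 3),
    ("Modified rhyme test (MRT)", 8),
    ("Pure-tone air conduction audiometry, 500 Hz-16 kHz", 15),
]


def plan_battery(minutes_available: int) -> list:
    """Minimum battery plus whatever optional tests fit in the remaining time."""
    battery = list(MINIMUM_BATTERY)
    remaining = minutes_available - sum(minutes for _, minutes in battery)
    for name, minutes in OPTIONAL_TESTS:
        if minutes <= remaining:
            battery.append((name, minutes))
            remaining -= minutes
    return battery


# Example: with 15 min per subject, the 9-min minimum battery plus DPOAEs,
# tympanometry, and QuickSIN fit; the MRT and extended audiogram do not.
for name, minutes in plan_battery(15):
    print(f"{minutes:>2} min  {name}")
```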
VII. AUDIOMETRY EQUIPMENT
To capture in-field audiometry, the study team will use the Creare wireless automated hearing test system (WAHTS; Creare LLC, Hanover, NH; shown in Fig. 4), which consists of a noise-attenuating headset with audiometric test capabilities and a tablet personal computer (PC) running the TabSINT software, an open-source platform for administering tablet-based hearing tests and general-purpose questionnaires. The tablet will be used for test data collection, including the use of self-administered surveys. The WAHTS was developed to increase access to hearing health care in settings not specifically designed for hearing testing (e.g., no sound booth, minimally trained test administrators). Its design objectives were to (1) maximize passive attenuation while keeping the headset comfortable enough to wear for the duration of a typical hearing exam; (2) leverage mobile technologies and eliminate cables; and (3) meet ANSI S3.6 (American National Standards Institute, 2010) and IEC 60645-1 (International Electrotechnical Commission, 2012) standards for audiometers. Figure 4 depicts the components contained within the ear cup and the completed headset. The ear-cup shell is relatively large, stiff, and heavy; these attributes contribute to passive attenuation. The ear cup is lined with thick polyurethane foam to attenuate higher frequencies. Closer to the listener's ear, the right ear cup contains a wireless audiometer circuit, while the left cup contains a rechargeable lithium-ion battery. A speaker and a small microphone are mounted within a plastic face plate and covered with a thin protective fabric. Although not used in the present study, the microphone enables measurement of the sound level inside the ear cups at the listener's ear. Finally, an ear seal from X-series hearing protectors (3M, St. Paul, MN) snaps into the ear cup.
(Color online) Creare WAHTS (Michael and Associates, State College, PA).
Because audiometric headsets need to be fit quickly and easily, especially for screening applications, the headband uses a “frictionless fit” to enable quick adjustment of the ear cups over the listener's ears. Typical hearing protectors and audiometric headsets require forceful sliding of the ear cups up or down to align transducers with the listener's ear, and rely on the friction with the headband wires to hold their position. To enable a more intuitive adjustment, this new design minimizes friction between the headband wires and ear cups. As a result, the ear cups may be quickly and accurately positioned over the listener's ears, either by the test operator or the listener.
The wireless headsets support a Bluetooth low energy (4.0+) interface that allows a connected device to initiate an automated threshold test and receive the results. For this study, tablets will run TabSINT to communicate directly with the WAHTS through Bluetooth (Fig. 5). The application will guide study coordinators to enter the relevant study data [e.g., site and subject identification (ID), background noise levels recorded by the sound level meter]. Once the data entry portion is complete, the application will instruct the administrator to place the headset on the subject and hand the tablet to the subject. The application screen will then display a large response button graphic that responds to touch. Figure 5 shows an example of tablet-based auditory test administration. At the conclusion of each session, the audiogram is displayed, and the data are saved on the tablet until they can be downloaded to the password-protected site study laptop at the end of the day.
(Color online) The audiometer is contained in the headset, which communicates with the tablet through Bluetooth. It is calibrated to applicable American National Standards Institute (ANSI) standards for audiometry and it provides attenuation similar to a single wall (mobile) sound booth.
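For planning data handling, each administration can be treated as a self-contained record that travels from the tablet to the site laptop at the end of the day. The structure below is a hypothetical example of such a record; the field names are assumptions for illustration and are not taken from the TabSINT or WAHTS data formats.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Dict

# Hypothetical per-session record of the kind described above. The field names
# are assumptions for illustration, not the TabSINT or WAHTS data format.


@dataclass
class TestSession:
    site_id: str
    subject_id: str
    phase: str                     # "baseline", "post-exposure", or "follow-up"
    test_start_utc: str            # ISO 8601 timestamp
    background_noise_dba: float    # sound level meter reading at the test site
    minutes_since_exposure: float = float("nan")  # undefined for baseline tests
    thresholds_db_hl: Dict[str, Dict[int, float]] = field(default_factory=dict)
    questionnaires: Dict[str, dict] = field(default_factory=dict)


session = TestSession(
    site_id="RANGE-01", subject_id="S0042", phase="post-exposure",
    test_start_utc="2019-06-01T14:37:00Z", background_noise_dba=62.5,
    minutes_since_exposure=12.0,
    thresholds_db_hl={"right": {2000: 15.0, 3000: 20.0, 4000: 25.0}},
)

# Serialized for later transfer to the password-protected site study laptop.
print(json.dumps(asdict(session), indent=2))
```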
VIII. RETROSPECTIVE AND PROSPECTIVE DATA
Even when a hearing study is focused primarily on evaluating the effects of acute noise exposure, historical data can be helpful in determining whether subjects have experienced previous hearing changes due to noise exposure. In the CHASMPAS study, retrospective data will be pulled from DoD occupational health and medical record databases; these data will include hearing-health-related International Classification of Diseases (ICD) codes and test results, as well as demographic information (e.g., type of job, years of experience in each job). These data can capture the medical history of any audiologic condition, including previous tinnitus, TTS, PTS, or STS documented in either the Military Health System (MHS) Data Repository (MDR) or the Defense Occupational and Environmental Health Readiness System–Hearing Conservation (DOEHRS-HC) Data Repository (DR). Analysis of such data may reveal who is more or less at risk of incurring auditory damage based on previous hearing loss, tinnitus, noise exposure, etc.
These data will be accessed through the DoD Hearing Center of Excellence interface registry known as the Joint Hearing and Auditory System Injury Registry (JHASIR), following the governance of that system, which includes applicable data-sharing agreements, and the submission of IRB-approved protocols. Subjects who enroll in the CHASMPAS study will be asked to authorize access to their currently existing hearing data in these databases, as well as access to any additional hearing-health-related data that may be added to these databases within one year of study enrollment. This will allow us to obtain access to the next regularly scheduled hearing conservation audiogram that occurs after their participation in the study.
IX. CONCLUSIONS
The task of characterizing acute hearing changes in noise/blast-exposed military personnel is complex and challenging; however, the technology and methods to accomplish such an undertaking do exist. Given the risk that impaired hearing may introduce into military operational environments, it is crucial to better understand the relationship between TTS and PTS in human listeners. Boothless audiologic technology will allow for the timely measurement of hearing performance in the field, with the potential to capture changes in hearing that might otherwise be missed, or detected only after partial recovery, by less timely methods. The ability to characterize high-noise/blast exposure and the resulting hearing changes will advance the understanding of the dose-response relationship between noise/blast exposure and changes in hearing acuity and performance. It is hoped that this knowledge will inform and support improvements to acoustic standards and the development of prevention tools such as noise/blast overpressure maps, new hearing protection and conservation strategies, and possibly pharmaceutical interventions such as otologic prophylactic and rescue agents.
ACKNOWLEDGMENTS
This material is based upon work supported by the Department of the Army under Air Force Contract No. FA8702-15-D-0001, and is also supported by the Defense Health Agency. This material is approved for public release and distribution is unlimited. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Army, the Hearing Center of Excellence, Defense Health Agency, DoD, or United States Government. The authors would like to thank the special issue committee for inviting this article into the JASA 2019 collection, Odile Clavier, PhD, and Creare LLC for their contributions and support regarding the WAHTS technology, as well as Richard Bauman, PhD, and the CONQUER Operational Team for their support with blast/impulse data collection. Additionally, the authors would like to thank Nicole Larionova for her editing and formatting support.