This study investigates loudness perception in real-world contexts using predictors related to the sound, the situation, and the person. In the study, 105 participants recorded 6594 sound environments in their homes and evaluated them following the Experience Sampling Method. In hierarchical linear regressions, a loudness level based on ISO 532-1 provided the best model fits and explained the most variance in perceived loudness; LAeq and LAF5 provided comparable results and may require less computational effort. However, the analysis shows that only one-third of the variance explained by fixed effects was attributable to the loudness level. Sixteen percent stemmed from perceived properties of the soundscape; 1% was attributable to relatively temporally stable, person-related predictors like participants' age; non-auditory situational predictors made no additional contribution. The results thus did not confirm previous findings on loudness perception obtained under laboratory conditions, emphasizing the importance of the situational context. Along with the current paper, a comprehensive dataset, including the assessed person-related, situational, and sound-related measures as well as LAeq time series and third-octave spectrograms, is provided to enable further research on sound perception, indoor soundscapes, and emotion.

Perceived loudness, understood as the magnitude of the auditory sensation a listener experiences when exposed to sound, is a central quantity in noise research seeking to maintain and promote people's health and the quality of their living environments (De Coensel et al., 2003). In standardization and noise control, aggregated simple acoustic measures that can be gathered and calculated time- and cost-effectively are used as loudness approximations, such as the energetically averaged equivalent continuous sound pressure level (see DIN 1320:2009, DIN, 2009; European Commission, 2000; Stallen et al., 2008). Various frequency and time weightings are applied to emulate human auditory processing. Some of these weightings have been developed for specific signal types and sound pressure levels, which limits their application when sound levels or sound types change substantially during the observed period, such as an aircraft flyover or a passing truck in an otherwise quiet residential area (DIN, 2014; ISO, 2010; ISO, 2003; ISO, 2009; ITU, 2015). Therefore, penalties are applied to further improve these loudness predictions. These account for signal characteristics (e.g., strong tonal components), the type of sound (e.g., the kind of traffic or the presence of predominant low-frequency components), and context (e.g., the time of day) (ISO, 2003; Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit, 1998; DIN, 2020; DIN, 2005; ANSI, 2005; DIN, 1996; DIN, 2012).

In parallel, decades of research based on psychoacoustic experiments have produced several complex loudness prediction models that mimic the human auditory system (e.g., the outer ear canal, frequency-place transformations of the basilar membrane, critical bands, and masking effects) and additionally account for signal properties like the sound level. For example, ISO 532-1 (ISO, 2017a), a refined version of DIN 45631/A1 (DIN, 2010), incorporates the method developed by Zwicker, providing loudness predictions for both stationary and time-varying sounds using third-octave filtering. Another standardized method for stationary sounds, derived by Moore et al. (2016), makes further refinements. For example, it uses the ERBN scale (i.e., the equivalent rectangular bandwidth of the auditory filter), a filter bank of higher resolution for representing the middle ear and calculating the excitation pattern on the basilar membrane (S3.4-2007 from ANSI, 2007; Moore and Glasberg, 1996), and binaural inhibition (ISO 532-2 from ISO, 2017b; Moore and Glasberg, 2007). The recent standard ISO 532-3 (ISO, 2022) also applies that method to time-varying sounds (ISO, 2017a; Moore et al., 2016). Because Moore's improvements come with higher computational costs, some studies have offered suggestions to speed up the calculations (Ward et al., 2013; Ward et al., 2015; Swift and Gee, 2020; Schlittenlacher et al., 2020). However, these have not yet been incorporated into the implemented standards.

All of the previously mentioned loudness predictions were developed and validated in laboratory experiments, either with synthetic signals lasting only a few seconds (e.g., Moore and Glasberg, 2007; Rennies et al., 2010; Fiebig and Sottek, 2015; ISO, 2017a) or with everyday sounds taken out of context (ISO, 2017a; Rennies et al., 2013). Under these conditions, the predictions showed high agreement with participants' loudness ratings (Meunier et al., 2000).

In addition to laboratory experiments, a growing number of field studies on noise distribution and perception have aimed to increase the ecological validity of their results. The acoustic environment in which the participants reside is either approximated using noise maps or determined using stationary microphones installed outdoors or indoors, dosimeters (worn by participants during the day), or smartphones with built-in or external microphones. Some of these studies examine noise distribution in urban areas (e.g., Murphy and King, 2016; Ventura et al., 2018; Radicchi, 2019), while others investigate the annoyance and pleasantness of urban soundscapes in general (Craig et al., 2017; Steffens et al., 2017; Picaut et al., 2019)—or, of particular relevance, aircraft noise (Bartels et al., 2015) or perceived loudness (Beach et al., 2012). For example, in a field study by Beach et al. (2012), participants assessed the perceived loudness of their daily activities during 48 h accompanied by real-time audio recordings. The correlations between perceived loudness and acoustical loudness predictors, calculated for each participant, had a mean of r = 0.56. Thus, the predictive power observed in laboratory studies decreases when real-world context, personal characteristics, and socio-economic factors come into play in field studies. These findings suggest that situation and person play pivotal roles in individual perception.

In field studies, participants' evaluations are typically collected using questionnaires, interviews, pen-and-paper diaries, or survey apps on smartphones. Even though some effort is necessary with smartphones to avoid errors due to high self-noise (i.e., a small signal-to-noise ratio for quiet soundscapes), wind noise, and uncalibrated recordings (Picaut et al., 2019; Ventura et al., 2017), they offer crucial advantages. For example, smartphones allow participants to measure acoustic characteristics and capture perceptual ratings of sound or contextual properties on the same device.

Research has revealed several indications that non-auditory contextual factors influence human sound perception, both with respect to annoyance and perceived pleasantness (Bartels et al., 2015; Spilski et al., 2019) and with respect to loudness perception in particular (Fastl and Florentine, 2011; Stallen et al., 2008). Guski's (1999) theoretical model for noise annoyance quantifies the multitude of influences on sound perception by suggesting that sound characteristics can explain only one-third of the variance in noise annoyance ratings. Another third of this variance may result from non-auditory factors—individual, personal, situational, or social. Additional studies reveal an even lower impact of the sound itself. For example, Michaud et al. (2016) and Bartels et al. (2015) found that sound accounted for 9% and 14% of the explained variance, respectively; the latter additionally found that 14% of the variance was due to the context of the situation (Bartels et al., 2015). Person-related and socio-economic predictors, which tend to be relatively stable over time, seem to contribute little to annoyance or lead to contradictory results (see Torresin et al., 2019, for a review). This might indicate their minor importance, but it could also mean that the relevant person-related predictors have not yet been identified.

While factors influencing annoyance have been intensively investigated (e.g., Alimohammadi et al., 2010; Bartels et al., 2015; Michaud et al., 2016; Steffens et al., 2020; Versümer et al., 2020; Benz et al., 2021; Hasegawa and Lau, 2021; Moghadam et al., 2021), only a few studies have revealed non-auditory time-varying effects on loudness perception. First, the listener's emotional state appears prominent in loudness perception—that is, a more positive affective state is associated with lower perceived loudness (Siegel and Stefanucci, 2011; Asutay and Västfjäll, 2012) and higher perceived pleasantness (Steffens et al., 2017; Torresin et al., 2019; Västfjäll, 2002). Second, high concentration on a single cognitive task may shield against acoustic distractions (Sörqvist et al., 2016; Halin, 2016), which could explain why people performing a cognitive task perceive sounds as less loud (Aletta et al., 2016). This accords with findings from a laboratory study in which subjects rated environmental sounds as 7% less loud and 6% more pleasant under high (compared to low) cognitive load (Steffens et al., 2019).

Beyond the effects of state affect and cognitive load, it may be helpful to explore other possible influencing predictors often addressed in annoyance research, e.g., noise sensitivity and control over the situation or the sound source (Torresin et al., 2022; Sun et al., 2018; Sung et al., 2017; Pennig and Schady, 2014; Schreckenberg et al., 2018; Kroesen et al., 2008). Additionally, the question of how much each predictor contributes to the variance explained in annoyance and loudness perception remains unresolved.

Based on the state of the research described previously, this study addressed the following research questions:

RQ1. Auditory predictors

Do complex auditory loudness models outperform simple acoustical loudness measures (without penalties) in real-world scenarios with the original context of the sound environment?

RQ2. Influence of three domains of predictors

What is the influence on perceived loudness of the three domains: (1) the sound field, (2) the non-auditory time-varying effects that change rather quickly from situation to situation, and (3) the non-auditory, relatively temporally stable person-related and socio-economic effects?

RQ3. Influence of individual predictors

What is the influence of each predictor within these three domains on human loudness perception in real-world scenarios?

A field study based on the Experience Sampling Method was conducted in participants' dwellings to answer these research questions and overcome some of the challenges mentioned before. Using specially designed binaural audio recording devices with low self-noise and a survey app on a smartphone, participants reported on their acoustic environment and the situation at hand multiple times a day. Based on the gathered data, models were then developed that predict perceived loudness from different acoustic measures, perceptual ratings of the sound, situational aspects, and personal characteristics.

Table I depicts the characteristics of the participants in this study, which was conducted during the summer of 2021. Participants were recruited via newspaper articles, social media posts, local radio and television broadcasts, and friends and acquaintances. Participants were excluded if they planned to be away from home for more than two days during the 10-day participation period, could not report five times a day, or used hearing aids. Two participants dropped out for these reasons. In addition, one participant's results were excluded because the recordings could not be linked to the assessments (due to unsystematic time differences of more than 60 min).

TABLE I.

Sample description regarding socio-demographic, socio-economic, and potential hearing impairments.

                                       Absolute  Relative  Relative (of those   Age: M  Age: SD
                                                           with neighbors)
Participants                             105      100%                            36      14
… women                                   57       54%                            35      14
… men                                     48       46%                            37      15
… non-binary                               0        0%
… living alone                            29       28%                            37      14
… living with others                      76       72%                            36      14
… living with children                    16       15%                            39      13
… living without children                 89       85%                            35      14
… having neighborsa                       99       94%       100%                 36      14
  … next door                             82       78%        83%                 36      14
  … below                                 59       56%        60%                 34      12
  … above the participant's dwelling      52       50%        53%                 37      13
… having hearing impairmentsb
  … none                                  63       60%                            31      11
  … mild                                  31       30%                            40      13
  … moderate                              11       10%                            54      14
… living in a household of
  … 1 person                              29       27%                            37      14
  … 2 persons                             45       43%                            38      14
  … 3 persons                             21       20%                            30      13
  … 4 persons                              8        8%                            33      14
  … 5 persons                              2        2%                            53
a Multiple responses were possible.

b For a description of the definition used for the hearing impairment, see Sec. II B 4.

The experimental field study was based on the Experience Sampling Method, originally developed to study what people do, feel, and think during their daily activities (Larson and Csikszentmihalyi, 2014). Applied to soundscapes, participants periodically make momentary judgments of the acoustic environment, the surrounding situation, and their emotional state throughout the day while acting naturally in their everyday environment (Steffens et al., 2015).

For each participant, the study lasted ten consecutive days. Participants were asked to record and assess their acoustic environment at home on an hourly basis to reach the target of 70 assessments. Because each participant submitted multiple reports, the study used a mixed within- and between-subjects design. There was no manipulation or intervention, so as to interfere with the participants' everyday lives as little as possible.

The predictors assessed stemmed from three domains. First, the sound-related domain includes acoustic predictors (Sec. II B 1) and perceptual predictors (Sec. II B 2) derived from participant judgments. Second, the situational domain comprises non-auditory time-varying predictors (Sec. II B 3) that characterize the framing situation; it includes affective measures (also person-related) whose values vary considerably from one situation to another. Third, the person-related domain comprises non-auditory, relatively temporally stable predictors (Sec. II B 4) that are tied to the person, such as age, noise sensitivity, or socio-economic predictors.

The acoustic predictors were calculated from binaural recordings (Sec. II C), while the perceptual and situational predictors were assessed using a smartphone survey app (Sec. II C). For this purpose, standardized and established assessments (single-choice items, analogue sliders, and non-randomized Likert scales) were adapted for frequent use on small smartphone displays. The person-related and socio-economic predictors, in contrast, were assessed using a tablet-and-pen questionnaire with closed questions; the multi-item questionnaires' Likert scales (non-randomized) were each presented in a matrix, and summation or averaging of the values of interrelated items yielded the predictors' values.

All German questionnaires are available with an English translation in the supplementary material,1 taken from the published dataset (Versümer et al., 2023).

1. Sound-related acoustic predictors

First, the binaural recordings were calibrated and filtered (see Sec. II C for more details) to calculate the three acoustic predictors (Table II) used in the statistical analysis (Sec. II E). Second, based on the ISO 532-1 (ISO, 2017a) algorithm for time-varying sounds, the instantaneous loudness was calculated for each channel of each recording assuming diffuse-field conditions. Then, the loudness level LLz(P) was calculated, as Kuwano et al. (2013) suggested. This procedure aligns with work by Schlittenlacher et al. (2017), in which the LLz(P) showed higher correlations with overall loudness ratings of binaural recordings of real sound environments than the five-percent exceedance loudness N5. Finally, of the two loudness levels derived from the binaural recordings, the higher value was chosen as a single value representing the whole recording, following ISO 12913-3 (ISO, 2021). This value serves as the Predicted loudness level (named LPL in this study) for all analyses presented; for comparison, the LAeq and LAF5 (see ISO 1996-1:2003; ISO, 2003) were also calculated, again using the higher value of the two binaural channels.
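To make the computation concrete, the following minimal sketch in R derives the three single-value predictors from one calibrated binaural recording. It is not the authors' implementation: the signals left and right (calibrated, A-weighted pressure signals in Pa for the level-based measures), the sampling rate fs, and the placeholder functions instantaneous_loudness() (standing in for the ISO 532-1 time-varying algorithm) and loudness_level() (standing in for the LLz(P) aggregation suggested by Kuwano et al., 2013) are assumptions for illustration.

    p0 <- 2e-5  # reference sound pressure in Pa

    # Energetically averaged equivalent continuous level
    laeq <- function(p) 10 * log10(mean(p^2) / p0^2)

    # "Fast" (tau = 125 ms) exponentially averaged level time series
    fast_level <- function(p, fs, tau = 0.125) {
      a <- 1 - exp(-1 / (fs * tau))
      p2 <- as.numeric(stats::filter(a * p^2, 1 - a, method = "recursive"))
      10 * log10(p2 / p0^2)
    }

    # Level exceeded during 5% of the recording time
    laf5 <- function(p, fs) unname(quantile(fast_level(p, fs), 0.95))

    # Following ISO 12913-3, the higher of the two channel values
    # represents the whole recording
    LAeq <- max(laeq(left), laeq(right))
    LAF5 <- max(laf5(left, fs), laf5(right, fs))
    LPL  <- max(loudness_level(instantaneous_loudness(left,  fs)),
                loudness_level(instantaneous_loudness(right, fs)))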

TABLE II.

Sound-related acoustic or perceptual predictors and their ranges or factor levels. Reference level (in bold) of Salient source ownership: Other. For question texts, scale types, and original German texts, see the supplementary material (Ref. 1) taken from the published dataset (Versümer , 2023).

Description                                          Variable name               Range or levels
Sound
Dependent variable                                   Perceived loudness          [1, 50]
Predicted auditory loudness level                    Predicted loudness
A-weighted equivalent continuous sound
  pressure level                                     LAeq
A-weighted five percent exceedance level             LAF5
Preference of the most salient sound                 Salient sound preference    very disinclined — very inclined; [−3, 3]
To whom the source of the most salient
  sound belongs                                      Salient source ownership    1 Participant or his or her family (yes)
                                                                                 2 Others (no)
Pleasantness of the indoor soundscape                Soundscape pleasantness     [−10, 10]
Eventfulness of the indoor soundscape                Soundscape eventfulness     [−10, 10]
Soundscape composition: cumulated salience of        SC Nature                   not noticeable at all — extremely
  all audible sounds in each of the eight            SC Human                      noticeable; [0, 10]
  sound categories                                   SC Household (technical)      (same scale for all eight
                                                     SC Installation (technical)   categories)
                                                     SC Signals (technical)
                                                     SC Traffic (technical)
                                                     SC Speech
                                                     SC Music/singing

2. Sound-related perceptual predictors

Table II displays all sound-related perceptual predictors. First, as the dependent variable, the Perceived loudness of the sound environment was determined using a categorical loudness scale that allows for an intuitive, verbally anchored assessment with five scale levels: very low-level (0), low-level (10), medium (20), loud (30), and very loud (40). Second, exploiting the design capabilities of the smartphone app used for gathering the ratings of the sounds and situations, each verbal category was subdivided (Heller, 1990; ISO, 2007): a numerical scale from 1 to 10 allowed for finer differentiation (Fig. 1). Summing the values of both scales yielded the interval-scaled Perceived loudness, which ranges from 1 to 50 (e.g., medium (20) plus a numerical rating of 7 yields 27). Third, participants further rated their perception of the sound environment by evaluating Soundscape pleasantness and Soundscape eventfulness. These were determined by combining eight items—pleasant, annoying, eventful, uneventful, chaotic, calm, vibrant, and monotonous [see Eqs. (1) and (2)]. Participants indicated the extent to which they agreed with each item regarding the acoustic environment using five-level Likert scales adapted from the soundscape standard (ISO, 2021) and its German translation (Aletta et al., 2020, Table III):
Pleasantness = (pleasant − annoying) + cos 45° · (calm − chaotic) + cos 45° · (vibrant − monotonous),  (1)

Eventfulness = (eventful − uneventful) + cos 45° · (chaotic − calm) + cos 45° · (vibrant − monotonous).  (2)
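As a small sketch of Eqs. (1) and (2) in R: the projection follows the ISO 12913-3 circumplex formulas cited above, and the item coding (0 = strongly disagree to 4 = strongly agree) is an assumption chosen so that both scores span approximately the [−10, 10] ranges given in Table II.

    soundscape_scores <- function(pleasant, annoying, eventful, uneventful,
                                  chaotic, calm, vibrant, monotonous) {
      w <- cos(pi / 4)  # weight of the diagonal circumplex attributes
      c(pleasantness = (pleasant - annoying) +
          w * (calm - chaotic) + w * (vibrant - monotonous),
        eventfulness = (eventful - uneventful) +
          w * (chaotic - calm) + w * (vibrant - monotonous))
    }

    # Example: a calm, pleasant living-room soundscape
    soundscape_scores(pleasant = 4, annoying = 0, eventful = 1, uneventful = 3,
                      chaotic = 0, calm = 4, vibrant = 2, monotonous = 2)
    # pleasantness = 6.83, eventfulness = -4.83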
FIG. 1.

(Color online) Screenshot of the assessment of Perceived loudness of the indoor acoustic environment using a combination of a verbal categorical and a numerical scale for partitioning when using a survey app on a smartphone.


Additionally, participants described the sound environment as a composition of sounds by assigning all audible sounds to one of the eight sound categories. Participants then indicated the saliency of each category using 11-level Likert scales (refer to Soundscape composition in Table II and Fig. 2).

FIG. 2.

(Color online) Smartphone screenshot displaying the assessment of the Soundscape composition, for which participants rated the perceived accumulated salience of all sound sources related to the eight sound categories: Nature, Human, Household appliances (indoors), house Installation/heating/ventilation (indoors), Signals/ringing tones/alarms/information tones, Traffic/construction work/industry (outdoors), Speech, and Music/singing.


Finally, participants identified the most salient sound of the acoustic environment and rated its Salient sound preference on a verbal seven-level Likert scale ranging from very disinclined to very inclined. They then reported the Salient source ownership, i.e., to whom the associated sound source belonged. Due to the highly heterogeneous distribution of responses, which would have caused problems in the statistical evaluations, the original four response options—You personally, Your family, Your neighbors, and Someone else/something else/unknown—were aggregated into two categories: the participant or his or her family (yes) or others (no).

3. Situational predictors

Of the non-auditory time-varying predictors changing from situation to situation (Table III), the momentary affective state was described by Valence and Arousal following the circumplex model of affect (Posner et al., 2005). This state was assessed directly after the questionnaire's start with one continuous slider each; each slider defaulted to the middle of the scale and had to be touched or moved to proceed to the next question. In addition to Arousal, Wakefulness was assessed with a continuous slider capturing the perceived state on a tired—awake continuum (in the style of Steyer, 1997; Steyer et al., 1997a), because these scales describe different—though not independent—dimensions (Hinz et al., 2012). This distinction is meaningful when, for example, a person feels tired (i.e., low alertness) yet experiences high activation because they must complete a time-sensitive or important task while exhausted, or when a person feels tired yet relaxed (i.e., low arousal) in view of a relaxing, free weekend after a busy work week.

TABLE III.

Situational predictors and their ranges or factor levels. In bold: the factor level containing the most reports. For questions and original German texts, see the supplementary material (Ref. 1) taken from the published dataset (Versümer , 2023).

Description                                    Variable name   Range or levels
Situation
State affect                                   Valence         negative mood — neutral — positive mood; [−5, 5]
                                               Arousal         no activation — strong activation; [0, 10]
                                               Wakefulness     tired/limp — awake/chipper; [−5, 5]
Perceived control over the sound situation     Control         no control — complete control; [1, 7]
Cognitive and physical load imposed by         Cognitive load  very little — very much; [0, 10]
  the activity                                 Physical load   very little — very much; [0, 10]
Activity immediately before reporting          Activity        1 Cooking/housework/workout
                                                               2 Concentrated mental work
                                                               3 Social interaction
                                                               4 Sleeping/relaxing
Recording time (in hours)                      RT DEN          [07, 19] Day
                                                               [19, 23] Evening
                                                               [23, 07] Night

Cognitive load and Physical load were assessed to examine the possible dependence of loudness ratings on the task participants were engaged in before each poll, using an adaptation of the NASA Task Load Index, which was developed to examine the performance of individuals driving or operating machines, vehicles, or aircraft (NASA TLX; Hart, 2006). In addition, participants answered the question, "How much control do you personally have over the sound situation you report?" by reporting their perceived Control over the sound situation on a verbal seven-level Likert scale ranging from no control to complete control. Participants also reported the Activity they were engaged in immediately before conducting the hourly assessment using a nominal scale with four response options: cooking/housework/workout, concentrated mental work, social interaction, and sleeping/relaxing. Finally, the recording times from the time stamp saved with each assessment were clustered into three ranges according to ISO 1996-2:2017 (ISO, 2017c): day (07 to 19 h), evening (19 to 23 h), and night (23 to 07 h).
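This clustering reduces to a simple mapping from the hour of the time stamp to the three categories; a minimal sketch in R (the function and its name are illustrative, not from the published dataset):

    # Map the hour of a report's time stamp to the RT DEN categories
    # (boundaries from ISO 1996-2:2017)
    rt_den <- function(hour) {
      ifelse(hour >= 7 & hour < 19, "day",
             ifelse(hour >= 19 & hour < 23, "evening", "night"))
    }
    rt_den(c(6, 12, 20, 23))  # "night" "day" "evening" "night"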

4. Person-related and socio-economic predictors

Concerning the non-auditory, relatively temporally stable person-related and socio-economic predictors (Table IV), mean Noise sensitivity was measured using the German version (see Table II.9 of Eikmann et al., 2015) of the NoiSeQ-R (Schütte et al., 2007) by averaging the three subscales for noise sensitivity regarding sleep, work, and habitation, each queried with four items on four-level Likert scales. Participants' Hearing impairment was measured for both ears across the octave bands from 250 Hz to 8 kHz using the Audiometer (HEAD acoustics GmbH, Herzogenrath, Germany) with Sennheiser HDA-300 headphones (Sennheiser, Wedemark, Germany), using in each frequency band the larger hearing loss of the two ears. In addition, the perceived general Health status was assessed using a single-item question—"How in general would you rate your health?"—with a five-level scale ranging from bad to good. Single-item measures of general health status are sufficient for research when brevity is adequate and specific health aspects are not of particular interest (Idler and Benyamini, 1997; Bowling, 2005; Radun et al., 2019).
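A possible derivation of the three-level Hearing impairment factor of Table IV is sketched below; that the categorization uses the worst value across ears and frequency bands is an assumption based on the description above.

    # `hl` is assumed to be a 2 x 6 matrix of hearing-loss values in dB HL
    # (rows = ears, columns = octave bands from 250 Hz to 8 kHz)
    hearing_category <- function(hl) {
      worst <- max(hl)  # larger loss of the two ears, worst frequency band
      cut(worst, breaks = c(-Inf, 20, 35, Inf), labels = c("0", "1", "2"))
    }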

TABLE IV.

Person-related and socio-economic measures and their ranges or factor levels. In bold: the factor level containing the most reports. For question texts, scale types, and original German texts, see the supplementary material (Ref. 1) taken from the published dataset (Versümer , 2023).

Description                                  Variable name        Range or levels
Person
Demographics                                 Age                  [0, ∞]
                                             Gender               female (f), male (m), non-binary (d)
Mean noise sensitivitya                      Noise sensitivity    low — high; [0, 3]
Hearing impairment                           Hearing impairment   0 less than or equal to 20 dB HL
                                                                  1 over 20 and up to 35 dB HL
                                                                  2 over 35 dB HL
Health, well-being, and general              Health               bad — good; [1, 5]
  anxiety disorder                           Well-being           worst — best imaginable well-being; [0, 100]
                                             Anxiety              [0, 21]
Three-dimensional person mood traitsa        Trait mood           good — bad; [8, 40]
                                             Trait wakefulness    awake — tired; [8, 40]
                                             Trait rest           calm — nervous; [8, 40]
Socio-economics/living environment           Neighbors above      yes, no
                                             Neighbors below      yes, no
                                             Neighbors next door  yes, no
                                             Children             yes, no
                                             People in household  [1, 10]
a Six participants left one of the multiple items of a trait scale blank, resulting in missing values in the data set. The mean value of the remaining items of the same scale replaced these values.

Participants then rated the following constructs with reference to the past two weeks. Psychological Well-being was assessed using a German version of the WHO-5 questionnaire (Bech, 1999; Topp et al., 2015), which serves as a valid and internationally accepted time-efficient measure (e.g., used in an investigation of screening characteristics for depressed mood in type 1 diabetic patients by Kulzer et al., 2006); the answers to five six-level Likert scales, ranging from At no time (0) to All of the time (5), were summed and scaled to the range [0, 100] reported in Table IV. Participants assessed their Anxiety using the GAD-7 questionnaire, developed as a reliable measure for anxiety disorders (Kroenke et al., 2007) and transferred into German via verified back-translation for this study; the seven four-level Likert scales, ranging from Not at all (0) to Nearly every day (3), were summed. Finally, the German Multidimensional Mood State Questionnaire (for the English translation of the German "Mehrdimensionaler Befindlichkeitsfragebogen," MDBF, see Steyer, 1997) served to measure participants' three-dimensional trait affect over the past two weeks. It consists of three dimensions—Trait mood (good—bad), Trait wakefulness (awake—tired), and Trait rest (calm—nervous) (adapted from Steyer et al., 1997a)—each assessed by aggregating eight verbal five-level Likert scales (yielding the [8, 40] ranges in Table IV). In addition, demographic [i.e., Age (in years) and Gender (female, male, non-binary)] and socio-economic data (i.e., Neighbors living above/below/next door, number of People in [the] household, and Children being present or not) were collected.
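The questionnaire scoring can be summarized in a few lines of R. This sketch follows the coding described above; the WHO-5 factor of 4 reflects that questionnaire's usual percentage scoring (matching the [0, 100] range in Table IV) rather than a detail stated in the text, and all item vectors are hypothetical.

    who5_wellbeing <- function(items) sum(items) * 4  # five items, 0-5, scaled to [0, 100]
    gad7_anxiety   <- function(items) sum(items)      # seven items, 0-3 -> [0, 21]
    mdbf_dimension <- function(items) sum(items)      # eight items, 1-5 -> [8, 40]
    noise_sensitivity <- function(sleep, work, habitation)  # NoiSeQ-R: mean of the
      mean(c(mean(sleep), mean(work), mean(habitation)))    # three subscale means -> [0, 3]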

The low-cost, low-self-noise binaural audio recording devices were developed at the University of Applied Sciences Düsseldorf specifically for this study. The two electret microphones fit into standard earbuds (with speaker and rubber plug removed and the microphones pointing outward; see Fig. 3) and provided a low self-noise level of less than 19 dB(A), as shown in the spectra in Fig. 4. Participants placed the earbuds loosely in their cavum conchae (approximately ±90° azimuth) so that the ear canal was not hermetically sealed. All recordings, made at a sampling frequency of 44.1 kHz and an amplitude resolution of 16 bits, were saved to a memory card, archived after each participation, and then removed from the card. Reference recordings were also made for calibration purposes: both microphones and a sound level meter were placed next to each other in a closed wooden box containing a small loudspeaker (8 cm in diameter, in a separate loudspeaker housing) that played a sine tone of 94 dB at a frequency of 100 Hz (chosen to avoid acoustical modes). These two-channel recordings were used to calculate calibration factors applied to all of a device's binaural recordings (i.e., all frequencies were adjusted equally). In addition, all calibrated recordings were filtered to compensate for the recording devices' built-in analogue high-pass filter.
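The calibration reduces to one frequency-independent gain factor per channel, computed from the reference recording; a minimal sketch in R, assuming ref holds the recorded 94-dB reference tone and raw_recording a field recording from the same device and channel:

    p0 <- 2e-5                                    # reference pressure in Pa
    target_rms <- 10^(94 / 20) * p0               # RMS pressure of a 94-dB tone (about 1 Pa)
    cal_factor <- target_rms / sqrt(mean(ref^2))  # one factor for all frequencies
    calibrated <- cal_factor * raw_recording      # applied to the device's recordings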

FIG. 3.

(Color online) Binaural recorder with microphones built into earbuds from which the speakers and rubber plugs were removed. The simple user interface allows for switching the device on or off with the black rocker power switch, starting a recording by briefly pressing the red push button, and stopping a running recording or deleting the last recording by pressing the push button for more than two seconds.

FIG. 4.

(Color online) Third-octave spectra (A-weighted, 50% overlap using a Hanning window) for one recording in the anechoic room of the University of Applied Sciences Düsseldorf. The red and black curves represent the two channels of one low-self-noise binaural recording device developed at the university; the green curve represents a ½″ low-self-noise measurement microphone (type GRASS 47HC).


All perceptual sound-related and situational ratings were performed using a Nokia 4.2 smartphone (Nokia, Espoo, Finland) and a survey app (movisensXS, movisens GmbH, Karlsruhe, Germany) (movisens, 2020) that enables complex query procedures and makes the collected data immediately available to the study administration. The questions were presented in the same order for all participants, with identically arranged items. Participants could only step forward through the questionnaire and thus could not alter entries made on previous pages.

Participants received a standardized introduction during individual appointments at the University of Applied Sciences Düsseldorf and completed an audiometry test (octave bands from 250 Hz to 8 kHz). Four trained instructors guided the participants through the questionnaire assessing the person-related and socio-economic items. Participants then received a survey smartphone and a binaural audio recording device. Answers to frequently asked questions and general help regarding the study were provided beforehand and remained accessible throughout participation via the survey smartphones. A separate smartphone with a training survey helped familiarize participants with the device, the operation of the survey app, all rating scales and input types, and the help section. Finally, the instructors introduced the participants to the structure of the Experience Sampling Method questionnaire, the comprehension of the questions and scales, and the definitions of the eight sound categories.

At home, participants paired the study smartphones with their private Wi-Fi to ensure that their survey results were automatically uploaded. After starting the survey app, participants set the daily time range in which they wished to be prompted to answer the questionnaire. Periodic alarms began approximately 15 min (±5 min) after the beginning of the daily time range, with additional ones following roughly every hour (±10 min). Participants could accept an alarm, delay it by five minutes (twice at most), reject it altogether, or ignore it; ignored notices were repeated twice. In cases where participants spent only a few hours per day at home or were absent for a few workdays or a weekend, they could initiate assessments independently to reach the target number during their time at home. Although participants were asked to respond to the hourly reminders whenever possible, most assessments were self-initiated (73%). Thirty-five participants self-initiated more than 90% of their assessments, and 14 did so in all cases.

When accepting an alarm, participants first indicated whether they could hear anything. If yes, the questionnaire continued; otherwise, it terminated because the subsequent questions about the most salient sound would have been unanswerable and would have produced undefined values complicating the use of the predictors in the statistical analyses. Participants then answered the affective state questions to capture their emotional state before it could be influenced by responding to the rest of the survey. After participants activated the recording mode, the recording started with a delay of 5 s, allowing them to breathe deeply and remain still and motionless during the 15-s recording; the delay also ensured that pressing the button did not become an audible part of the recording itself. The first of the three main sections followed—evaluating the most salient sound—which this contribution does not discuss in detail. In the second part, participants reported on the overall indoor sound environment. The third part dealt with the time-varying situational predictors, complementing the questions about the affective state asked at the beginning.

After their ten-day participation, participants received a staggered compensation of up to 100 Euros. They received 20 Euros for participating in the introduction at the university, 30 Euros for evaluating 45 sound situations, and 2 Euros for each additional contribution, but not more than 100 Euros for a total of 70 evaluations. Participants could report more beyond that without receiving further compensation.

Statistical analyses were performed using R (v4.2.0; R Core Team, 2022), RStudio (v22.2.3.492; RStudio Team, 2022), and jamovi (v2.2.5; The jamovi project, 2022). For the calculation of the prediction models, all variables, including dummy variables, were z-standardized to achieve equal weighting of the estimates. Linear mixed-effects models were calculated using the lme4 R package (v1.1-30; Bates et al., 2015); a hands-on introduction to linear mixed-effects models in R can be found in Winter (2013). All models account for the hierarchical data structure by clustering the data by participant ID, which results in random intercepts, i.e., each participant is assigned a different intercept value estimated by the model. Linear mixed-effects models also allow for random slopes by including predictors (already used as fixed effects) as random effects, i.e., for each participant, a different slope is estimated for each predictor serving as a random effect. Specifically, the models used in this study were based on either the Predicted loudness level (LPL), the A-weighted equivalent continuous sound pressure level (LAeq), or the A-weighted five percent exceedance sound pressure level (LAF5). The single sound-related acoustic predictor served as the main fixed effect and as the single random effect, allowing for random slopes with respect to that predictor. Different sets of variables from the three domains (Tables II–IV) were added successively.
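A sketch of this model family in lme4 notation (not the authors' exact code; the data frame d and its column names are assumptions):

    library(lme4)

    # Baseline model (.b): the acoustic predictor as fixed effect, with random
    # intercepts and random slopes clustered by participant ID
    m_b <- lmer(perceived_loudness ~ LPL + (1 + LPL | ID), data = d, REML = TRUE)

    # .bp: previous model plus the sound-related perceptual predictors
    m_bp <- update(m_b, . ~ . + salient_sound_preference + salient_source_ownership +
                     soundscape_pleasantness + soundscape_eventfulness +
                     sc_nature + sc_human + sc_household + sc_installation +
                     sc_signals + sc_traffic + sc_speech + sc_music)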

The final fit was based on restricted maximum likelihood estimation (REML). Degrees of freedom were estimated with the Satterthwaite approximation implemented in the lmerTest R package (v3.1.3; Kuznetsova et al., 2017). Bootstrapped estimates were derived using 1000 iterations to improve the robustness of the models, as the data did not meet all assumptions of linear regression. From the resulting distribution of each estimate, p-values and confidence intervals were calculated at the significance level of α = 0.05. The statistical analysis made no adjustments (i.e., to reduce the family-wise error rate) because the 15 models differ in their number of predictors; the analysis thus accepts a possible inflation of type I errors while ensuring test validity and no inflation of type II errors (Rothman, 1990). Marginal and conditional coefficients of determination (R2m and R2c) were calculated using the R package performance (v0.9.2; Nakagawa et al., 2017). Variance inflation factors (VIF) (Zuur et al., 2010) were calculated for each estimate using the car R package (v3.1-0; Fox and Weisberg, 2019) to ensure acceptably low correlations between the variables of each model (i.e., to avoid multicollinearity). In addition, model comparisons were conducted based on the Akaike Information Criterion (AIC). The fixed effects of the comprehensive models with the smallest AIC were then analyzed based on the probability value and the estimate's sign and absolute value to identify crucial predictors. Then, two two-way and one three-way interaction effects were calculated for selected predictors to investigate the influence of Arousal and Cognitive load on the effect of the Activity concentrated mental work on Perceived loudness. Finally, the interaction effects were added to the full model containing all main effects (model LPL.bpsp from Tables V and VI) and are presented in Table VII.
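These evaluation steps map onto a few calls from the cited packages; a sketch, continuing the hypothetical models from the previous listing:

    library(lmerTest)     # Satterthwaite df when models are (re)fit with lmerTest::lmer
    library(performance)  # R2m and R2c following Nakagawa et al. (2017)

    r2(m_bp)                                     # marginal and conditional R2
    check_collinearity(m_bp)                     # VIFs (the study itself used car::vif)
    AIC(m_b, m_bp)                               # model comparison via AIC
    confint(m_bp, method = "boot", nsim = 1000)  # bootstrapped confidence intervals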

TABLE V.

Linear mixed-effects models based on the three sound-related acoustic predictors, Predicted loudness level (LPL), LAeq, and LAF5: .b, baseline model with one acoustic predictor as the fixed and random effect (for the calculation of individual random slopes) and participant ID as the cluster variable; .bp, previous model plus predictors from participants' perceptual ratings of the sound environment; .bps and .bpp, previous model plus situational or person-related (and socio-economic) predictors; .bpsp, model with all predictors together. Variables are described in Tables II–IV and Sec. II B. τ00, random intercept (between-subject) variance (i.e., variation between individual intercepts and the average intercept). ICCadj, adjusted intraclass correlation coefficient = τ00/(τ00 + σ2), the proportion of between-participant variance over total variance. ρ01, correlation between random intercepts and random slopes. NID = 105. NObservations = 6594. α = 0.05.

Sound-related Non-auditory Both
Fixed and random effect LPL
Additional fixed effects Perception Perception, situation Perception, person Perception, situation, person
Model LPL.b LPL.bp LPL.bps LPL.bpp LPL.bpsp
Variances           
R2m  0.344  0.485  0.489  0.503  0.507 
R2c  0.527  0.638  0.638  0.639  0.640 
σ2  0.529  0.410  0.408  0.410  0.408 
τ00.ID  0.168  0.148  0.144  0.130  0.127 
τ11.ID.Calculated_Loudness  0.037  0.025  0.024  0.025  0.024 
ρ01  0.153  0.075  0.072  −0.020  −0.021 
ICCadj  0.279  0.296  0.291  0.273  0.269 
Model fit           
AICREML  14991.0  13425.3  13468.3  13498.3  13542.3 
VIFmax    1.97  2.75  4.00  4.00 
Fixed and random effect LAF5
Additional fixed effects Perception Perception, situation Perception, person Perception, situation, person
Model LAF5.b LAF5.bp LAF5.bps LAF5.bpp LAF5.bpsp
Variances           
R2m  0.318  0.471  0.475  0.490  0.494 
R2c  0.509  0.625  0.626  0.626  0.627 
σ2  0.547  0.421  0.418  0.421  0.418 
τ00.ID  0.166  0.144  0.140  0.124  0.122 
τ11.ID.LAF5  0.046  0.029  0.028  0.029  0.028 
ρ01  0.291  0.178  0.180  0.105  0.103 
ICCadj  0.279  0.292  0.287  0.267  0.263 
Model fit           
AICREML  15221.2  13598.7  13631.4  13672.2  13706.1 
VIFmax    1.96  2.75  3.98  3.99 
Fixed and random effect LAeq
Additional fixed effects Perception Perception, situation Perception, person Perception, situation, person
Model LAeq.b LAeq.bp LAeq.bps LAeq.bpp LAeq.bpsp
Variances           
R2m  0.326  0.477  0.481  0.494  0.498 
R2c  0.517  0.632  0.632  0.632  0.633 
σ2  0.541  0.416  0.413  0.416  0.413 
τ00.ID  0.169  0.147  0.143  0.129  0.126 
τ11.ID.LAeq  0.044  0.028  0.027  0.028  0.026 
ρ01  0.301  0.194  0.192  0.123  0.118 
ICCadj  0.283  0.296  0.291  0.273  0.269 
Model fit           
AICREML  15146.5  13515.2  13556.5  13590.0  13632.3 
VIFmax    1.96  2.75  3.97  3.98 
TABLE VI.

Z-standardized estimates and probabilities for all effects of the models LPL.bp, LPL.bps, LPL.bpp, and LPL.bpsp. α = 0.05. Reference levels: RT DEN day; Hearing impairment 0; Gender female; Activity sleeping/relaxing. Bootstrapped confidence intervals are located in the supplementary material (Ref. 1).

Model LPL.bp LPL.bps LPL.bpp LPL.bpsp
Predictors β p β p β p β p
Sound-related, acoustic (time-varying)
Predicted loudness  0.455  <0.001  0.447  <0.001  0.456  <0.001  0.447  <0.001
Sound-related, perceptual (time-varying)
Salient sound preference  −0.036  <0.001  −0.042  <0.001  −0.036  <0.001  −0.042  <0.001 
Salient source ownership, yes  0.018  0.086  0.009  0.432  0.017  0.102  0.008  0.450 
Soundscape pleasantness  −0.229  <0.001  −0.237  <0.001  −0.229  <0.001  −0.237  <0.001 
Soundscape eventfulness  0.263  <0.001  0.261  <0.001  0.262  <0.001  0.261  <0.001 
SC Nature  0.043  <0.001  0.048  <0.001  0.044  <0.001  0.049  <0.001 
SC Human  −0.012  0.210  −0.011  0.242  −0.011  0.234  −0.011  0.262 
SC Household  0.074  <0.001  0.068  <0.001  0.075  <0.001  0.069  <0.001 
SC Installation  0.029  0.006  0.030  0.002  0.031  0.002  0.031  0.002 
SC Signals  0.041  <0.001  0.040  <0.001  0.042  <0.001  0.041  <0.001 
SC Traffic  0.097  <0.001  0.093  <0.001  0.098  <0.001  0.093  <0.001 
SC Speech  0.093  <0.001  0.090  <0.001  0.094  <0.001  0.091  <0.001 
SC Music  0.106  <0.001  0.100  <0.001  0.108  <0.001  0.101  <0.001 
Situational (time-varying)
Activity cooking/housework/workout  −0.038  <0.001      −0.037  <0.001
Activity concentrated mental work  −0.053  <0.001      −0.053  <0.001
Activity social interaction      −0.025  0.018      −0.025  0.018 
RT DEN evening      −0.014  0.078      −0.014  0.076 
RT DEN night      −0.010  0.290      −0.010  0.292 
Valence      0.005  0.654      0.005  0.644 
Arousal      0.029  0.018      0.029  0.018 
Wakefulness      −0.002  0.870      −0.002  0.852 
Control      0.047  <0.001      0.046  <0.001 
Cognitive load      0.059  <0.001      0.058  <0.001 
Physical load      0.010  0.392      0.009  0.446 
Person-related (temporally stable)
Age          −0.142  <0.001  −0.138  0.002
Gender male          0.027  0.522  0.022  0.594 
Noise sensitivity          −0.080  0.052  −0.080  0.052 
Health          −0.087  0.052  −0.082  0.060 
Well-being          0.001  0.990  −0.007  0.918 
Anxiety          −0.106  0.136  −0.095  0.168 
Trait mood          −0.042  0.578  −0.038  0.602 
Trait wakefulness          0.111  0.050  0.106  0.056 
Trait rest          −0.074  0.272  −0.064  0.374 
Hearing impairment 1          0.086  0.042  0.081  0.060 
Hearing impairment 2          0.021  0.674  0.020  0.676 
Socio-economic (temporally stable)
Neighbors above, no          −0.037  0.352  −0.037  0.358
Neighbors below, no          0.036  0.406  0.032  0.454 
Neighbors next door, no          −0.090  0.012  −0.092  0.012 
Children, yes          −0.012  0.794  −0.015  0.758 
People in household          0.062  0.264  0.067  0.242 
TABLE VII.

Z-standardized estimates and probabilities for all significant effects of the full model LPL.bpsp including Arousal × Activity, Cognitive load × Activity, and Arousal × Cognitive load × Activity interaction effects (for readability, only for the activity level concentrated mental work). α = 0.05. Reference level: Activity sleeping/relaxing. The original models and bootstrapped confidence intervals can be found in the supplementary material (Ref. 1).

Model LPL.bpsp:A LPL.bpsp:CL LPL.bpsp:A:CL
Predictors β p β p β p
Predicted loudness  0.447  <0.001  0.446  <0.001  0.447  <0.001 
Salient sound preference  −0.042  <0.001  −0.043  <0.001  −0.042  <0.001 
Soundscape pleasantness  −0.237  <0.001  −0.235  <0.001  −0.237  <0.001 
Soundscape eventfulness  0.260  <0.001  0.263  <0.001  0.260  <0.001 
SC Nature  0.049  <0.001  0.050  <0.001  0.051  <0.001 
SC Household  0.068  <0.001  0.069  <0.001  0.068  <0.001 
SC Installation  0.031  0.002  0.033  0.002  0.030  0.002 
SC Signals  0.041  <0.001  0.042  <0.001  0.041  <0.001 
SC Traffic  0.093  <0.001  0.093  <0.001  0.092  <0.001 
SC Speech  0.090  <0.001  0.091  <0.001  0.090  <0.001 
SC Music  0.102  <0.001  0.102  <0.001  0.102  <0.001 
Activity cooking/housework/workout  −0.032  0.006  −0.036  0.010  −0.031  0.008 
Activity concentrated mental work  −0.059  <0.001  −0.066  <0.001  −0.058  <0.001 
Activity social interaction  −0.020  0.064  −0.018  0.136  −0.021  0.046 
Arousal  0.029  0.020  0.028  0.020  0.006  0.640 
Control  0.045  <0.001  0.047  <0.001  0.045  <0.001 
Cognitive load  0.059  <0.001  0.048  0.004  0.053  <0.001 
Age  −0.139  0.002  −0.137  0.002  −0.138  0.002 
Neighbors next door, no  −0.092  0.012  −0.091  0.012  −0.091  0.012 
Arousal * concentrated mental work  0.040  <0.001         
Cognitive load * concentrated mental work      0.032  0.016     
(Arousal * Cognitive load) * concentrated mental work          0.031  <0.001 

First, the verification of the statistical assumptions, the distributions of the acoustic measures and of Perceived loudness, and the findings on individual response patterns are described. Then, the results are presented for each of the three research questions.

All 15 models converged and are listed in Table V together with their model fit parameters, variances, and variance inflation measures. Model assumptions were inspected using Q-Q and residual scatter plots.1 Normally distributed residuals can be assumed for most of the measured values, as seen in the mid-range of the Q-Q plots. However, the standardized residuals are not scattered randomly around the horizontal line but show patterns with negative trends, indicating a violation of the assumptions of linear regression.
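As an illustration, such an inspection can be reproduced along the following lines in R with lme4 (Bates et al., 2015); the variable names (perceived_loudness, predicted_loudness, participant) and the data frame d are hypothetical placeholders for the published dataset, not the authors' actual code:

```r
library(lme4)

# Baseline mixed model: one fixed effect plus random intercept and slope
# per participant (hypothetical variable names)
m <- lmer(perceived_loudness ~ predicted_loudness +
            (1 + predicted_loudness | participant), data = d)

r <- residuals(m, scaled = TRUE)   # standardized residuals
qqnorm(r); qqline(r)               # Q-Q plot: normality in the mid-range
plot(fitted(m), r,                 # residuals vs. fitted values
     xlab = "Fitted values", ylab = "Standardized residuals")
abline(h = 0, lty = 2)             # systematic trends here signal violations
```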

Figure 5 displays the densities and histograms of the dependent variable and the three acoustic predictors. The values for Perceived loudness show a bimodal distribution, while those for the three acoustic predictors roughly follow a normal distribution.

FIG. 5.

(Color online) Density functions as a measure of the probability distribution of the z-standardized Perceived loudness (a) and its predictors Predicted loudness LPL (b), LAeq (c), and LAF5 (d) for all 6594 recordings.


The analysis of the individual loudness response curves of the 105 participants revealed distinct response patterns. In addition to participants who produced a near-linear relationship between Perceived loudness and Predicted loudness, others used only the verbal Perceived loudness scale (leaving the numerical scale in the preselected middle position), and some used the full range of the Perceived loudness scale even though the Predicted loudness was exceptionally low for all of their recordings (see SuppPub1.jpg in the supplementary material).1

Regarding RQ1 (whether auditory loudness models outperform simple acoustic predictors in predicting Perceived loudness), the AIC values of the models based on auditory loudness (LPL.****,2 Table V) are the lowest compared to those based on the other two acoustic predictors, LAeq and LAF5, indicating that the auditory loudness models provide the best fit. Conversely, the models based on LAF5 showed the worst fit. The following results therefore refer to the auditory loudness models.
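A sketch of this comparison (again with hypothetical names; fitted by maximum likelihood, REML = FALSE, so that AIC values of models with different fixed effects are comparable):

```r
# One model per acoustic predictor, identical random-effects structure
m_lpl  <- lmer(perceived_loudness ~ LPL  + (1 + LPL  | participant),
               data = d, REML = FALSE)
m_laeq <- lmer(perceived_loudness ~ LAeq + (1 + LAeq | participant),
               data = d, REML = FALSE)
m_laf5 <- lmer(perceived_loudness ~ LAF5 + (1 + LAF5 | participant),
               data = d, REML = FALSE)

AIC(m_lpl, m_laeq, m_laf5)   # lower AIC indicates the better-fitting model
```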

Concerning RQ2 (To what extent do the three domains influence loudness perception?), the baseline model LPL.b, with Predicted loudness as both the sole fixed effect and the sole random effect, is considered first. As shown in Table V, 34% of the variance (R2m) was explained by this single fixed effect, whereas the same model, once the random intercept and random slope are taken into account, explained 53% (R2c) of the variance in the Perceived loudness ratings. For the model LPL.bp, which represents all predictors from the sound-related domain, the perceptual predictors of the sound ratings were added. This yielded a substantial gain in explained variance, with R2m increasing by 15 percentage points to 49% and R2c by 11 points to 64%.
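The R2m/R2c decomposition follows Nakagawa et al. (2017). One possible implementation, not necessarily the one used here, is r.squaredGLMM from the MuMIn package:

```r
library(MuMIn)  # assumption: MuMIn as one implementation of Nakagawa's R2

r.squaredGLMM(m_lpl)
# R2m (marginal):    variance explained by the fixed effects alone
# R2c (conditional): variance explained by fixed plus random effects
```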

Moreover, the non-auditory, time-varying situational predictors from the second domain (model LPL.bps) contribute nothing further to the variance explained in terms of R2m or R2c. However, if the situational predictors are replaced by the non-auditory, relatively temporally stable person-related and socio-economic predictors from the third domain (model LPL.bpp), R2m increases by 1% while R2c remains unchanged. Finally, if all of the predictors described previously are considered together (model LPL.bpsp), all model fit parameters stay almost unchanged compared to the model with temporally stable but without time-varying predictors (LPL.bpp). The maximum VIF value observed in each model does not exceed 4.0, indicating acceptably low multicollinearity.
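As a sketch, classic VIF values can be derived directly from the correlation matrix of the fixed-effects design matrix; m_full stands for a hypothetical full model such as LPL.bpsp:

```r
X <- model.matrix(m_full)[, -1]  # fixed-effects design matrix without intercept
vif <- diag(solve(cor(X)))       # VIF_j = 1 / (1 - R2_j) per predictor
max(vif)                         # values above about 4 would be flagged here
```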

Regarding RQ3, Table VI presents the z-standardized regression coefficients (β) of the four more complex models with the smallest AIC values. In general, effects were significant at p < 0.05 across all of the LPL models discussed, with comparable estimates. Within the sound-related domain, Predicted loudness showed the largest impact, followed by two significant perceptual ratings: lower Perceived loudness is related to higher Soundscape pleasantness, while higher Soundscape eventfulness is associated with higher Perceived loudness. Liking the most salient sound (Salient sound preference) is significantly associated with somewhat decreased Perceived loudness values. From the Soundscape composition, seven of the eight sound source categories showed significant effects on the loudness ratings; only human sounds did not relate significantly to loudness perception. For all significant categories, higher saliency was associated with higher loudness ratings, ranging from small effects of Traffic, Speech, and Music to minimal effects of sounds from Nature, domestic Installation, and technical Signals.

The situational domain revealed a few significant but minimal time-varying effects. The strongest of these, the (also person-related) Cognitive load, was associated with higher loudness ratings, as were perceived Control and Arousal, which showed minimal positive effects. Contrary to expectations based on previous findings, no significant relation was observed for the state affect Valence. Moreover, all situation-related Activity categories showed minimal significant negative effects on the loudness ratings compared to the reference activity, sleeping/relaxing. Also contrary to expectations, no significant relation was observed for the recording time (RT DEN). From the domain of the temporally stable predictors, only the person-related Age and, as the single socio-economic predictor, having no Neighbors next door were significantly related to the dependent variable, with somewhat lower Perceived loudness values at higher Age and in the absence of neighbors next door.

The computed interaction effects are displayed in Table VII (for readability, only for the Activity level concentrated mental work; see the supplementary material for interaction effects with all Activity levels1). When people do mental work instead of sleeping or relaxing, a significant small positive two-way interaction effect of 0.040 per standard deviation change in Arousal is observed. That is, the negative main effect of Activity concentrated mental work on Perceived loudness shifts in the positive direction (and may thus be compensated) when people's Arousal increases, and in the negative direction when people experience below-average Arousal. The same interaction can also be described from the perspective of the small positive main effect of Arousal: it is intensified when people do mental work instead of sleeping or relaxing. The same holds for the two-way interaction of Cognitive load and Activity concentrated mental work. Finally, a small positive three-way interaction of Activity concentrated mental work, Arousal, and Cognitive load was observed, substantiating the coexistence of both two-way interactions, while the main effect of Arousal becomes non-significant.
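In lme4 syntax, such interaction models could be specified as follows (hypothetical names; the * operator expands to the main effects plus their interactions, and the remaining predictors of the full model are omitted for brevity):

```r
# Two-way model (LPL.bpsp:A): Arousal crossed with Activity
m_A <- lmer(perceived_loudness ~ predicted_loudness + arousal * activity +
              (1 + predicted_loudness | participant), data = d)

# Three-way model (LPL.bpsp:A:CL): expands to all main effects plus all
# two-way interactions plus the three-way interaction
m_ACL <- lmer(perceived_loudness ~ predicted_loudness +
                arousal * cognitive_load * activity +
                (1 + predicted_loudness | participant), data = d)
```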

The present field study, based on the Experience Sampling Method and conducted in domestic environments, aimed to test laboratory-based loudness predictions in complex real-world contexts and to identify crucial predictors from three domains (the sound-related domain, the time-varying situational domain, and the relatively temporally stable person-related domain) for predicting loudness perception in everyday life.

RQ1. Auditory predictors

Results revealed that the energetically averaged loudness level (Kuwano et al., 2013) based on the standardized auditory loudness model of ISO 532-1 (ISO, 2017a) was a significantly better predictor of perceived loudness than less complex measures such as LAeq or LAF5. However, since the improvement over both LAeq and LAF5 is relatively small, their more cost-effective use in routine noise control may be justified.
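For reference, the energetic averaging underlying both the loudness level used here and LAeq amounts to a power mean in the linear domain; a minimal sketch of the principle (the actual LPL computation per Kuwano et al. (2013) operates on the ISO 532-1 loudness time series and involves further steps):

```r
# Energetic (power) average of a level time series in dB:
# linearize, take the arithmetic mean, and convert back
energetic_mean <- function(L_dB) 10 * log10(mean(10^(L_dB / 10)))

energetic_mean(c(50, 60, 70))  # ~65.7 dB: dominated by the loudest segment
```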

RQ2. Influence of three domains of predictors

Looking broadly at which of the three domains influence loudness ratings and to what extent, the limited predictive power of acoustic measures confirms previous research: they explain only one-third of the loudness ratings in everyday situations. For annoyance, which overlaps with loudness to a certain degree but remains a distinct concept (Stallen et al., 2008), a similar impact was expected from theoretical considerations (Guski, 1999) and found in a field study (Beach et al., 2012). Both higher (Spilski et al., 2019) and much lower explained variance has also been reported (Michaud et al., 2016; Bartels et al., 2015; Spilski et al., 2019), emphasizing the general importance of context for sound assessments. Sound-related attributes describing the perceived character of the sound environment had a significant additional impact on perceived loudness, indicating the importance of those aspects of the sound field that are not (yet) captured by acoustic measurements. Evidently, not only higher-level constructs but also elementary perceptual judgments such as the loudness of a sound are influenced by cognitive processes related to type, meaning, and predictability (Fastl, 2001; Hellbrück et al., 2002; Stallen et al., 2008).

The same is true for attention. Whereas in laboratory studies participants' full attention is mostly focused on the auditory stimulus, in everyday life the focus may be taken up by other events and may change from one moment to the next. This may also apply to background noise, which is fully captured by technical loudness measures but may not be consciously perceived by people in everyday life (Meunier et al., 2000).

In this study, the situational, person-related, and socio-economic variables collected in the experiment explain only a small portion of the variance in the loudness ratings, although the related ICCadj values (Table V) indicate that roughly one-third of the differences in loudness ratings must be attributed to the influence of the participants and their homes. Two aspects could be crucial for this unexplained remainder: on the one hand, the influence of expectation, of attention, and, more generally, of the meaning we attribute to sounds, which is difficult to capture empirically even with Experience Sampling. On the other hand, these influences also seem to affect the evaluation of sounds and their loudness differently from person to person. This becomes apparent in the difference between the marginal and conditional R2 values: the explained variance of the model including all influencing variables (sound field, person, situation) increases from 51% to 64% if we allow these factors to interact individually for each person.

Finally, some general limitations of the study become visible at this point. For example, relevant influencing variables may either not have been measured in the experiment or may have had a nonlinear effect on the collected loudness ratings.

RQ3. Influence of individual predictors

The influence of the perceived character of the sound on the loudness ratings falls in roughly equal parts on the two main standardized soundscape factors, pleasantness and eventfulness. Together with the acoustic measures (which have the strongest effect), they explain almost half of the variation in perceived loudness. In this study, participants rated both unpleasant and more eventful sounds as louder. The directions of these effects are essentially consistent with the results of a study on indoor soundscapes by Torresin et al. (2020), who extracted comfort and content (supplemented by a third component, familiarity) instead of pleasantness and eventfulness as the two main principal components of the perceived sound in a mockup living room listening test. However, the different positions of the attributes proposed by ISO/TS 12913 (ISO, 2021) based on the work of Axelsson et al. (2010) (e.g., pleasant, eventful, uneventful, chaotic, calm), with deviations of 6 to 38 degrees between the two circumplex models, indicate that the meanings of the two principal components are not identical. Hence, the use of attributes describing comfort and content could further improve the perception-related portion of indoor sound rating predictions in field studies.

As expected, older people reported lower loudness levels, which seems plausible given the age-related increase in hearing loss, at least for sounds not at the upper limit of the acoustic dynamic range, where hearing loss can lead to disproportionate sensitivity and loudness ratings (recruitment phenomenon). This may also explain why the assessed hearing impairment, defined in this study by participants' hearing threshold, did not show a significant main effect.

Similarly, the person-related noise sensitivity had no significant direct relation with the loudness perception investigated in this study, which is in line with the inconsistent findings regarding this predictor in the literature (Miedema and Vos, 2003; Kroesen et al., 2008; Abbasi et al., 2021). It also coheres with the conclusion by Job (1999) that “noise sensitivity predicts objectively measured physiological reactivity to noise rather than only the introspective judgment of the effects of noise (reaction).” Moreover, recent research on indoor soundscapes suggests that noise sensitivity affects the perceived comfort of the acoustic environment only for specific combinations of activities and sound sources (Torresin et al., 2022), corroborating the results of this study. These findings suggest that no generalizable association exists between noise sensitivity and perceived loudness.

Although some time-varying situational predictors were significant in the statistical model, their influence was negligible. Moreover, they did not increase the total variance explained (see the R2c values in Table V), suggesting that they drain predictive power from the sound- and person-related variables. Nevertheless, all activities were associated with lower loudness perception than the sleeping/relaxing reference condition. Physical and non-physical activities, e.g., exercise or concentrated mental work, seem to draw attention away from the environment's noise level and to shield against auditory distraction (Sörqvist et al., 2016), leading to lower loudness perception. This mechanism appears to work as long as the cognitive load of the mental work is not too high. In situations with both high cognitive load and high arousal, however, the shielding no longer seems effective and loudness perception increases, possibly reflecting the departure from a state of “flow” (Csikszentmihalyi, 2014).

The results of this study did not confirm previous findings regarding a loudness-attenuating influence of a positive affective state (Siegel and Stefanucci, 2011; Asutay and Västfjäll, 2012). Here, again, a difference between laboratory and field studies might become apparent: in the latter, attention is less exclusively focused on the stimulus, and, especially in the home environment, the variance of mood states is likely much lower than what can be induced under controlled laboratory conditions.

Finally, in the domestic sound environments investigated, perceived control over the real-world situation seems to play only a small role in loudness perception, in contrast to studies on aircraft noise that found perceived control to be a powerful predictor of various consequences of noise exposure (Hatfield et al., 2002). Thus, one's own control seems to compensate to some extent for annoyance (Torresin et al., 2022; Schreckenberg et al., 2018; Kroesen et al., 2008) but not for the evaluation of loudness, especially for sounds consciously induced by the participants themselves in the domestic environment, for example, when listening to their own music.

In a field study collecting ratings of 6594 sound environments by 105 participants in their homes using the Experience Sampling Method, only one-third of the variance of perceived loudness could be explained by the measured auditory loudness level. Perceived loudness was best predicted by the loudness level LPL based on ISO 532-1 (ISO, 2017a), closely followed by the A-weighted equivalent continuous sound pressure level LAeq and the A-weighted five-percent exceedance level LAF5, both of which are computationally less expensive, which may justify their use in everyday applications.

The explanation of perceived loudness could be significantly improved to about 50% by considering the sound character rated by the participants with soundscape attributes according to ISO/TS 12913-3 (ISO, 2021). Whereas high soundscape pleasantness and high preference for the most salient sound led to lower perceived loudness, high eventfulness led to higher perceived loudness.

Non-auditory situational, person-related, and socio-economic influences played only a minor role in this study. Both physical and non-physical activities, like doing concentrated mental work, seem to draw attention away from the noise environment and lead to lower loudness perception, as long as the cognitive load of the mental work is not too high; beyond that point, people become susceptible again, and loudness perception increases. In general, however, perceived loudness seems to be less susceptible to non-auditory influences than annoyance.

Interestingly, some influences known from laboratory studies could not be identified in the field study conducted here. This applies to the influence of the participants' affective state and of the perceived control over the sound environment. From our point of view, this underlines the value of field studies, in which a more natural manifestation of the investigated variables can be observed. The Experience Sampling Method proved to be both powerful and easily manageable for such field studies and could be applied in the future to domains other than the domestic living environment studied here.

The authors thank Patrick Blättermann for his advice and constructive discussion of the study design, the statistical analysis, and the manuscript. Furthermore, thanks go to Fabian Rosenthal for calculating the acoustic predictors, assisting with data import, and discussing and further programming the questionnaires. Jenny Winter was responsible for the help texts, subject acquisition, and scheduling of subject introductions. The latter was carried out by Jan Roloff, Jenny Winter, Fabian Rosenthal, and S.V. Particular thanks go to Benjamin Müller, who co-developed the recording device and built the prototype as well as a 10-part small series. Likewise, special thanks go to Christian Epe for expert advice on the low-noise circuit design. The authors thank the three anonymous reviewers for their intensive reading of the manuscript and for their helpful suggestions. We finally thank the German Federal Ministry of Education and Research for funding this study. “FHprofUnt”-Funding Code: 13FH729IX6.

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

The studies involving human participants were reviewed and approved by the Ethics Committee of the Medical Faculty of the University of Duisburg-Essen, Germany. All participants provided digital written informed consent by confirming the declaration on data collection and processing before participating.

S.V. designed and conducted the study, performed the statistical analysis, interpreted the data, wrote the initial manuscript, made revisions, and preprocessed and published the dataset. J.S. and S.W. contributed to the research questions, the design, and the statistical analysis. All authors reviewed the manuscript and approved the final version of the article.

The data of this study are openly available at https://doi.org/10.5281/zenodo.7858848. The real-time audio recordings cannot be made publicly available for privacy reasons; please contact the authors.

1. See supplementary material at https://doi.org/10.1121/10.0019413 for three individual loudness responses (file SuppPub1.jpg); all models (including the models with interaction effects) along with their confidence intervals and VIF values for each estimate, and Q-Q and scatter plots (file SuppPub2.pdf); the Experience Sampling Method questionnaire with an English translation (file SuppPub3.pdf); the original questionnaire for the person-related and socio-economic predictors in German (file SuppPub4.pdf); and a description of all variables assessed, including the English translations for the person-related and socio-economic questionnaire (file SuppPub5.pdf).

2. **** acts as a placeholder for b, bp, bps, bpp, and bpsp.

1. Abbasi, M., Tokhi, M. O., Falahati, M., Yazdanirad, S., Ghaljahi, M., Etemadinezhad, S., and Jaffari Talaar Poshti, R. (2021). "Effect of personality traits on sensitivity, annoyance and loudness perception of low- and high-frequency noise," J. Low Frequency Noise, Vib. Active Control 40, 643–655.
2. Aletta, F., Masullo, M., Maffei, L., and Kang, J. (2016). "The effect of vision on the perception of the noise produced by a chiller in a common living environment," Noise Cont. Eng. J. 64, 363–378.
3. Aletta, F., Oberman, T., Axelsson, Ö., Xie, H., Zhang, Y., Lau, S.-K., Tang, S. K., Jambrošić, K., De Coensel, B., van den Bosch, K. A.-M., Aumond, P., Guastavino, C., Lavandier, C., Fiebig, A., Schulte-Fortkamp, B., Sarwono, A., Astolfi, A., Nagahata, K., Jeon, J.-Y., Jo, H.-I., Chieng, J., Gan, W.-S., Hong, J.-Y., Lam, B., Ong, Z.-T., Kogan, P., Silva, E. S., Manzano, J. V., Yörükočlu, P. N. D., Nguyen, T. L., and Kang, J. (2020). "Soundscape assessment: Towards a validated translation of perceptual attributes in different languages," in Proceedings of Internoise 2020, August 23–26, Seoul, Korea.
4. Alimohammadi, I., Nassiri, P., Azkhosh, M., and Hoseini, M. (2010). "Factors affecting road traffic noise annoyance among white-collar employees working in Tehran," Iran. J. Environ. Health. Sci. Eng. 7, 25–34, available at https://ijehse.tums.ac.ir/index.php/jehse/article/view/228.
5. ANSI (2005). ANSI S1.13-2005, Measurement of Sound Pressure Levels in Air (American National Standards Institute, Washington, DC).
6. ANSI (2007). ANSI S3.4-2007, American National Standard Procedure for the Computation of Loudness of Steady Sound (American National Standards Institute, Washington, DC).
7. Asutay, E., and Västfjäll, D. (2012). "Perception of loudness is influenced by emotion," PLoS One 7, e38660.
8. Axelsson, Ö., Nilsson, M. E., and Berglund, B. (2010). "A principal components model of soundscape perception," J. Acoust. Soc. Am. 128, 2836–2846.
9. Bartels, S., Márki, F., and Müller, U. (2015). "The influence of acoustical and non-acoustical factors on short-term annoyance due to aircraft noise in the field—The COSMA study," Sci. Total Environ. 538, 834–843.
10. Bates, D. M., Mächler, M., Bolker, B., and Walker, S. (2015). "Fitting linear mixed-effects models using lme4," J. Stat. Softw. 67, 1–48.
11. Beach, E. F., Williams, W., and Gilliver, M. (2012). "The objective-subjective assessment of noise: Young adults can estimate loudness of events and lifestyle noise," Int. J. Audiol. 51, 444–449.
12. Bech, P. (1999). "Health-related quality of life measurements in the assessment of pain clinic results," Acta Anaesthesiol. Scand. 43, 893–896.
13. Benz, S. L., Kuhlmann, J., Schreckenberg, D., and Wothge, J. (2021). "Contributors to neighbour noise annoyance," Int. J. Environ. Res. Public Health 18, 8098.
14. Bowling, A. (2005). "Just one question: If one question works, why ask several?," J. Epidemiol. Commun. Health 59, 342–345.
15. Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit (1998). Sechste allgemeine Verwaltungsvorschrift zum Bundes-Immissionsschutzgesetz: Technische Anleitung zum Schutz gegen Lärm–TA Lärm (Sixth General Administrative Regulation on the Federal Immission Control Act—Technical Instructions for Protection against Noise–TA Lärm) (Carl Heymanns Verlag, Hürth, Germany), https://www.gmbl-online.de/download/GMBl-Ausgabe-1998-26.pdf (Last viewed October 6, 2022).
16. Craig, A., Moore, D., and Knox, D. (2017). "Experience sampling: Assessing urban soundscapes using in-situ participatory methods," Appl. Acoust. 117, 227–235.
17. Csikszentmihalyi, M. (2014). Flow and the Foundations of Positive Psychology: The Collected Works of Mihaly Csikszentmihalyi (Springer Netherlands, Dordrecht).
18. De Coensel, B., Botteldooren, D., and Muer, T. D. (2003). "1/f noise in rural and urban soundscapes," Acta Acust. united Ac. 89(2), 287–295.
19. DIN (1996). DIN 45645-1, Determination of Rating Levels from Measurement Data—Part 1: Noise Immission in the Neighbourhood (DIN Deutsches Institut für Normung e. V., Berlin, Germany).
20. DIN (2005). DIN 45681, Acoustics—Determination of Tonal Components of Noise and Determination of a Tone Adjustment for the Assessment of Noise Immissions (DIN Deutsches Institut für Normung e. V., Berlin, Germany).
21. DIN (2009). DIN 1320, Acoustics—Terminology (DIN Deutsches Institut für Normung e. V., Berlin, Germany).
22. DIN (2010). DIN 45631/A1, Calculation of Loudness Level and Loudness from the Sound Spectrum: Zwicker Method—Amendment 1: Calculation of the Loudness of Time-Variant Sound; with CD-ROM (DIN Deutsches Institut für Normung e. V., Berlin, Germany).
23. DIN (2012). DIN 45645-2, Determination of Rating Levels from Measurement Data—Part 2: Determination of the Noise Rating Level for Occupational Activities at the Work Place for the Level Range underneath the Given Risk of Hearing Damage (DIN Deutsches Institut für Normung e. V., Berlin, Germany).
24. DIN (2014). DIN EN 61672-1, Electroacoustics—Sound Level Meters—Part 1: Specifications (DIN Deutsches Institut für Normung e. V., Berlin, Germany).
25. DIN (2020). DIN 45680, Measurement and Assessment of Low-Frequency Noise Immissions (DIN Deutsches Institut für Normung e. V., Berlin, Germany).
26. Eikmann, T., Nieden, A. z., Ziedorn, D., Römer, K., Lengler, A., Harpel, S., Bürger, M., Pons-Kühnemann, J., Hudel, H., and Spilski, J. (2015). "Blood pressure monitoring," Vol. 5, Final report of NORAH: Research Program on Noise-Related Annoyance, Cognition and Health: A Transportation Noise Effects Monitoring Program in Germany.
27. European Commission (2000). "Position paper on EU noise indicators: A report produced for the European Commission," in Environment Themes Urban (Office for Official Publications of the European Communities, Luxembourg).
28. Fastl, H. (2001). "Neutralizing the meaning of sound for sound quality evaluations," in Proceedings of the 20th International Congress on Acoustics, ICA 2001, August 23–27, Sydney, Australia.
29. Fastl, H., and Florentine, M. (2011). "Loudness in daily environments," in Springer Handbook of Auditory Research: Loudness, edited by M. Florentine, A. N. Popper, and R. R. Fay (Scholars Portal, New York), pp. 199–221.
30. Fiebig, A., and Sottek, R. (2015). "Contribution of peak events to overall loudness," Acta Acust. united Ac. 101, 1116–1129.
31. Fox, J., and Weisberg, S. (2019). An R Companion to Applied Regression, 3rd ed. (Sage, Thousand Oaks, CA).
32. Guski, R. (1999). "Personal and social variables as co-determinants of noise annoyance," Noise Health 1(3), 45–56.
33. Halin, N. (2016). "Distracted while reading? Changing to a hard-to-read font shields against the effects of environmental noise and speech on text memory," Front. Psychol. 7, 1196.
34. Hart, S. G. (2006). "NASA-Task Load Index (NASA-TLX); 20 years later," Proc. Human Factors Ergonom. Soc. Ann. Meet. 50, 904–908.
35. Hasegawa, Y., and Lau, S.-K. (2021). "Audiovisual bimodal and interactive effects for soundscape design of the indoor environments: A systematic review," Sustainability 13, 339.
36. Hatfield, J., Job, R. F. S., Hede, A. J., Carter, N. L., Peploe, P., Taylor, R., and Morrell, S. (2002). "Human response to environmental noise: The role of perceived control," Int. J. Behav. Med. 9, 341–359.
37. Hellbrück, J., Fastl, H., and Keller, B. (2002). "Effects of meaning of sound on loudness judgements," in Proceedings of Forum Acusticum, September 16–20, Sevilla, Spain.
38. Heller, O. (1990). "Scaling and orientation," in Fechner Day 90: Proceedings of the Sixth Annual Meeting of the International Society for Psychophysics, edited by F. Müller (Würzburg University, Würzburg, Germany), pp. 52–57.
39. Hinz, A., Daig, I., Petrowski, K., and Brähler, E. (2012). "Die Stimmung in der deutschen Bevölkerung: Referenzwerte für den Mehrdimensionalen Befindlichkeitsfragebogen MDBF" ("Mood in the German population: Norms of the multidimensional mood questionnaire MDBF"), Psychother. Psych. Med. 62, 52–57.
40. Idler, E. L., and Benyamini, Y. (1997). "Self-rated health and mortality: A review of twenty-seven community studies," J. Health Social Behav. 38, 21–37.
41. ISO (2003). ISO 1996-1(E), "Acoustics—Description, measurement and assessment of environmental noise—Part 1: Basic quantities and assessment procedures" (International Organization for Standardization, Geneva, Switzerland).
42. ISO (2007). DIN ISO 16832, "Acoustics—Loudness scaling by means of categories" (International Organization for Standardization, Geneva, Switzerland).
43. ISO (2009). DIN ISO 326-1, "Measurement of noise emitted by accelerating road vehicles—Engineering method—Part 1: M and N categories" (International Organization for Standardization, Geneva, Switzerland).
44. ISO (2010). DIN EN ISO 1102, "Acoustics—Noise emitted by machinery and equipment—Determination of emission sound pressure levels at a work station and at other specified positions applying approximate environmental corrections" (International Organization for Standardization, Geneva, Switzerland).
45. ISO (2017a). DIN ISO 532-1, "Acoustics—Methods for calculating loudness—Part 1: Zwicker method" (International Organization for Standardization, Geneva, Switzerland).
46. ISO (2017b). ISO 532-2, "Acoustics—Methods for calculating loudness—Part 2: Moore-Glasberg method" (International Organization for Standardization, Geneva, Switzerland).
47. ISO (2017c). ISO 1996-2, "Acoustics—Description, measurement and assessment of environmental noise—Part 2: Determination of sound pressure levels" (International Organization for Standardization, Geneva, Switzerland).
48. ISO (2021). DIN ISO/TS 12913-3, "Acoustics—Soundscape—Part 3: Data analysis" (International Organization for Standardization, Geneva, Switzerland).
49. ISO (2022). ISO/DIS 532-3(E), "Methods for calculating loudness—Part 3: Moore-Glasberg-Schlittenlacher method" (International Organization for Standardization, Geneva, Switzerland).
50. ITU (2015). Rec. ITU BS.1770-4, Algorithms to Measure Audio Programme Loudness and True-Peak Audio Level (International Telecommunication Union, Geneva, Switzerland).
51. Job, R. F. S. (1999). "Noise sensitivity as a factor influencing human reaction to noise," Noise Health 1(3), 57–68.
52. Kroenke, K., Spitzer, R. L., Williams, J. B. W., Monahan, P. O., and Löwe, B. (2007). "Anxiety disorders in primary care: Prevalence, impairment, comorbidity, and detection," Ann. Intern. Med. 146, 317–325.
53. Kroesen, M., Molin, E. J. E., and van Wee, B. (2008). "Testing a theory of aircraft noise annoyance: A structural equation analysis," J. Acoust. Soc. Am. 123, 4250–4260.
54. Kulzer, B., Hermanns, N., Kubiak, T., Krichbaum, M., and Haak, T. (2006). "Der WHO-5: Ein geeignetes Instrument zur Messung des Wohlbefindens und zum Depressionsscreening bei Diabetikern" ("The WHO-5: A suitable instrument for measuring well-being and screening for depression in diabetic patients"), Diabetol. Stoffwechsel 1, A97.
55. Kuwano, S., Hatoh, T., Kato, T., and Namba, S. (2013). "Evaluation of the loudness of stationary and non-stationary complex sounds," in Proceedings of ICA 2013, November 23, Brussels, Belgium.
56. Kuznetsova, A., Brockhoff, P. B., and Christensen, R. H. B. (2017). "lmerTest package: Tests in linear mixed effects models," J. Stat. Softw. 82, 1–26.
57. Larson, R., and Csikszentmihalyi, M. (2014). "The experience sampling method," in Flow and the Foundations of Positive Psychology: The Collected Works of Mihaly Csikszentmihalyi, edited by M. Csikszentmihalyi (Springer Netherlands, Dordrecht), pp. 21–34.
58. Meunier, S., Marchioni, A., and Rabau, G. (2000). "Subjective evaluation of loudness models using synthesized and environmental sounds," in Proceedings of Internoise 2000, August 27–30, Paris, France.
59. Michaud, D. S., Keith, S. E., Feder, K., Voicescu, S. A., Marro, L., Than, J., Guay, M., Bower, T., Denning, A., Lavigne, E., Whelan, C., Janssen, S. A., Leroux, T., and van den Berg, F. (2016). "Personal and situational variables associated with wind turbine noise annoyance," J. Acoust. Soc. Am. 139, 1455–1466.
60. Miedema, H. M. E., and Vos, H. (2003). "Noise sensitivity and reactions to noise and other environmental conditions," J. Acoust. Soc. Am. 113, 1492–1504.
61. Moghadam, S. M. K., Alimohammadi, I., Taheri, E., Rahimi, J., Bostanpira, F., Rahmani, N., Abedi, K.-D., and Ebrahimi, H. (2021). "Modeling effect of five big personality traits on noise sensitivity and annoyance," Appl. Acoust. 172, 107655.
62. Moore, B. C. J., and Glasberg, B. R. (1996). "A revision of Zwicker's loudness model," Acta Acust. united Ac. 82(2), 335–345.
63. Moore, B. C. J., and Glasberg, B. R. (2007). "Modeling binaural loudness," J. Acoust. Soc. Am. 121, 1604–1612.
64. Moore, B. C. J., Glasberg, B. R., Varathanathan, A., and Schlittenlacher, J. (2016). "A loudness model for time-varying sounds incorporating binaural inhibition," Trends Hear. 20, 233121651668269.
65. movisens GmbH, Karlsruhe, Germany (2020). "movisensXS," https://www.movisens.com/en/products/movisensXS/ (Last viewed October 6, 2022).
66. Murphy, E., and King, E. A. (2016). "Smartphone-based noise mapping: Integrating sound level meter app data into the strategic noise mapping process," Sci. Total Environ. 562, 852–859.
67. Nakagawa, S., Johnson, P. C. D., and Schielzeth, H. (2017). "The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded," J. R. Soc. Interface 14, 20170213.
68. Pennig, S., and Schady, A. (2014). "Railway noise annoyance: Exposure-response relationships and testing a theoretical model by structural equation analysis," Noise Health 16, 388–399.
69. Picaut, J., Fortin, N., Bocher, E., Petit, G., Aumond, P., and Guillaume, G. (2019). "An open-science crowdsourcing approach for producing community noise maps using smartphones," Build. Environ. 148, 20–33.
70. Posner, J., Russell, J. A., and Peterson, B. S. (2005). "The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology," Develop. Psychopathol. 17, 715–734.
71. Radicchi, A. (2019). "An experimental soundscape study, combining binaural recordings, in situ questionnaires and behavioral mapping," J. Acoust. Soc. Am. 145, 1753.
72. Radun, J., Hongisto, V., and Suokas, M. (2019). "Variables associated with wind turbine noise annoyance and sleep disturbance," Build. Environ. 150, 339–348.
73. R Core Team (2022). "R: A language and environment for statistical computing," https://cran.r-project.org/bin/windows/base/R-4.2.0-win.exe (Last viewed February 29, 2020).
74. Rennies, J., Verhey, J. L., Appell, J. E., and Kollmeier, B. (2013). "Loudness of complex time-varying sounds? A challenge for current loudness models," Proc. Mtgs. Acoust. 19, 50189.
75. Rennies, J., Verhey, J. L., and Fastl, H. (2010). "Comparison of loudness models for time-varying sounds," Acta Acust. united Ac. 96, 383–396.
76. Rothman, K. (1990). "No adjustments are needed for multiple comparisons," Epidemiology 1, 43–46.
77. RStudio Team (2022). "RStudio: Integrated development for R," https://rstudio.com/products/rstudio/ (Last viewed June 22, 2022).
78. Schlittenlacher, J., Hashimoto, T., Kuwano, S., and Namba, S. (2017). "Overall judgment of loudness of time-varying sounds," J. Acoust. Soc. Am. 142, 1841–1847.
79. Schlittenlacher, J., Turner, R. E., and Moore, B. C. J. (2020). "Development of a deep neural network for speeding up a model of loudness for time-varying sounds," Trends Hear. 24, 233121652094307.
80. Schreckenberg, D., Belke, C., and Spilski, J. (2018). "The development of a multiple-item annoyance scale (MIAS) for transportation noise annoyance," Int. J. Environ. Res. Public Health 15, 971.
81. Schütte, M., Marks, A., Wenning, E., and Griefahn, B. (2007). "The development of the noise sensitivity questionnaire," Noise Health 9, 15–24.
82. Siegel, E. H., and Stefanucci, J. K. (2011). "A little bit louder now: Negative affect increases perceived loudness," Emotion 11, 1006–1011.
83. Sörqvist, P., Dahlström, Ö., Karlsson, T., and Rönnberg, J. (2016). "Concentration: The neural underpinnings of how cognitive load shields against distraction," Front. Hum. Neurosci. 10, 221.
84. Spilski, J., Bergström, K., Möhler, U., Lachmann, T., and Klatte, M. (2019). "Do we need different aircraft noise metrics to predict annoyance for different groups of people?," in Proceedings of the 23rd International Congress on Acoustics, ICA 2019, September 9–13, Aachen, Germany.
85. Stallen, P. J. M., Campbell, T. A., Dubois, D., Fastl, H., Andringa, T. C., and Stallen, P. J. (2008). "When exposed to environmental sounds, would perceived loudness not be affected by social context?," J. Acoust. Soc. Am. 123, 3690.
86. Steffens, J., Müller, F., Schulz, M., and Gibson, S. (2020). "The effect of inattention and cognitive load on unpleasantness judgments of environmental sounds," Appl. Acoust. 164, 107278.
87. Steffens, J., Steele, D., and Guastavino, C. (2015). "New insights into soundscape evaluations using the experience sampling method," in Proceedings of Euronoise 2015, May 31–June 3, Maastricht, Netherlands, pp. 1495–1500.
88. Steffens, J., Steele, D., and Guastavino, C. (2017). "Situational and person-related factors influencing momentary and retrospective soundscape evaluations in day-to-day life," J. Acoust. Soc. Am. 141, 1414–1425.
89. Steffens, J., Müller, F., Schulz, M., and Gibson, S. (2019). "Cognitive load influences the evaluation of complex acoustical scenarios," in Proceedings of the 23rd International Congress on Acoustics, ICA 2019, September 9–13, Aachen, Germany.
90. Steyer, R. (1997). "MDMQ questionnaire (English version of MDBF)," https://www.metheval.uni-jena.de/mdbf.php (Last viewed August 8, 2022).
91. Steyer, R., Schwenkmezger, P., Notz, P., and Eid, M. (1997a). "Testtheoretische Analysen des Mehrdimensionalen Befindlichkeitsfragebogen (MDBF)" ("Theoretical analyses of the multidimensional mood questionnaire (MDBF)"), Diagnostica 40(4), 320–328.
92. Sun, K., De Coensel, B., Echevarria Sanchez, G. M., van Renterghem, T., and Botteldooren, D. (2018). "Effect of interaction between attention focusing capability and visual factors on road traffic noise annoyance," Appl. Acoust. 134, 16.
93. Sung, J. H., Lee, J., Jeong, K. S., Lee, S., Lee, C., Jo, M.-W., and Sim, C. S. (2017). "Influence of transportation noise and noise sensitivity on annoyance: A cross-sectional study in South Korea," Int. J. Environ. Res. Public Health 14, 322.
94. Swift, S. H., and Gee, K. L. (2020). "Techniques for the rapid calculation of the excitation pattern in the time varying extensions to ANSI S3.4-2007," Proc. Mtgs. Acoust. 36, 040002.
95. The jamovi project (2022). "jamovi," https://www.jamovi.org/download.html (Last viewed June 22, 2022).
96. Topp, C. W., Østergaard, S. D., Søndergaard, S., and Bech, P. (2015). "The WHO-5 Well-Being Index: A systematic review of the literature," Psychother. Psychosom. 84, 167–176.
97. Torresin, S., Albatici, R., Aletta, F., Babich, K., and Kang, J. (2019). "Assessment methods and factors determining positive indoor soundscapes in residential buildings: A systematic review," Sustainability 11, 5290.
98. Torresin, S., Albatici, R., Aletta, F., Babich, F., Oberman, T., Siboni, S., and Kang, J. (2020). "Indoor soundscape assessment: A principal components model of acoustic perception in residential buildings," Build. Environ. 182, 107152.
99. Torresin, S., Albatici, R., Aletta, F., Babich, F., Oberman, T., Stawinoga, A. E., and Kang, J. (2022). "Indoor soundscapes at home during the COVID-19 lockdown in London—Part II: A structural equation model for comfort, content, and well-being," Appl. Acoust. 185, 108379.
100. Västfjäll, D. (2002). "Influences of current mood and noise sensitivity on judgments of noise annoyance," J. Psychol. 136, 357–370.
101. Ventura, R., Mallet, V., and Issarny, V. (2018). "Assimilation of mobile phone measurements for noise mapping of a neighborhood," J. Acoust. Soc. Am. 144, 1279–1292.
102. Ventura, R., Mallet, V., Issarny, V., Raverdy, P.-G., and Rebhi, F. (2017). "Evaluation and calibration of mobile phones for noise monitoring application," J. Acoust. Soc. Am. 142, 3084–3093.
103. Versümer, S., Steffens, J., Blättermann, P., and Becker-Schweitzer, J. (2020). "Modeling evaluations of low-level sounds in everyday situations using linear machine learning for variable selection," Front. Psychol. 11, 570761.
104. Versümer, S., Steffens, J., and Rosenthal, F. (2023). "Extensive crowdsourced dataset of in-situ evaluated binaural soundscapes of private dwellings containing subjective sound-related and situational ratings along with person factors to study time-varying influences on sound perception — research data" (V.01.1) [Data set], Zenodo, https://doi.org/10.5281/zenodo.7858848.
105. Ward, D., Athwal, C., and Kokuer, M. (2013). "An efficient time-varying loudness model," in Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2013), October 20–23, New Paltz, NY, pp. 1–4.
106. Ward, D., Enderby, S., Athwal, C., and Reiss, J. D. (2015). "Real-time excitation based binaural loudness meters," in Proceedings of the International Conference on DAFx, November 30–December 3, Trondheim, Norway.
107. Winter, B. (2013). "Linear models and linear mixed effects models in R with linguistic applications," https://arxiv.org/abs/1308.5499v1 (Last viewed January 5, 2023).
108. Zuur, A. F., Ieno, E. N., and Elphick, C. S. (2010). "A protocol for data exploration to avoid common statistical problems," Methods Ecol. Evol. 1, 3–14.

Supplementary Material