This study investigates loudness perception in real-world contexts using predictors related to the sound, the situation, or the person. In the study, 105 participants recorded 6594 sound environments in their homes and evaluated them following the Experience Sampling Method. Hierarchical linear regressions using a loudness level based on ISO 532-1 yielded the best model fits and explained the most variance in perceived loudness. LAeq and LAF5 provided comparable results and may require less computational effort. However, the analysis shows that only one-third of the variance explained by fixed effects was attributable to the loudness level. Sixteen percent stemmed from perceived properties of the soundscape; 1% was attributable to relatively temporally stable, person-related predictors like participants' age; non-auditory situational predictors made no additional contribution. The results thus did not confirm previous findings on loudness perception under laboratory conditions, emphasizing the importance of the situational context. Along with the current paper, a comprehensive dataset, including the assessed person-related, situational, and sound-related measures as well as LAeq time-series and third-octave spectrograms, is provided to enable further research on sound perception, indoor soundscapes, and emotion.
I. INTRODUCTION
A. Loudness predictions
Perceived loudness, understood as the magnitude of the auditory sensation a listener experiences when exposed to sound, is a leading aspect in noise research seeking to maintain and promote people's health and the quality of their living environments (De Coensel et al., 2003). In standardization and noise control, aggregated simple acoustic measures that can be gathered and calculated time- and cost-effectively, such as the energetically averaged equivalent continuous sound pressure level, are used as loudness approximations (see DIN 1320:2009, DIN, 2009; European Commission, 2000; Stallen et al., 2008). Various frequency and time weightings are applied to emulate human auditory processing. Some of these weightings have been developed for specific signal types and sound pressure levels, which limits their application when sound levels or sound types change significantly during the observed period, such as during an aircraft flyover or the passing of a truck in an otherwise quiet residential area (DIN, 2014; ISO, 2010; ISO, 2003; ISO, 2009; ITU, 2015). Therefore, penalties are applied to further improve these loudness predictions. They account for signal characteristics (e.g., strong tonal components), the type of sound (e.g., the kind of traffic or the presence of predominant low-frequency components), and the context (e.g., the time of day) (ISO, 2003; Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit, 1998; DIN, 2020; DIN, 2005; ANSI, 2005; DIN, 1996; DIN, 2012).
In parallel, decades of research based on psychoacoustic experiments have produced several complex loudness prediction models that mimic the human auditory system (e.g., the outer ear canal, the frequency-place transformation of the basilar membrane, critical bands, and masking effects). In addition, these models account for signal properties such as the sound level. For example, ISO 532-1 (ISO, 2017a), a refined version of DIN 45631/A1 (DIN, 2010), incorporates the method developed by Zwicker, providing loudness predictions for both stationary and time-varying sounds using third-octave filtering. Another standardized method for stationary sounds, derived by Moore (2016), makes further refinements. For example, it uses the ERBN scale (i.e., the equivalent rectangular bandwidth of the auditory filter), a higher-resolution filter bank for representing the middle ear and calculating the excitation pattern on the basilar membrane (S3.4-2007 from ANSI, 2007; Moore and Glasberg, 1996), and binaural inhibition (ISO 532-2 from ISO, 2017b; Moore and Glasberg, 2007). The recent standard ISO 532-3 (ISO, 2022) also applies that method to time-varying sounds (ISO, 2017a; Moore et al., 2016). Because higher computational costs accompany Moore's improvements, some studies have offered suggestions to speed up the calculations (Ward et al., 2013; Ward et al., 2015; Swift and Gee, 2020; Schlittenlacher et al., 2020). However, these have not yet been incorporated into the implemented standards.
B. Laboratory vs field studies
All of the loudness predictions mentioned previously were developed and validated in laboratory experiments using synthetic signals lasting only a few seconds (e.g., Moore and Glasberg, 2007; Rennies et al., 2010; Fiebig and Sottek, 2015; ISO, 2017a) or everyday sounds taken out of context (ISO, 2017a; Rennies et al., 2013). Under these conditions, the predictions agree well with participants' loudness ratings (Meunier et al., 2000).
In addition to laboratory experiments, a growing number of field studies on noise distribution and perception have aimed to increase the ecological validity of their results. The acoustic environment in which the participants reside is either approximated using noise maps or determined using stationary microphones installed outdoors or indoors, dosimeters (worn by participants during the day), or smartphones with built-in or external microphones. Some of these studies examine noise distribution in urban areas (e.g., Murphy and King, 2016; Ventura et al., 2018; Radicchi, 2019), while others investigate the annoyance and pleasantness of urban soundscapes in general (Craig et al., 2017; Steffens et al., 2017; Picaut et al., 2019) or, of particular relevance here, aircraft noise (Bartels et al., 2015) or perceived loudness (Beach et al., 2012). For example, in a field study by Beach et al. (2012), participants assessed perceived loudness during 48 h of their daily activities using real-time audio recordings. The correlations between perceived loudness and acoustical loudness predictors, calculated per participant, had a mean of r = 0.56. Thus, the predictive power observed in laboratory studies decreases when real-world context, personal characteristics, and socio-economic factors come into play in field studies. These findings suggest that the situation and the person play pivotal roles in individual perception.
In field studies, participants' evaluations are typically collected using questionnaires, interviews, pen-and-paper diaries, or survey apps on smartphones. Even though some effort is necessary with smartphones to avoid errors due to high self-noise (i.e., a small signal-to-noise ratio for quiet soundscapes), wind noise, and uncalibrated recordings (Picaut et al., 2019; Ventura et al., 2017), they offer crucial advantages. For example, smartphones allow participants to measure acoustic characteristics and capture perceptual ratings of sound or contextual properties on the same device.
C. Context and non-auditory influencing factors
Research has provided several indications that non-auditory contextual factors influence human sound perception, with respect to annoyance and perceived pleasantness (Bartels et al., 2015; Spilski et al., 2019) as well as loudness perception in particular (Fastl and Florentine, 2011; Stallen et al., 2008). Guski's (1999) theoretical model for noise annoyance quantifies the multitude of influences on sound perception by suggesting that sound characteristics can explain only one-third of the variance in noise annoyance ratings. Another third of this variance may result from non-auditory factors, i.e., individual, personal, situational, or social ones. Additional studies reveal an even lower impact of the sound itself. For example, Michaud et al. (2016) and Bartels et al. (2015) attributed only 9% and 14% of the explained variance to the sound, respectively; the latter additionally found that 14% of the variance was due to the situational context (Bartels et al., 2015). Person-related and socio-economic predictors, which tend to be relatively stable over time, seem to contribute little to annoyance or lead to contradictory results (see Torresin et al., 2019, for a review). This might indicate that they are of minor importance, but it could also mean that the relevant person-related predictors have not yet been identified.
While influencing factors on annoyance have been investigated intensively (e.g., Alimohammadi et al., 2010; Bartels et al., 2015; Michaud et al., 2016; Steffens et al., 2020; Versümer et al., 2020; Benz et al., 2021; Hasegawa and Lau, 2021; Moghadam et al., 2021), only a few studies have revealed non-auditory time-varying effects on loudness perception. First, the listener's emotional state appears prominent in loudness perception: a more positive affective state is associated with lower perceived loudness (Siegel and Stefanucci, 2011; Asutay and Västfjäll, 2012) and higher perceived pleasantness (Steffens et al., 2017; Torresin et al., 2019; Västfjäll, 2002). Second, high concentration on a single cognitive task may shield against acoustic distractions (Sörqvist et al., 2016; Halin, 2016), which could explain why people performing a cognitive task perceive sounds as less loud (Aletta et al., 2016). This accords with findings from a laboratory study in which subjects rated environmental sounds as 7% less loud and 6% more pleasant under high (compared to low) cognitive load (Steffens et al., 2019).
Beyond state affect and cognitive load, it may be worthwhile to explore other potential predictors often addressed in annoyance research, e.g., noise sensitivity and control over the situation or the sound source (Torresin et al., 2022; Sun et al., 2018; Sung et al., 2017; Pennig and Schady, 2014; Schreckenberg et al., 2018; Kroesen et al., 2008). In addition, the question of how much each predictor contributes to the variance explained in annoyance and loudness perception remains unresolved.
D. Pending research needs
Based on the state of the research described previously, this study addressed the following research questions:
RQ1. Auditory predictors
Do complex auditory loudness models outperform simple acoustical loudness measures (without penalties) in real-world scenarios in which the sound environment is experienced in its original context?
RQ2. Influence of three domains of predictors
What is the influence on perceived loudness of the three domains: (1) the sound field, (2) the non-auditory time-varying effects that change rather quickly from situation to situation, and (3) the non-auditory, relatively temporally stable person-related and socio-economic effects?
RQ3. Influence of individual predictors
What is the influence of each predictor within these three domains on human loudness perception in real-world scenarios?
A field study based on the Experience Sampling Method was conducted in participants' dwellings to answer these research questions and overcome some of the challenges mentioned before. First, using specially designed binaural audio recording devices with low self-noise and a survey app on a smartphone, participants reported on their acoustic environment and the situation at hand multiple times a day. Then, based on the gathered data, models were developed that predict perceived loudness from different acoustic measures as well as from perceptual ratings of the sound, situational aspects, and personal characteristics.
II. METHOD
A. Participants
Table I summarizes the characteristics of the participants in this study, which was conducted during the summer of 2021. Participants were recruited via newspaper articles, social media posts, local radio and television broadcasts, and through friends and acquaintances. Participants were excluded if they planned to be away from home for more than two days during the 10-day participation period, could not report five times a day, or used hearing aids. Two participants dropped out for these reasons. In addition, one participant's results were excluded because the recordings could not be linked to the assessments (due to unsystematic time differences of more than 60 min).
TABLE I. Sample description regarding socio-demographic and socio-economic characteristics and potential hearing impairments.
 | Frequency | | Age in years (rounded) | |
---|---|---|---|---|---
 | Absolute | Relative | M | Standard deviation (SD) |
Participants | 105 | 100% | 36 | 14 | |
… women | 57 | 54% | 35 | 14 | |
… men | 48 | 46% | 37 | 15 | |
… non-binary | 0 | 0% | |||
… living alone | 29 | 28% | 37 | 14 | |
… living with others | 76 | 72% | 36 | 14 | |
… living with children | 16 | 15% | 39 | 13 | |
… living without children | 89 | 85% | 35 | 14 | |
… having neighbors (a) | 99 | 94% | 100% | 36 | 14
… next door | 82 | 78% | 83% | 36 | 14 |
… below | 59 | 56% | 60% | 34 | 12 |
… above the participant's dwelling | 52 | 50% | 53% | 37 | 13 |
… having hearing impairments (b) | | | | |
… none | 63 | 60% | 31 | 11 | |
… mild | 31 | 30% | 40 | 13 | |
… moderate | 11 | 10% | 54 | 14 | |
… living in a household of | |||||
… 1 person | 29 | 27% | 37 | 14 | |
… 2 persons | 45 | 43% | 38 | 14 | |
… 3 persons | 21 | 20% | 30 | 13 | |
… 4 persons | 8 | 8% | 33 | 14 | |
… 5 persons | 2 | 2% | 53 | 1 |
(a) Multiple responses were possible.
(b) For a description of the definition used for the hearing impairment, see Sec. II B 4.
B. Design and questionnaires
The experimental field study was based on the Experience Sampling Method, previously developed to study what people do, feel, and think during daily activity (Larson and Csikszentmihalyi, 2014). Regarding soundscapes, participants periodically make momentary judgments of the acoustic environment, the surrounding situation, and their emotional state throughout the day while acting naturally in their everyday environment (Steffens et al., 2015).
For each participant, the study lasted ten consecutive days. Participants were asked to record and assess their acoustic environment at home on an hourly basis to reach the target of 70 assessments. Because each participant submitted multiple reports over the study period, the study used a mixed within- and between-subjects design. There was no manipulation or intervention, so as to interfere with the participants' everyday lives as little as possible.
The predictors assessed stemmed from three domains. First, the sound-related domain includes acoustic predictors (Sec. II B 1) and perceptual predictors (Sec. II B 2) derived through participant judgments. Second, the situational domain comprises non-auditory time-varying predictors (Sec. II B 3) that characterize the framing situation; it includes affective measures (also person-related), whose values vary considerably from one situation to another. Third, the person-related domain comprises non-auditory, relatively temporally stable predictors (Sec. II B 4) that are tied to the person, such as age, noise sensitivity, or socio-economic predictors.
The acoustic predictors were calculated from binaural recordings (Sec. II C), while perceptual and situational predictors were assessed using a smartphone survey app (Sec. II C). In addition, this study adapted standardized and established assessments (single-choice, analogue sliders, and Likert scales, non-randomized) for frequent use on small smartphone displays. Next, the person-related and socio-economic predictors were assessed using a tablet-and-pen questionnaire with closed questions. Finally, the multi-item questionnaires' Likert scales (non-randomized) were each presented in a matrix; summation or averaging of the values of interrelated items yielded the predictors' values.
All German questionnaires are available with a translation to English in the supplementary material,1 taken from the published dataset (Versümer et al., 2023).
1. Sound-related acoustic predictors
First, the binaural recordings were calibrated and filtered (see Sec. II C for details) to calculate the three acoustic predictors (Table II) used in the statistical analysis (Sec. II E). Second, based on the ISO 532-1 (ISO, 2017a) algorithm for time-varying sounds, the instantaneous loudness was calculated for each channel of each recording, assuming diffuse-field conditions. Then, the loudness level LLz(P) was calculated, as Kuwano (2013) suggested. This procedure aligns with work by Schlittenlacher (2017), where the LLz(P) showed higher correlations with overall loudness ratings of binaural recordings of real sound environments than the percentile loudness N5. Finally, of the two loudness levels derived from the binaural recording, the higher value was chosen as a single value representing the whole recording, following ISO 12913-3 (ISO, 2021). This value serves as the Predicted loudness level (named LPL in this study) for all analyses presented; for comparison, the LAeq and LAF5 (see ISO 1996-1:2003; ISO, 2003) were also calculated, again using the higher value of the two binaural channels.
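To illustrate the processing chain described above, the following minimal sketch (in R, the language used for the statistical analyses) assumes a hypothetical function iso532_1_loudness_tv() that returns the instantaneous loudness in sone for one calibrated channel under diffuse-field conditions; the percentile-based aggregation is only a placeholder, as the exact LLz(P) statistic suggested by Kuwano (2013) is not reproduced here.

```r
# Minimal sketch of the LPL derivation (assumptions noted inline).
# iso532_1_loudness_tv() is a hypothetical helper; any ISO 532-1 implementation
# for time-varying sounds could be substituted.
loudness_level_phon <- function(N_sone) {
  # Standard sone-to-phon conversion, valid for N >= 1 sone.
  40 + 10 * log2(pmax(N_sone, 1e-6))
}

predicted_loudness_level <- function(left, right, fs) {
  per_channel <- sapply(list(left, right), function(channel) {
    # Instantaneous loudness N(t) in sone, diffuse-field conditions (ISO 532-1).
    N_t <- iso532_1_loudness_tv(channel, fs, field = "diffuse")
    # Placeholder aggregation: percentile loudness converted to a loudness level;
    # the exact LLz(P) definition is an assumption here.
    loudness_level_phon(quantile(N_t, 0.95))
  })
  max(per_channel)  # higher of the two binaural values (following ISO 12913-3)
}
```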
TABLE II. Sound-related acoustic or perceptual predictors and their ranges or factor levels. Reference level (in bold) of Salient source ownership: Other. For question texts, scale types, and original German texts, see the supplementary material (Ref. 1) taken from the published dataset (Versümer et al., 2023).
 | Variable name | Range or levels
---|---|---
Sound | ||
Dependent variable | Perceived loudness | [1, 50] |
Predicted auditory loudness level | Predicted loudness | |
A-weighted equivalent continuous sound pressure level | LAeq | |
A-weighted five percent exceedance level | LAF5 | |
Preference of the most salient sound | Salient sound preference | very disinclined — very inclined [−3, 3] |
Whom the sound source of the most salient sound belongs to | Salient source ownership | 1 Participant or his or her family (yes)
2 Others (no) | ||
Pleasantness of the indoor soundscape | Soundscape pleasantness | [−10, 10] |
Eventfulness of the indoor soundscape | Soundscape eventfulness | [−10, 10] |
Soundscape composition: Cumulated salience of all audible | SC Nature | not noticeable at all — extremely noticeable; [0, 10]
sounds of each of the eight sound categories | SC Human | not noticeable at all — extremely noticeable; [0, 10]
SC Household (technical) | not noticeable at all — extremely noticeable; [0, 10] | |
SC Installation (technical) | not noticeable at all — extremely noticeable; [0, 10] | |
SC Signals (technical) | not noticeable at all — extremely noticeable; [0, 10] | |
SC Traffic (technical) | not noticeable at all — extremely noticeable; [0, 10] | |
SC Speech | not noticeable at all — extremely noticeable; [0, 10] | |
SC Music/singing | not noticeable at all — extremely noticeable; [0, 10] |
2. Sound-related perceptual predictors
FIG. 1. (Color online) Screenshot of the assessment of Perceived loudness of the indoor acoustic environment, using a combination of a verbal categorical scale and a numerical scale for partitioning, as presented in the survey app on the smartphone.
Additionally, participants described the sound environment as a composition of sounds by assigning all audible sounds to one of the eight sound categories. Participants then indicated the saliency of each category using 11-level Likert scales (refer to Soundscape composition in Table II and Fig. 2).
FIG. 2. (Color online) Smartphone screenshot displaying the assessment of the Soundscape composition, for which participants rated the perceived accumulated salience of all sound sources related to the eight sound categories: Nature, Human, Household appliances (indoors), house Installation/heating/ventilation (indoors), Signals/ringing tones/alarms/information tones, Traffic/construction work/industry (outdoors), Speech, and Music/singing.
Finally, participants identified the most salient sound of the acoustic environment and rated its Salient sound preference on a verbal seven-level Likert scale ranging from very disinclined to very inclined. Then, they reported the Salient source ownership, i.e., who owned the associated sound source. Due to the highly heterogeneous distribution, which would have caused problems in statistical evaluations, the original four response options—You personally, Your family, Your neighbors, and Someone else/something else/unknown—were aggregated into two categories: the Participant or his or her family (yes) or Others (no).
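As a small illustration of this recoding step, a sketch in R (the language used for the statistical analyses) is given below; the data frame and column names are placeholders, not the names used in the published dataset.

```r
# Sketch: collapsing the four ownership responses into the binary predictor.
# "d" and "ownership_raw" are placeholder names for the assessment data.
d$salient_source_ownership <- factor(
  ifelse(d$ownership_raw %in% c("You personally", "Your family"), "yes", "no"),
  levels = c("no", "yes")  # "Others" (no) is the reference level (cf. Table II)
)
```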
3. Situational predictors
Among the non-auditory time-varying predictors that change from situation to situation (Table III), the momentary affective state was described by Valence and Arousal following the circumplex model of affect (Posner et al., 2005). This state was assessed directly after the questionnaire's start with one continuous slider each; each slider defaulted to the middle of the scale and had to be touched or moved to proceed to the next question. In addition to Arousal, Wakefulness was assessed with a continuous slider to capture the perceived level of tiredness versus wakefulness (in the style of Steyer, 1997; Steyer et al., 1997a), because the two scales describe different, though not independent, dimensions (Hinz et al., 2012). This distinction is meaningful when, for example, a person feels tired (i.e., low alertness) yet experiences a high activation level because they must complete a time-sensitive or important task while exhausted. Another example is a person who feels tired yet relaxed (i.e., low arousal) at the start of a free weekend after a busy work week.
TABLE III. Situational predictors and their ranges or factor levels. In bold: the factor level containing the most reports. For questions and original German texts, see the supplementary material (Ref. 1) taken from the published dataset (Versümer et al., 2023).
 | Variable name | Range or levels
---|---|---
Situation | ||
State affect | Valence | Negative mood — neutral — positive mood; [−5, 5] |
Arousal | No activation — strong activation; [0, 10] | |
Wakefulness | Tired/limp — awake/chipper; [−5, 5] | |
Perceived control over the sound situation | Control | No control — complete control; [1, 7] |
The cognitive and physical load imposed by the activity | Cognitive load | Very little — very much; [0, 10] |
Physical load | ||
The activity immediately before reporting | Activity | 1 Cooking/housework/workout |
2 Concentrated mental work | ||
3 Social interaction | ||
4 Sleeping/relaxing | ||
Recording time (in hours) | RT DEN | [07, 19] Day |
[19, 23] Evening | ||
[23, 07] Night |
Cognitive load and Physical load were each assessed to examine the possible dependence of loudness ratings on the task participants were engaged in before each poll. This was facilitated by an adaptation of the NASA Task Load Index, developed to examine the performance of individuals driving or operating machines, vehicles, or aircraft (NASA TLX; Hart, 2006). In addition, participants answered the question, "How much control do you personally have over the sound situation you report?" by reporting their perceived Control over the sound situation on a verbal seven-level Likert scale ranging from no control to complete control. Finally, participants reported the Activity they were engaged in immediately before conducting the hourly assessment using a nominal scale with four response options: cooking/housework/workout, concentrated mental work, social interaction, and sleeping/relaxing. The recording times taken from the time stamp saved with each assessment were clustered into three time ranges according to ISO 1996-2:2017 (ISO, 2017c): day (07 to 19 h), evening (19 to 23 h), and night (23 to 07 h), as sketched below.
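A minimal sketch of this clustering in R, assuming the hour has already been extracted from the time stamp:

```r
# Sketch: assigning report hours to the ISO 1996-2 day/evening/night ranges.
rt_den <- function(hour) {
  h <- hour %% 24
  factor(ifelse(h >= 7 & h < 19, "Day",
         ifelse(h >= 19 & h < 23, "Evening", "Night")),
         levels = c("Day", "Evening", "Night"))
}
rt_den(c(6, 12, 20, 23))  # Night, Day, Evening, Night
```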
4. Person-related and socio-economic predictors
Concerning the non-auditory, relatively temporally stable person-related and socio-economic predictors (Table IV), mean Noise sensitivity was measured using the German version (see Table II.9 of Eikmann et al., 2015) of the NoiSeQ-R (Schütte et al., 2007) by averaging the three subscales for noise sensitivity regarding sleep, work, and habitation, each queried with four items on four-level Likert scales. Participants' Hearing impairment was measured for both ears at the octave frequencies from 250 Hz to 8 kHz using the Audiometer (HEAD acoustics GmbH, Herzogenrath, Germany) with Sennheiser HDA-300 headphones (Sennheiser, Wedemark, Germany), using the larger hearing loss of the two ears in each frequency band. In addition, the perceived general Health status was assessed using a single-item question, "How, in general, would you rate your health?", answered on a five-level scale ranging from bad to good. Single-item measures of general health status are sufficient for research purposes when brevity is adequate and specific health aspects are not of particular interest (Idler and Benyamini, 1997; Bowling, 2005; Radun et al., 2019).
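The following sketch illustrates these two derivations in R; the variable names are placeholders, and basing the impairment category on the largest measured hearing loss is an assumption.

```r
# Sketch: mean noise sensitivity as the average of the three NoiSeQ-R subscale
# scores, and the hearing-impairment categories listed in Table IV.
noise_sensitivity <- rowMeans(cbind(noiseq_sleep, noiseq_work, noiseq_habitation))

hearing_impairment <- cut(worst_hearing_loss_dBHL,     # largest loss across ears/bands (assumption)
                          breaks = c(-Inf, 20, 35, Inf),
                          labels = c("0", "1", "2"))   # <=20, >20-35, >35 dB HL
```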
TABLE IV. Person-related and socio-economic measures and their ranges or factor levels. In bold: the factor level containing the most reports. For question texts, scale types, and original German texts, see the supplementary material (Ref. 1) taken from the published dataset (Versümer et al., 2023).
 | Variable name | Range or levels
---|---|---
Person | ||
Demographics | Age | [0, ∞] |
Gender | Female (f), male (m), non-binary (d) | |
Mean noise sensitivity (a) | Noise sensitivity | low — high; [0, 3]
Hearing impairment | Hearing impairment | 0 less than or equal to 20 dB HL |
1 over 20 and up to 35 dB HL | |
2 over 35 dB HL | ||
Health, well-being, and general anxiety disorder | Health | Bad — good; [1, 5] |
Well-being | Worst — best imaginable well-being; [0, 100] | |
Anxiety | [0, 21] | |
Three-dimensional person mood traits (a) | Trait mood | Good — bad; [8, 40]
Trait wakefulness | Awake — tired; [8, 40] | |
Trait rest | Calm — nervous; [8, 40] | |
Socio-economics/living environment | Neighbors above | Yes, no |
Neighbors below | Yes, no | |
Neighbors next door | Yes, no | |
Children | Yes, no | |
People in household | [1, 10] |
(a) Six participants left one of the multiple items of a trait scale blank, resulting in missing values in the dataset. The mean value of the remaining items of the same scale replaced these values.
The following measures refer to the participants' impressions averaged over the past two weeks. Psychological Well-being was assessed using a German version of the WHO-5 questionnaire (Bech, 1999; Topp et al., 2015), which serves as a valid, internationally accepted, and time-efficient measure (e.g., used by Kulzer et al., 2006, to investigate screening characteristics for depressed mood in type 1 diabetic patients). Accordingly, the answers to five six-level Likert scales, ranging from At no time (0) to All of the time (5), were summed. Participants assessed their Anxiety using the GAD-7 questionnaire, developed as a reliable measure for anxiety disorders (Kroenke et al., 2007) and transferred into German for this study via a translation verified by back-translation. Here, the answers to seven four-level Likert scales, ranging from Not at all (0) to Nearly every day (3), were summed. Finally, the German Multidimensional Mood State Questionnaire (MDBF, "Mehrdimensionaler Befindlichkeitsfragebogen"; for the English translation, see Steyer, 1997) served to measure participants' three-dimensional trait affect over the past two weeks. It consists of three dimensions, Trait mood (good–bad), Trait wakefulness (awake–tired), and Trait rest (calm–nervous) (adapted from Steyer et al., 1997a), each assessed by averaging eight verbal five-level Likert scales. Finally, demographic data [i.e., Age (in years) and Gender (female, male, non-binary)] and socio-economic data (i.e., Neighbors living above/below/next door, the number of People in the household, and whether Children were present) were collected.
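As a brief illustration of this scoring, the sketch below sums the WHO-5 and GAD-7 items in R (item column names are placeholders); rescaling the WHO-5 raw sum to the [0, 100] range given in Table IV is an assumption based on the questionnaire's usual percentage score.

```r
# Sketch of the questionnaire scoring ("d" and item columns who5_1..who5_5
# scored 0-5 and gad7_1..gad7_7 scored 0-3 are placeholder names).
who5_raw   <- rowSums(d[, paste0("who5_", 1:5)])  # raw sum, [0, 25]
well_being <- who5_raw * 4                        # percentage score, [0, 100] (assumption)
anxiety    <- rowSums(d[, paste0("gad7_", 1:7)])  # GAD-7 sum, [0, 21]
```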
C. Materials
The low-cost, low-self-noise binaural audio recording devices were developed at the University of Applied Sciences Düsseldorf specifically for this study. Both electret microphones fit into standard earbuds (with speaker and rubber plug removed and microphones pointing outward; see Fig. 3) and provided a low self-noise level of less than 19 dB(A), as shown in the spectra in Fig. 4. Participants placed the earbuds loosely in their cavum conchae (approximately ±90-degree azimuth) to ensure the ear canal was not hermetically sealed. All recordings, performed at a sampling frequency of 44.1 kHz and an amplitude resolution of 16 bits, were saved to a memory card and then archived and removed from the card after each participation. Reference recordings were also made for calibration purposes. That is, both microphones and a sound level meter were placed next to each other in a closed wooden box containing a small loudspeaker (8 cm in diameter, in a separate loudspeaker housing) to record a 94 dB sine wave at 100 Hz (a frequency chosen to avoid acoustical modes). These two-channel recordings were used to calculate calibration factors that were applied to all binaural recordings of the respective device (i.e., all frequencies were adjusted equally), as sketched below. In addition, all calibrated recordings were filtered to compensate for the analogue high-pass filter implemented in the recording devices.
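A minimal sketch of this calibration step in R, assuming the reference and field recordings are available as numeric vectors or two-column matrices of uncalibrated pressure values:

```r
# Sketch: broadband calibration factor from the 94 dB SPL / 100 Hz reference recording.
p0 <- 20e-6                                   # reference sound pressure, 20 µPa
calibration_factor <- function(reference_channel) {
  target_rms <- p0 * 10^(94 / 20)             # RMS pressure of a 94 dB SPL sine (~1 Pa)
  target_rms / sqrt(mean(reference_channel^2))
}

# Apply one factor per channel to a two-column binaural recording.
calibrate <- function(recording, cal_left, cal_right) {
  cbind(recording[, 1] * cal_left, recording[, 2] * cal_right)
}
```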
FIG. 3. (Color online) Binaural recorder with microphones built into the earbuds from which the speakers and rubber plugs were removed. The simple user interface allows for switching the device on or off entirely with the black rocker power switch, starting a recording by briefly pressing the red push button, and stopping a running recording or deleting the last recording by pressing the push button for more than two seconds.
FIG. 4. (Color online) Third-octave spectra (A-weighted, 50% overlap using a Hanning window) of one recording in the anechoic room of the University of Applied Sciences Düsseldorf. The red and black curves represent the two channels of one low-self-noise binaural recording device developed at the university. The green curve represents a ½″ low-self-noise measurement microphone (type GRASS 47HC).
All perceptual sound-related and situational ratings were performed using a Nokia 4.2 smartphone (Nokia, Espoo, Finland) and a survey app (movisensXS, movisens GmbH, Karlsruhe, Germany) (movisens, 2020) that enables complex query procedures and makes the evaluated data immediately available to the study administration. The questions were presented in the same order for all participants, and the items were arranged identically. In addition, participants could only step forward through the questionnaire and thus could not alter entries made on previous pages.
D. Procedure and participant task
Participants received a standardized introduction during individual appointments at the University of Applied Sciences Düsseldorf and completed an audiometry test (250 Hz–8 kHz octaves). Four trained instructors guided the participants through the questionnaire covering the person-related and socio-economic items. In addition, participants received a survey smartphone and a binaural audio recording device. Answers to frequently asked questions and general help regarding the study were provided beforehand and remained accessible throughout participation via the survey smartphones. A separate smartphone with a training survey also helped familiarize participants with their smartphone, the operation of the survey app, all possible rating scales and input types, and the help section. Finally, instructors introduced the participants to the structure of the Experience Sampling Method questionnaire, the comprehension of the questions and scales, and the definitions of the eight sound categories.
At home, participants paired the study smartphones with their private Wi-Fi to ensure that their survey results were automatically uploaded. After starting the survey app, participants set the daily time range in which they wished to be prompted to answer the questionnaire. Periodic alarms began approximately 15 min (±5 min) after the beginning of the daily time range, with additional ones following roughly every hour (±10 min). Participants could accept an alarm, delay it by five minutes twice, reject it altogether, or ignore it. Ignored notices were repeated twice. In cases where participants spent only a few hours per day at home or were absent for a few workdays or a weekend, they could initialize assessments independently to reach the target number during their time at home. Although participants were asked to respond to the hourly reminders whenever possible, they more often initiated assessments themselves (73% were self-initiated). Thirty-five participants self-initiated more than 90% of their assessments, with 14 doing so in all cases.
When accepting an alarm, participants first indicated whether they could hear anything. If yes, the questionnaire continued; otherwise, it terminated, because the subsequent questions about the most prominent sound would have been unanswerable and would have resulted in undefined values that could complicate the use of the predictors in statistical analyses. Participants then answered the affective state questions, so that their emotional state was captured largely independently of any emotional impact of responding to the survey itself. After participants activated the recording mode, the recording started with a delay of 5 s to allow them to breathe deeply and remain still and motionless during the 15-s recording. The delay also ensured that pressing the button would not become an audible part of the recording itself. The first of the three main sections, the evaluation of the most salient sound, followed; this contribution does not discuss it in detail. In the second part, participants reported on the overall indoor sound environment. Finally, the third part dealt with the time-varying situational predictors, complementing the questions about the affective state asked at the beginning.
After their ten-day participation, participants received a staggered compensation of up to 100 Euros. They received 20 Euros for participating in the introduction at the university, 30 Euros for evaluating 45 sound situations, and 2 Euros for each additional contribution, but not more than 100 Euros for a total of 70 evaluations. Participants could report more beyond that without receiving further compensation.
E. Statistical analyses
Statistical analyses were performed using R (v4.2.0; R Core Team, 2022), RStudio (v22.2.3.492; RStudio Team, 2022), and jamovi (v2.2.5; The jamovi project, 2022). For the calculation of the prediction models, all variables, including dummy variables, were z-standardized to achieve equal weighting of the estimates. Linear mixed-effects models were calculated using the lme4 R package (v1.1-30; Bates et al., 2015); a hands-on introduction to linear mixed-effects models in R can be found in Winter (2013). All models account for the hierarchical data structure by clustering the data by participant ID, which results in random intercepts, i.e., each participant is assigned a different intercept value estimated by the model. Linear mixed-effects models also allow for random slopes by including predictors (already used as fixed effects) as random effects, i.e., for each participant, different slopes are estimated for these predictors. Specifically, the models used in this study were based on either the Predicted loudness level (LPL), the A-weighted equivalent continuous sound pressure level (LAeq), or the A-weighted five percent exceedance sound pressure level (LAF5). In each model, the single sound-related acoustic predictor served as the main fixed effect and as the only random-slope term. Different sets of variables from the three domains (Tables II–IV) were added successively, as sketched below.
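A minimal sketch of this model structure in R/lme4 follows; the column names mirror the variable names of Tables II–IV but are assumptions about the published dataset, and only a subset of the fixed effects is written out.

```r
library(lme4)
library(lmerTest)  # provides Satterthwaite degrees of freedom for the fixed effects

# All predictors in "d" are assumed to be z-standardized beforehand.
# Baseline model (LPL.b): acoustic predictor as fixed effect, random intercept per
# participant, and a random slope for the acoustic predictor.
m_b <- lmer(perceived_loudness ~ predicted_loudness + (1 + predicted_loudness | ID),
            data = d, REML = TRUE)

# Successively add further fixed effects from the three domains
# (models LPL.bp, LPL.bps, ...; shown here only for a subset of variables).
m_bpsp <- update(m_b, . ~ . + salient_sound_preference + soundscape_pleasantness +
                   soundscape_eventfulness + activity + arousal + cognitive_load +
                   control + age + noise_sensitivity)

AIC(m_b, m_bpsp)  # model comparison, as reported in Table V
```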
The final fits were based on restricted maximum likelihood estimation (REML). Degrees of freedom were estimated using the Satterthwaite approximation implemented in the lmerTest R package (v3.1.3; Kuznetsova et al., 2017). Bootstrapped estimates were derived using 1000 iterations to improve the robustness of the models and because the data did not meet all requirements for linear regressions. From the resulting distributions of each estimate, p-values and confidence intervals were calculated at the significance level of α = 0.05. No adjustments were made (i.e., to reduce the family-wise error rate) because the 15 models differ in their number of predictors; the analysis thus accepts a possible inflation of type I errors while ensuring test validity and avoiding inflation of type II errors (Rothman, 1990). Marginal and conditional coefficients of determination (R2m and R2c) were calculated using the R package performance (v0.9.2; Nakagawa et al., 2017). Variance inflation factors (VIF) (Zuur et al., 2010) were calculated for each estimate using the car R package (v3.1-0; Fox and Weisberg, 2019) to ensure acceptably low correlations between the variables of each model (i.e., to avoid multicollinearity). In addition, model comparisons were conducted based on the Akaike Information Criterion (AIC). The fixed effects of the comprehensive models with the smallest AIC were then analyzed based on the probability value, the estimate's sign, and its absolute value to identify crucial predictors. Then, two two-way and one three-way interaction effects were calculated for selected predictors to investigate the influence of Arousal and Cognitive load on the effect of the Activity concentrated mental work on Perceived loudness. Finally, the interaction effects were added to the full model containing all main effects (model LPL.bpsp from Tables V and VI) and are presented in Table VII.
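Continuing the sketch above, the corresponding inference steps could look as follows; using lme4's built-in parametric bootstrap for the confidence intervals is an assumption about the exact procedure.

```r
library(performance)  # marginal and conditional R2 (R2m, R2c)
library(car)          # variance inflation factors

summary(m_bpsp)        # Satterthwaite-based p-values via lmerTest
r2_nakagawa(m_bpsp)    # R2m and R2c, as listed in Table V
vif(m_bpsp)            # check for multicollinearity

# Bootstrapped confidence intervals (1000 iterations).
ci_boot <- confint(m_bpsp, method = "boot", nsim = 1000)

# Interactions of Arousal and Cognitive load with Activity added to the full model (Table VII).
m_int <- update(m_bpsp, . ~ . + arousal:activity + cognitive_load:activity +
                  arousal:cognitive_load:activity)
```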
TABLE V. Linear mixed-effects models based on the three sound-related acoustic predictors, Predicted loudness level (LPL), LAeq, and LAF5: .b, Baseline model with one acoustic predictor as the fixed and random effect (for the calculation of individual random slopes) and participant's ID as the cluster variable. .bp, Previous model plus predictors on participant's perceptual ratings of the perceived sound environment. .bps and .bpp, Previous model plus situational or person-related (and socio-economic) predictors. .bpsp, model with all predictors together. Variables are described in Tables II–IV and Sec. II B. τ00, random intercept (between-subject) variance (i.e., variation between individual intercepts and average intercept). ICCadj, Adjusted intraclass-correlation coefficient = τ00/(τ00 + σ2), the proportion of the variance between participants over total variance. ρ01, correlation between random intercepts and random slopes. N(ID) = 105. N(observations) = 6594. α = 0.05.
 | | Sound-related | Non-auditory | | Both
---|---|---|---|---|---
Fixed and random effect | LPL | | | |
Additional fixed effects | | Perception | Perception, situation | Perception, person | Perception, situation, person
Model | LPL.b | LPL.bp | LPL.bps | LPL.bpp | LPL.bpsp
Variances | |||||
R2m | 0.344 | 0.485 | 0.489 | 0.503 | 0.507 |
R2c | 0.527 | 0.638 | 0.638 | 0.639 | 0.640 |
σ2 | 0.529 | 0.410 | 0.408 | 0.410 | 0.408 |
τ00.ID | 0.168 | 0.148 | 0.144 | 0.130 | 0.127 |
τ11.ID.Calculated_Loudness | 0.037 | 0.025 | 0.024 | 0.025 | 0.024 |
ρ01 | 0.153 | 0.075 | 0.072 | −0.020 | −0.021 |
ICCadj | 0.279 | 0.296 | 0.291 | 0.273 | 0.269 |
Model fit | |||||
AICREML | 14991.0 | 13425.3 | 13468.3 | 13498.3 | 13542.3 |
VIFmax | 1.97 | 2.75 | 4.00 | 4.00 |
Fixed and random effect | LAF5 | | | |
---|---|---|---|---|---
Additional fixed effects | | Perception | Perception, situation | Perception, person | Perception, situation, person
Model | LAF5.b | LAF5.bp | LAF5.bps | LAF5.bpp | LAF5.bpsp
Variances | |||||
R2m | 0.318 | 0.471 | 0.475 | 0.490 | 0.494 |
R2c | 0.509 | 0.625 | 0.626 | 0.626 | 0.627 |
σ2 | 0.547 | 0.421 | 0.418 | 0.421 | 0.418 |
τ00.ID | 0.166 | 0.144 | 0.140 | 0.124 | 0.122 |
τ11.ID.LAF5 | 0.046 | 0.029 | 0.028 | 0.029 | 0.028 |
ρ01 | 0.291 | 0.178 | 0.180 | 0.105 | 0.103 |
ICCadj | 0.279 | 0.292 | 0.287 | 0.267 | 0.263 |
Model fit | |||||
AICREML | 15221.2 | 13598.7 | 13631.4 | 13672.2 | 13706.1 |
VIFmax | 1.96 | 2.75 | 3.98 | 3.99 |
Fixed and random effect | LAeq | | | |
---|---|---|---|---|---
Additional fixed effects | | Perception | Perception, situation | Perception, person | Perception, situation, person
Model | LAeq.b | LAeq.bp | LAeq.bps | LAeq.bpp | LAeq.bpsp
Variances | |||||
R2m | 0.326 | 0.477 | 0.481 | 0.494 | 0.498 |
R2c | 0.517 | 0.632 | 0.632 | 0.632 | 0.633 |
σ2 | 0.541 | 0.416 | 0.413 | 0.416 | 0.413 |
τ00.ID | 0.169 | 0.147 | 0.143 | 0.129 | 0.126 |
τ11.ID.Laeq | 0.044 | 0.028 | 0.027 | 0.028 | 0.026 |
ρ01 | 0.301 | 0.194 | 0.192 | 0.123 | 0.118 |
ICCadj | 0.283 | 0.296 | 0.291 | 0.273 | 0.269 |
Model fit | |||||
AICREML | 15146.5 | 13515.2 | 13556.5 | 13590.0 | 13632.3 |
VIFmax | 1.96 | 2.75 | 3.97 | 3.98 |
TABLE VI. Z-standardized estimates and probabilities for all effects of the models LPL.bp, LPL.bps, LPL.bpp, and LPL.bpsp. α = 0.05. Reference levels: RT DEN day; Hearing impairment 0; Gender female; Activity sleeping/relaxing. Bootstrapped confidence intervals are located in the supplementary material (Ref. 1).
Model | LPL.bp | | LPL.bps | | LPL.bpp | | LPL.bpsp | | |
---|---|---|---|---|---|---|---|---|---|---
Predictors | β | p | β | p | β | p | β | p | |
Predicted loudness | 0.455 | <0.001 | 0.447 | <0.001 | 0.456 | <0.001 | 0.447 | <0.001 | Time-varying | Sound-related, acoustic
Salient sound preference | −0.036 | <0.001 | −0.042 | <0.001 | −0.036 | <0.001 | −0.042 | <0.001 | | Sound-related, perceptual
Salient source ownership, yes | 0.018 | 0.086 | 0.009 | 0.432 | 0.017 | 0.102 | 0.008 | 0.450 | ||
Soundscape pleasantness | −0.229 | <0.001 | −0.237 | <0.001 | −0.229 | <0.001 | −0.237 | <0.001 | ||
Soundscape eventfulness | 0.263 | <0.001 | 0.261 | <0.001 | 0.262 | <0.001 | 0.261 | <0.001 | ||
SC Nature | 0.043 | <0.001 | 0.048 | <0.001 | 0.044 | <0.001 | 0.049 | <0.001 | ||
SC Human | −0.012 | 0.210 | −0.011 | 0.242 | −0.011 | 0.234 | −0.011 | 0.262 | ||
SC Household | 0.074 | <0.001 | 0.068 | <0.001 | 0.075 | <0.001 | 0.069 | <0.001 | ||
SC Installation | 0.029 | 0.006 | 0.030 | 0.002 | 0.031 | 0.002 | 0.031 | 0.002 | ||
SC Signals | 0.041 | <0.001 | 0.040 | <0.001 | 0.042 | <0.001 | 0.041 | <0.001 | ||
SC Traffic | 0.097 | <0.001 | 0.093 | <0.001 | 0.098 | <0.001 | 0.093 | <0.001 | ||
SC Speech | 0.093 | <0.001 | 0.090 | <0.001 | 0.094 | <0.001 | 0.091 | <0.001 | ||
SC Music | 0.106 | <0.001 | 0.100 | <0.001 | 0.108 | <0.001 | 0.101 | <0.001 | ||
Activity cooking/housework/workout | −0.038 | <0.001 | −0.037 | <0.001 | Situational | |||||
Activity concentrated mental work | −0.053 | <0.001 | −0.053 | <0.001 | ||||||
Activity social interaction | −0.025 | 0.018 | −0.025 | 0.018 | ||||||
RT DEN evening | −0.014 | 0.078 | −0.014 | 0.076 | ||||||
RT DEN night | −0.010 | 0.290 | −0.010 | 0.292 | ||||||
Valence | 0.005 | 0.654 | 0.005 | 0.644 | ||||||
Arousal | 0.029 | 0.018 | 0.029 | 0.018 | ||||||
Wakefulness | −0.002 | 0.870 | −0.002 | 0.852 | ||||||
Control | 0.047 | <0.001 | 0.046 | <0.001 | ||||||
Cognitive load | 0.059 | <0.001 | 0.058 | <0.001 | ||||||
Physical load | 0.010 | 0.392 | 0.009 | 0.446 | ||||||
Age | −0.142 | <0.001 | −0.138 | 0.002 | Temporally stable | Person-related | ||||
Gender male | 0.027 | 0.522 | 0.022 | 0.594 | ||||||
Noise sensitivity | −0.080 | 0.052 | −0.080 | 0.052 | ||||||
Health | −0.087 | 0.052 | −0.082 | 0.060 | ||||||
Well-being | 0.001 | 0.990 | −0.007 | 0.918 | ||||||
Anxiety | −0.106 | 0.136 | −0.095 | 0.168 | ||||||
Trait mood | −0.042 | 0.578 | −0.038 | 0.602 | ||||||
Trait wakefulness | 0.111 | 0.050 | 0.106 | 0.056 | ||||||
Trait rest | −0.074 | 0.272 | −0.064 | 0.374 | ||||||
Hearing impairment 1 | 0.086 | 0.042 | 0.081 | 0.060 | ||||||
Hearing impairment 2 | 0.021 | 0.674 | 0.020 | 0.676 | ||||||
Neighbors above, no | −0.037 | 0.352 | −0.037 | 0.358 | Socio-economical | |||||
Neighbors below, no | 0.036 | 0.406 | 0.032 | 0.454 | ||||||
Neighbors next door, no | −0.090 | 0.012 | −0.092 | 0.012 | ||||||
Children, yes | −0.012 | 0.794 | −0.015 | 0.758 | ||||||
People in household | 0.062 | 0.264 | 0.067 | 0.242 |
TABLE VII. Z-standardized estimates and probabilities for all significant effects of the full model LPL.bpsp including Arousal–Activity, Cognitive load–Activity, and Arousal–Cognitive load–Activity interaction effects (for readability reasons only for the activity level concentrated mental work). α = 0.05. Reference level: Activity sleeping/relaxing. The original models and bootstrapped confidence intervals can be found in the supplementary material (Ref. 1).
Model | LPL.bpsp:A | | LPL.bpsp:CL | | LPL.bpsp:A:CL |
---|---|---|---|---|---|---
Predictors | β | p | β | p | β | p
Predicted loudness | 0.447 | <0.001 | 0.446 | <0.001 | 0.447 | <0.001 |
Salient sound preference | −0.042 | <0.001 | −0.043 | <0.001 | −0.042 | <0.001 |
Soundscape pleasantness | −0.237 | <0.001 | −0.235 | <0.001 | −0.237 | <0.001 |
Soundscape eventfulness | 0.260 | <0.001 | 0.263 | <0.001 | 0.260 | <0.001 |
SC Nature | 0.049 | <0.001 | 0.050 | <0.001 | 0.051 | <0.001 |
SC Household | 0.068 | <0.001 | 0.069 | <0.001 | 0.068 | <0.001 |
SC Installation | 0.031 | 0.002 | 0.033 | 0.002 | 0.030 | 0.002 |
SC Signals | 0.041 | <0.001 | 0.042 | <0.001 | 0.041 | <0.001 |
SC Traffic | 0.093 | <0.001 | 0.093 | <0.001 | 0.092 | <0.001 |
SC Speech | 0.090 | <0.001 | 0.091 | <0.001 | 0.090 | <0.001 |
SC Music | 0.102 | <0.001 | 0.102 | <0.001 | 0.102 | <0.001 |
Activity cooking/housework/workout | −0.032 | 0.006 | −0.036 | 0.010 | −0.031 | 0.008 |
Activity concentrated mental work | −0.059 | <0.001 | −0.066 | <0.001 | −0.058 | <0.001 |
Activity social interaction | −0.020 | 0.064 | −0.018 | 0.136 | −0.021 | 0.046 |
Arousal | 0.029 | 0.020 | 0.028 | 0.020 | 0.006 | 0.640 |
Control | 0.045 | <0.001 | 0.047 | <0.001 | 0.045 | <0.001 |
Cognitive load | 0.059 | <0.001 | 0.048 | 0.004 | 0.053 | <0.001 |
Age | −0.139 | 0.002 | −0.137 | 0.002 | −0.138 | 0.002 |
Neighbors next door, no | −0.092 | 0.012 | −0.091 | 0.012 | −0.091 | 0.012 |
Arousal * concentrated mental work | 0.040 | <0.001 | ||||
Cognitive load * concentrated mental work | 0.032 | 0.016 | ||||
(Arousal * Cognitive load) * concentrated mental work | 0.031 | <0.001 |
III. RESULTS
First, the verification of the statistical assumptions, the distributions of the acoustic measures and Perceived loudness, and the findings regarding the individual response patterns are described. Then, the results are presented concerning the three research questions.
All 15 models converged and are listed in Table V together with their model fit parameters, variances, and variance inflation measures. Model assumptions were inspected using Q-Q plots and residual scatter plots.1 Normally distributed residuals can be assumed for most of the measured values, as seen in the mid-range of the Q-Q plots. However, the standardized residuals are not scattered randomly around the horizontal line but show patterns with negative trends, indicating a violation of the assumptions of linear regression.
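As a minimal sketch of how such diagnostics can be produced (this is an illustration, not the analysis code used in the study; the function and variable names are hypothetical), the two plot types could be generated as follows:

```python
# Hypothetical sketch of the residual diagnostics described above.
import matplotlib.pyplot as plt
from scipy import stats

def diagnostic_plots(fitted, residuals):
    """Normal Q-Q plot and residuals-vs-fitted scatter plot for a fitted model."""
    resid_std = (residuals - residuals.mean()) / residuals.std()
    fig, (ax_qq, ax_sc) = plt.subplots(1, 2, figsize=(9, 4))
    stats.probplot(resid_std, dist="norm", plot=ax_qq)   # Q-Q plot against a normal distribution
    ax_qq.set_title("Normal Q-Q plot")
    ax_sc.scatter(fitted, resid_std, s=5, alpha=0.4)      # residual scatter
    ax_sc.axhline(0.0, color="black", linewidth=1)
    ax_sc.set_xlabel("Fitted values")
    ax_sc.set_ylabel("Standardized residuals")
    ax_sc.set_title("Residuals vs. fitted values")
    fig.tight_layout()
    return fig

# Hypothetical usage with a fitted statsmodels result object:
# diagnostic_plots(result.fittedvalues, result.resid)
```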
Figure 5 displays the densities and histograms of the dependent variable and the three acoustic predictors. The values for Perceived loudness show a bimodal distribution, while those for the other acoustic predictors roughly correspond to a normal distribution.
(Color online) Density functions as a measure of the probability distribution of the z-standardized predictors of Perceived loudness (a), Predicted loudness LPL (b), LAeq (c), and LAF5 (d) of all 6594 recordings.
The analysis of the individual loudness response curves of the 105 participants revealed distinct response patterns. In addition to participants who produced a near-linear relationship between Perceived loudness and Predicted loudness, others used only the verbal Perceived loudness scale (leaving the numerical scale in the preselected middle position), and some used the full range of the Perceived loudness scale even though the Predicted loudness was exceptionally low for all their recordings (refer to SuppPub1.jpg in the supplementary material).1
Regarding RQ1 (whether auditory loudness models outperform simple acoustic predictors in predicting Perceived loudness), the AIC values of the models based on auditory loudness (LPL.****,2 Table V) are lower than those of the models based on the other two acoustic predictors, LAeq and LAF5, indicating that the auditory loudness models provide the best fit. Conversely, the models based on the LAF5 showed the worst model fit. Therefore, the following results refer to the auditory loudness models.
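To make this comparison concrete, the following Python fragment sketches one way such mixed models might be fitted and compared by AIC. It is a hedged illustration only: the column and file names are hypothetical, the study's actual models contain many further predictors, and the parameter count shown assumes a random intercept plus one correlated random slope.

```python
# Hypothetical sketch: comparing mixed models for different acoustic predictors by AIC.
import pandas as pd
import statsmodels.formula.api as smf

def fit_ml(formula, data, group="participant", slope=None):
    """Fit a linear mixed model by maximum likelihood (not REML) so AIC values are comparable."""
    re_formula = f"~{slope}" if slope else "~1"   # random intercept, plus a random slope if given
    model = smf.mixedlm(formula, data, groups=data[group], re_formula=re_formula)
    return model.fit(reml=False)

def aic(result, n_cov_params):
    """AIC = 2k - 2*log-likelihood; k counts fixed effects, random-effect (co)variances, and the residual variance."""
    k = len(result.fe_params) + n_cov_params + 1
    return 2 * k - 2 * result.llf

# Hypothetical usage with illustrative column names:
# df = pd.read_csv("esm_ratings.csv")
# res_lpl  = fit_ml("perceived_loudness ~ LPL",  df, slope="LPL")
# res_laeq = fit_ml("perceived_loudness ~ LAeq", df, slope="LAeq")
# print(aic(res_lpl, 3), aic(res_laeq, 3))   # 3 = var(intercept), var(slope), their covariance
```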
Concerning RQ2 (To what extent do the three domains influence loudness perception?), the baseline model LPL.b, with Predicted loudness as both the sole fixed effect and the sole random effect, is considered first. As shown in Table V, 34% of the variance (R2m) was explained by this single fixed effect, whereas the same model, once the random intercept and the random slope are included, could explain 53% (R2c) of the variance in Perceived loudness ratings. In the model LPL.bp, which represents the complete sound-related domain, the perceptual predictors of the sound ratings were added. This produced a substantial gain in explained variance, with R2m increasing by 15 percentage points to 49% and R2c by 11 percentage points to 64%.
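For readers unfamiliar with the two fit measures, marginal and conditional R² for mixed models are commonly defined following Nakagawa and Schielzeth from the variance of the fixed-effect predictions, the random-effect variance, and the residual variance (presumably the definition underlying Table V, although it is not restated in the text):

\[
R^2_{m} \;=\; \frac{\sigma^2_{f}}{\sigma^2_{f} + \sigma^2_{\alpha} + \sigma^2_{\varepsilon}},
\qquad
R^2_{c} \;=\; \frac{\sigma^2_{f} + \sigma^2_{\alpha}}{\sigma^2_{f} + \sigma^2_{\alpha} + \sigma^2_{\varepsilon}},
\]

where \(\sigma^2_{f}\) is the variance explained by the fixed effects, \(\sigma^2_{\alpha}\) the variance of the random intercepts and slopes, and \(\sigma^2_{\varepsilon}\) the residual variance. Under this reading, R2m captures the share explained by the fixed effects alone, while R2c additionally includes the participant-specific random effects.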
Moreover, the non-auditory, time-varying situational predictors from the second domain (model LPL.bps) do not contribute additional explained variance in terms of either R2m or R2c. However, if the situational predictors are replaced by the non-auditory, relatively temporally stable person-related and socio-economic predictors from the third domain (model LPL.bpp), R2m increases by one percentage point while R2c remains unchanged. Finally, if all predictors described previously are considered together (model LPL.bpsp), all model fit parameters stay almost unchanged compared to the model with temporally stable but without time-varying predictors (LPL.bpp). The maximum VIF value observed in any model does not exceed 4.0, suggesting acceptably low multicollinearity, i.e., no problematic redundancy among the predictors.
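As a brief sketch of how such variance inflation factors might be obtained for a fixed-effect design matrix (again an illustration with hypothetical column names, not the study's analysis code):

```python
# Hypothetical sketch: variance inflation factors for the fixed-effect predictors.
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def vif_table(X: pd.DataFrame) -> pd.Series:
    """One variance inflation factor per predictor column, computed with an intercept present."""
    Xc = add_constant(X)
    vifs = {col: variance_inflation_factor(Xc.values, i)
            for i, col in enumerate(Xc.columns) if col != "const"}
    return pd.Series(vifs).sort_values(ascending=False)

# Hypothetical usage with illustrative column names:
# vif_table(df[["LPL", "pleasantness", "eventfulness", "arousal", "age"]])
```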
Regarding RQ3, Table VI presents the z-standardized regression coefficients (β) of the four more complex models with the smallest AIC values. In general, effects were significant at p < 0.05 for all LPL models discussed, with comparable estimates. Concerning the sound-related domain, Predicted loudness showed the largest impact. It is followed by two significant perceptual ratings: lower Perceived loudness is related to higher Soundscape pleasantness, while higher Soundscape eventfulness is associated with higher Perceived loudness. Liking the most salient sound (Salient sound preference) is significantly associated with somewhat decreased Perceived loudness values. Within the Soundscape composition, seven of the eight sound source categories showed significant effects on loudness ratings; only human sounds did not relate significantly to loudness perception. Higher loudness ratings were observed with higher saliency for all significant categories, ranging from minor effects for Traffic, Speech, and Music to minimal effects for sounds from Nature, domestic Installations, and technical Signals.
The situational domain revealed a few significant but minimal time-varying effects: the strongest, the (also person-related) Cognitive load, was associated with higher loudness ratings. The same applies to perceived Control and Arousal, which showed minimal positive effects on loudness ratings. Contrary to expectations based on previous findings, no significant relation was observed for the state affect Valence. Moreover, all situation-related Activity categories showed minimal significant negative effects on loudness ratings compared to the reference activity, sleeping and relaxing. Also contrary to expectations, no significant relation was observed for the recording time (RT DEN). From the domain of the temporally stable predictors, only the person-related Age and, as the single socio-economic predictor, having no Neighbors next door were significantly related to the dependent variable, both associated with somewhat lower Perceived loudness values (with increasing Age and in the absence of neighbors next door, respectively).
The computed interaction effects are displayed in Table VII (for readability, only for the Activity level concentrated mental work; the supplementary material reports the interaction effects for all Activity levels).1 When people do mental work instead of sleeping or relaxing, a significant small positive two-way interaction effect of 0.040 is observed for each standard deviation change in Arousal. That is, the negative main effect of Activity concentrated mental work on Perceived loudness is shifted in the positive direction (and may thus be compensated) when people's Arousal increases, and in the negative direction when people experience below-average Arousal. The same interaction effect can also be described from the perspective of the small positive main effect of Arousal: it is intensified when people do mental work instead of sleeping or relaxing. Furthermore, the same occurred for the two-way interaction of Cognitive load and Activity concentrated mental work. Finally, a small positive three-way interaction of Activity concentrated mental work, Arousal, and Cognitive load could be observed, substantiating the coexistence of both two-way interaction effects, while the main effect of Arousal becomes non-significant.
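Read as a worked example from the standardized coefficients of model LPL.bpsp:A in Table VII, the Arousal slope under concentrated mental work is simply the sum of the main effect and the two-way interaction:

\[
\beta_{\text{Arousal}\,\mid\,\text{mental work}} = 0.029 + 0.040 = 0.069,
\qquad
\beta_{\text{Arousal}\,\mid\,\text{sleeping/relaxing}} = 0.029 .
\]

Under this reading, a one-standard-deviation increase in Arousal is associated with more than twice the increase in Perceived loudness during concentrated mental work compared with sleeping or relaxing.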
IV. DISCUSSION
The present field study, based on the Experience Sampling Method and conducted in domestic environments, aimed to test laboratory-based loudness predictions in complex real-world contexts and to identify crucial predictors of loudness perception in everyday life from three domains: the sound-related domain, the time-varying situational domain, and the relatively temporally stable person-related domain.
RQ1. Auditory predictors
Results revealed that the energetically averaged loudness level (Kuwano , 2013) based on standardized auditory loudness models according to ISO 532-1 (ISO, 2017a) was a significantly better predictor of perceived loudness than less complex measures, such as LAF5 or LAeq. Since the improvement over both LAeq and LAF5, however, is relatively small, their more cost-effective use in common noise control might be justified.
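For orientation, the two simpler measures can be derived from an A-weighted level time series as sketched below (a minimal illustration with hypothetical input names; the loudness level according to ISO 532-1 requires the full third-octave-band procedure and is not shown):

```python
# Hypothetical sketch: LAeq and LAF5 from an A-weighted, fast-weighted level
# time series L_AF(t) in dB; the input array and file name are illustrative.
import numpy as np

def laeq(levels_db):
    """Energetically averaged level: 10*log10 of the mean of 10^(L/10)."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels_db) / 10.0)))

def laf5(levels_db):
    """Exceedance level LAF5: the level exceeded during 5% of the time (95th percentile)."""
    return float(np.percentile(levels_db, 95))

# levels = np.loadtxt("laf_timeseries.txt")   # hypothetical LAF(t) samples
# print(f"LAeq = {laeq(levels):.1f} dB, LAF5 = {laf5(levels):.1f} dB")
```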
RQ2. Influence of three domains of predictors
Looking broadly at which of the three domains influence loudness ratings and to what extent, the limited predictive power of acoustic measures, which explained only one-third of the variance in loudness ratings in everyday situations, confirms previous research. For annoyance, which overlaps with loudness to a certain degree but is still a distinct concept (Stallen , 2008), a similar impact was expected from theoretical considerations (Guski, 1999) and found in a field study (Beach , 2012). Both higher (Spilski , 2019) and much lower explained variance have also been reported (Michaud , 2016; Bartels , 2015; Spilski , 2019), emphasizing the general importance of context for sound assessments. Sound-related attributes describing the perceived character of the sound environment had a significant additional impact on perceived loudness, indicating the importance of those aspects of the sound field that are not (yet) described by acoustic measurements. Evidently, not only higher-level constructs but also elementary perceptual judgments such as the loudness of a sound are influenced by cognitive processes related to type, meaning, and predictability (Fastl, 2001; Hellbrück , 2002; Stallen , 2008).
The same is true for attention. Whereas in laboratory studies the participants' full attention is mostly focused on the auditory stimulus, in everyday life, the focus may be taken up by other events and may also change from one moment to the next. This may also apply to background noise, which is fully considered by technical loudness measures, but may not be consciously perceived by people in everyday life (Meunier , 2000).
In this study, the situational, person-related, and socio-economic variables collected in the experiment explain only a small portion of the variance in the loudness ratings, although the related ICCadj values (Table V) indicate that roughly one-third of the differences in loudness ratings have to be attributed to the influence of the participants and their homes. Two aspects could be crucial for this unexplained remainder: on the one hand, the influence of expectation, attention, and, more generally, the meaning we attribute to sounds, which is difficult to capture empirically even through Experience Sampling. On the other hand, these influences also seem to affect the evaluation of sounds and their loudness differently from person to person. This becomes apparent when looking at the difference between the marginal and conditional R2 values: the explained variance in the model calculated for all influencing variables (sound field, person, situation) increases from 51% to 64% if we allow for an individual interaction of these factors for each person.
Finally, some general limitations of the study become visible at this point. For example, relevant influencing variables may either not have been measured in the experiment or may have had a nonlinear effect on the collected loudness ratings.
RQ3. Influence of individual predictors
The influence of the perceived character of the sound on the loudness ratings falls in roughly equal parts on the two main standardized soundscape factors: pleasantness and eventfulness. Together with the acoustic measures (having the strongest effect), they can explain almost half of the variation in perceived loudness. In this study, participants rated both unpleasant and more eventful sounds as louder. The directions of these effects are essentially consistent with the results of a study on indoor soundscapes conducted by Torresin (2020). They extracted comfort and content (supplemented by the third component familiarity) instead of pleasantness and eventfulness as the two main principal components of the perceived sound in a mockup living room listening test. However, the different positions of the attributes proposed by ISO/TS 12913 (ISO, 2021) based on the work of Axelsson (2010) (e.g., pleasant, eventful, uneventful, chaotic, calm), with deviations from 6 to 38 degrees between the two circumplex models, indicate that the meaning of the two principal components is not identical. Hence, the use of attributes describing comfort and content could further improve the perception-related portion of indoor sound rating predictions in field studies.
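For reference, the pleasantness and eventfulness coordinates used with the ISO/TS 12913-3 attributes are usually obtained by projecting the eight attribute ratings onto two orthogonal axes. The sketch below assumes the standard circumplex weights (45-degree rotation of the diagonal attributes); the exact scaling and normalization used in this study may differ, and the function names are illustrative.

```python
# Hypothetical sketch of the ISO/TS 12913-3 circumplex projection. Arguments are
# ratings of pleasant (p), annoying (a), calm (ca), chaotic (ch), vibrant (v),
# monotonous (m), eventful (e), and uneventful (u) on a common scale.
import math

COS45 = math.cos(math.radians(45.0))

def pleasantness(p, a, ca, ch, v, m):
    """Projection of the attribute ratings onto the pleasantness axis."""
    return (p - a) + COS45 * (ca - ch) + COS45 * (v - m)

def eventfulness(e, u, ch, ca, v, m):
    """Projection of the attribute ratings onto the orthogonal eventfulness axis."""
    return (e - u) + COS45 * (ch - ca) + COS45 * (v - m)
```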
As expected, older people reported lower loudness, which seems plausible given the age-related increase in hearing loss, at least for sounds that are not at the upper limit of the acoustic dynamic range, where hearing loss can lead to disproportionate sensitivity and loudness ratings (recruitment phenomenon). This may also explain why the assessed hearing impairment, defined in this study by participants' hearing threshold, did not show a significant main effect.
Similarly, the person-related noise sensitivity had no significant direct relation to the loudness perception investigated in this study, which is in line with the inconsistent findings regarding this predictor in the literature (Miedema and Vos, 2003; Kroesen , 2008; Abbasi , 2021). It is also consistent with the conclusion by Job (1999) that "noise sensitivity predicts objectively measured physiological reactivity to noise rather than only the introspective judgment of the effects of noise (reaction)." Moreover, recent research on indoor soundscapes suggests that noise sensitivity affects the perceived comfort of the acoustic environment only for specific combinations of activities and sound sources (Torresin , 2022), corroborating the results of this study. These findings also suggest that no generalizable association exists between noise sensitivity and perceived loudness.
Although some time-varying situational predictors were significant in the statistical model, their influence was negligible. Moreover, they did not increase the total variance explained (see the R2c values in Table V), suggesting that they drain predictive power from the sound- and person-related variables. Nevertheless, all activities were associated with lower loudness perception than the sleeping and relaxing condition. Physical and non-physical activities, e.g., exercise or concentrated mental work, seem to draw attention away from the environment's noise level and shield against auditory distractions (Sörqvist , 2016), leading to lower loudness perception. This mechanism appears to work as long as the cognitive load of the mental work is not too high. In situations with both high cognitive load and high arousal, however, the shielding no longer seems effective and loudness perception increases, possibly because people leave a state of "flow" (Csikszentmihalyi, 2014).
The results of this study did not confirm previous findings regarding a loudness-attenuating influence of a positive affective state (Siegel and Stefanucci, 2011; Asutay and Västfjäll, 2012). Here, again, a difference between laboratory and field studies might become apparent: in the latter, not only is attention less exclusively focused on the stimulus, but there might also be, especially in the home environment, a much lower variance of mood states than can be controllably induced in laboratory studies.
Finally, in the domestic sound environments investigated, the perceived control over the real-world situation seems to play only a small role in loudness perception, in contrast to studies on aircraft noise, which found that perceived control was a powerful predictor of different consequences of noise exposure (Hatfield , 2002). Thus, one's own control seems to compensate to some extent for annoyance (Torresin , 2022; Schreckenberg , 2018; Kroesen , 2008), but not for the evaluation of loudness, especially in the case of sounds that are consciously induced by the participants in the domestic environment, for example, by persons listening to their own music.
V. CONCLUSION
In a field study collecting ratings of 6594 sound environments by 105 participants in their homes using the Experience Sampling Method, only one-third of the variance in perceived loudness could be explained by the measured auditory loudness level. Perceived loudness was best predicted by the loudness level LPL based on ISO 532-1 (ISO, 2017a), closely followed by the A-weighted equivalent continuous sound pressure level LAeq and the A-weighted five-percent exceedance level LAF5, both of which are computationally less expensive, which may justify their use in everyday applications.
The explanation of perceived loudness could be significantly improved to about 50% by considering the sound character rated by the participants with soundscape attributes according to ISO/TS 12913-3 (ISO, 2021). Whereas high soundscape pleasantness and high preference for the most salient sound led to lower perceived loudness, high eventfulness led to higher perceived loudness.
Non-auditory, situational, person-related, and socio-economic influences played only a minor role in this study. Both physical and non-physical activities, such as concentrated mental work, seem to draw attention away from the noise environment and lead to lower loudness perception, as long as the cognitive load of the mental work is not too high; at high cognitive load, people again become susceptible and perceive their sound environment as louder. In general, however, perceived loudness seems to be less susceptible to non-auditory influences than annoyance.
Interestingly, some influences known from laboratory studies could not be identified in the field study conducted here. This applies to the influence of the participants' affective state and of the perceived control over the sound environment. From our point of view, this underlines the value of field studies, in which a more natural manifestation of the investigated variables can be found. The Experience Sampling Method proved to be both powerful and easily manageable for such field studies and could in the future be applied to domains other than the domestic living environment studied here.
ACKNOWLEDGEMENTS
The authors thank Patrick Blättermann for his advice and constructive discussion of the study design, the statistical analysis, and the manuscript. Furthermore, thanks go to Fabian Rosenthal for calculating the acoustic predictors, assisting with data import, and discussing and further programming the questionnaires. Jenny Winter was responsible for the help texts, subject acquisition, and scheduling of subject introductions. The latter was carried out by Jan Roloff, Jenny Winter, Fabian Rosenthal, and S.V. Particular thanks go to Benjamin Müller, who co-developed the recording device and built the prototype as well as a 10-part small series. Likewise, special thanks go to Christian Epe for expert advice on the low-noise circuit design. The authors thank the three anonymous reviewers for their intensive reading of the manuscript and for their helpful suggestions. We finally thank the German Federal Ministry of Education and Research for funding this study. “FHprofUnt”-Funding Code: 13FH729IX6.
AUTHOR DECLARATIONS
Conflict of Interest
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethics Approval
The studies involving human participants were reviewed and approved by the Ethics Committee of the Medical Faculty of the University of Duisburg-Essen, Germany. All participants provided digital written informed consent by confirming the declaration on data collection and processing before participating.
Author Contributions
S.V. designed and conducted the study, performed the statistical analysis, interpreted the data, wrote the initial manuscript, made revisions, and preprocessed and published the dataset. J.S. and S.W. contributed to the research questions, the design, and the statistical analysis. All authors reviewed the manuscript and approved the final version of the article.
DATA AVAILABILITY
The data of this study are openly available at https://doi.org/10.5281/zenodo.7858848. The real-time audio recordings cannot be made publicly available for privacy reasons; please contact the authors.
See supplementary material at https://doi.org/10.1121/10.0019413 for three individual loudness responses (file SuppPub1.jpg); all models (including the models with interaction effects) along with their confidence intervals and VIF values for each estimate, and Q-Q and scatter plots (file SuppPub2.pdf); the Experience Sampling Method questionnaire with an English translation (file SuppPub3.pdf); the original questionnaire for the person-related and socio-economic predictors in German (file SuppPub4.pdf); and a description of all variables assessed, including the English translations of the person-related and socio-economic questionnaire (file SuppPub5.pdf).
**** acts as a placeholder for b, bp, bps, bpp, and bpsp.