Cough is a common presenting symptom in asthmatic children. In this investigation, an audio-based classification model is presented that differentiates between healthy and asthmatic children based on a combination of cough and vocalised /ɑ:/ sounds. A Gaussian mixture model was trained using mel-frequency cepstral coefficients and constant-Q cepstral coefficients. When comparing the predicted labels with the clinician's diagnosis, the cough sound model reaches an overall accuracy of 95.3%. The vocalised /ɑ:/ model reaches an accuracy of 72.2%, which is still notable given that the dataset contains only 333 /ɑ:/ sounds versus 2029 cough sounds.

Asthma typically evokes symptoms such as cough, wheeze, and dyspnea, and its diagnosis is supported by physical signs of wheezing upon auscultation. While morbidity associated with childhood asthma is decreasing due to progress in medical care, perioperative bronchospasm and acute asthmatic episodes can still precipitate life-threatening adverse events (Fernandez-Bustamante et al., 2017; Jesenak et al., 2009; Numata et al., 2018; Todokoro et al., 2003). In this study, following our earlier report (Hee et al., 2019), we built a system that automatically screens patients to assist clinicians' decision-making when cough presents as a symptom. Although unrelated to asthma, cough is also a primary symptom of many other respiratory infections such as COVID-19 (alongside fever and fatigue), and such coughs may carry acoustic information in a manner similar to that reported here.

Given the importance of forming accurate diagnoses based on cough sounds (Amrulloh et al., 2015; Chang, 1999; Infante et al., 2017; Todokoro et al., 2003) [even normal healthy children can exhibit cough epochs (Chang, 2005)], there has been some research on creating automatic classification models to characterize and differentiate various lung diseases. Murata et al. (1998) used time-expanded waveforms combined with spectrograms to differentiate between productive (i.e., caused by excess airway secretions) and non-productive coughs. Abaza et al. (2009) developed a system for detecting abnormal lung function using a combination of the airflow characteristics and acoustic properties of voluntary coughs. Cough sound analysis has also been reported to rapidly diagnose pneumonia (Abeyratne et al., 2013). The strategies employed in most of the above-mentioned studies are largely based on cough sounds carefully recorded in an acoustically "clean" environment, whereas the system presented in our paper is built and evaluated using cough sounds recorded on smartphones in a "dirty" live hospital setting. We therefore expect the resulting system to be more relevant and representative for clinicians "in the field."

In this paper, we present a novel cough sound and vocalised /ɑ:/ sound dataset, and a machine learning model that accurately differentiates between asthmatic and non-asthmatic (later referred to as healthy) children. This classification model is based on acoustic features which were extracted from both the cough sounds and the vocalised /ɑ:/ sounds (i.e., the long “a” vowel, as in “Father”). These sounds are useful for classification since each respiratory pathology has its own spectral cough characteristics resulting from changes in airway dimension, patency, and secretion. The accuracy of our proposed method is evaluated by comparing results of the model with the clinician's diagnosis. To the authors' knowledge, this is the first study of its kind to use both cough and vocalised /ɑ:/ sounds collected in an ecological manner inherent to the clinical setting.

Asthmatic children were recruited from the Children's Emergency Department, Respiratory Ward, and Respiratory Clinic at KK Women's and Children's Hospital, Singapore (resulting in more severe clinical presentations). Children in the healthy group (i.e., non-asthmatic) were recruited from the Children's Surgical Unit of the same hospital. The mean age of the children was 8 years, with a standard deviation of 3 years. From February 2017 until April 2018, a total of 2029 cough sounds were collected: 997 from asthmatic children and 1032 from healthy children. Additionally, 333 vocalised /ɑ:/ sounds were recorded: 80 from asthmatic children and 253 from healthy children. (See Table 1 for details.)

Table 1.

Number of entries of the cough and /ɑ:/ sounds.

Cough                  Number of children        Number of coughs
                       Training    Test          Training    Test
Asthma                 51          20            613         384
Healthy (voluntary)    90          45            661         371

/ɑ:/                   Number of children        Number of /ɑ:/ sounds
                       Training    Test          Training    Test
Asthma                 51          20            56          24
Healthy (voluntary)    90          45            135         118

A smartphone was used to record both the vocalised /ɑ:/ sounds and the cough sounds of both asthmatic and healthy children. Children in the healthy group were instructed to cough voluntarily, as they did not present any symptoms. The recordings were made in an ecological setting, i.e., a hospital with typical background ward noise such as talking, ambulance sirens, and machine noise. During the recording session, children were asked to cough actively, which often resulted in multiple cough sounds per child (on average 10 to 12). These recordings were subsequently segmented manually to form separate entries in the dataset. A similar procedure was followed for the vocalised /ɑ:/ sounds, though it yielded on average only one to two entries per child.

In clinical practice, doctors typically ask patients to vocalise /ɑ:/ ("aah") during medical examination, a method that has evolved over centuries of clinical practice. This strategic articulatory gesture is chosen because it makes the patient lower the mid/back of the tongue, widen the jaw opening, and pull back the lips, giving easy visual access to the back of the mouth (e.g., to inspect the tonsils or the condition of the patient's tongue). Acoustically, such vocalisation carries information associated with physiological changes to the vocal folds, vocal tract, sub-glottal tract, and associated regions: it indicates whether there are mechano-acoustic changes associated with swelling, inflammation, or increased mucus of the ear, nose, and throat tissues, or narrowing of the pathways associated with speech, swallowing, and respiration. It may further be noted that when the vocal folds are engaged (i.e., oscillating) during phonation, the acoustic energy generated not only propagates "downstream" (with respect to the DC flow of exhaled air) towards the open lips (emerging as speech sounds), but also propagates "upstream" into the trachea and lungs. While the lungs are acoustically somewhat lossy at certain frequencies, a portion of the acoustic energy will still back-propagate out through the glottis, contributing to a different vocal quality if the upstream physiology has changed, i.e., indicating acoustic changes in the voice associated with different respiratory conditions (Hanna et al., 2018). In our approach, it is this sum-total effect of the respiratory disorder (and its changes to respiratory physiology) on the vocal quality that we investigate when considering the vocalised /ɑ:/ sound.

The cough and vocalised /ɑ:/ sounds in the dataset were first preprocessed: detrended, normalized, and downsampled to 11.025 kHz from the original sampling rate of 44.1 kHz. Two different sets of audio features were then extracted, namely, mel-frequency cepstral coefficients (MFCCs) and constant-Q cepstral coefficients (CQCCs); both focus on the perceptually relevant aspects of the audio spectrum and have been shown to be particularly effective in audio classification (Balamurali et al., 2019; Muda et al., 2010). MFCCs are the most commonly used audio features in speech and speaker recognition (Rabiner and Juang, 1993; Rabiner and Schafer, 2011). CQCCs are more recently developed audio features designed to tackle spoofing in automatic speaker verification systems, and have been shown to outperform all other audio features in detecting attacks such as voice mimicking (Balamurali et al., 2019; Todisco et al., 2017). Our study has relevance to automatic speaker verification, as healthy children were instructed to cough voluntarily and thus mimic a sick person; we therefore hypothesise that CQCCs should be more sensitive to such mimicking. A total of 42 features (14 MFCCs, 14 deltas, and 14 delta-deltas) were extracted from Hamming-windowed audio frames 100 ms long (Balamurali et al., 2014). In the case of CQCCs, a total of 60 features (20 CQCCs, their deltas, and delta-deltas) were extracted from 12.5 ms long frames.
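To make the preprocessing and MFCC extraction concrete, the following is a minimal sketch assuming the open-source librosa and scipy libraries (an assumption; the paper's exact implementation is not given here). CQCCs have no standard librosa routine and are typically computed with the ASVspoof reference implementation built on the constant-Q transform, so only the MFCC path is shown.

```python
# Minimal preprocessing + MFCC sketch: detrend, peak-normalise, downsample to
# 11.025 kHz, then 14 MFCCs with deltas and delta-deltas from 100 ms Hamming
# windows, giving the 42-dimensional frames described above.
import librosa
import numpy as np
from scipy.signal import detrend

def extract_mfcc_features(path, target_sr=11025, frame_ms=100):
    y, sr = librosa.load(path, sr=None)            # original 44.1 kHz audio
    y = detrend(y)                                 # remove DC/linear trend
    y = y / (np.max(np.abs(y)) + 1e-12)            # peak-normalise
    y = librosa.resample(y, orig_sr=sr, target_sr=target_sr)

    frame_len = int(target_sr * frame_ms / 1000)   # 100 ms -> 1102 samples
    mfcc = librosa.feature.mfcc(y=y, sr=target_sr, n_mfcc=14,
                                n_fft=frame_len, hop_length=frame_len // 2,
                                window="hamming")
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2]).T      # shape: (n_frames, 42)
```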

3.2.1 Gaussian mixture model-universal background model (GMM-UBM)

Features extracted from the cough and vocalised /ɑ:/ sounds were modelled using two separate GMM-UBMs. Here, a single background model, often referred to as a universal background model (UBM), was created using feature data pooled across both classes (healthy and asthmatic). The probability density function of the UBM is modelled using a Gaussian mixture model (GMM), where an optimal fit for the data is found using the expectation-maximization algorithm. The healthy and asthmatic models are then created from this UBM by adapting the background model towards a better fit for the healthy and asthmatic sounds, respectively. Both adaptations are achieved via a maximum a posteriori (MAP) procedure (Reynolds et al., 2000).
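As a concrete illustration, here is a minimal sketch of the GMM-UBM recipe with mean-only MAP adaptation, assuming scikit-learn; the helper names, mixture size, and relevance factor are illustrative defaults rather than the paper's settings.

```python
# Illustrative GMM-UBM with mean-only MAP adaptation (Reynolds et al., 2000):
# EM-fit a UBM on pooled features, then shift each component mean towards the
# class data in proportion to its soft occupation count.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_ubm(pooled_features, n_components=64):
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag")
    ubm.fit(pooled_features)  # EM fit on features pooled over both classes
    return ubm

def map_adapt_means(ubm, class_features, relevance=16.0):
    """Return a copy of the UBM whose means are MAP-adapted to class_features."""
    resp = ubm.predict_proba(class_features)            # (N, K) posteriors
    n_k = resp.sum(axis=0) + 1e-10                      # soft counts per component
    x_bar = (resp.T @ class_features) / n_k[:, None]    # per-component sample means
    alpha = (n_k / (n_k + relevance))[:, None]          # adaptation coefficients

    adapted = GaussianMixture(n_components=ubm.n_components,
                              covariance_type="diag")
    adapted.weights_ = ubm.weights_                     # weights/covariances kept
    adapted.covariances_ = ubm.covariances_
    adapted.precisions_cholesky_ = ubm.precisions_cholesky_
    adapted.means_ = alpha * x_bar + (1.0 - alpha) * ubm.means_
    return adapted
```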

3.2.2 Likelihood ratio

Mathematically, the likelihood ratio (LR) is defined as the ratio of two conditional probabilities (Aitken and Taroni, 2004). In the context of this research, the LR framework provides a quantitative estimate of which group a child belongs to:

\mathrm{LR} = \frac{p(E \mid H_{\mathrm{healthy}})}{p(E \mid H_{\mathrm{asthmatic}})}, \qquad (1)

where p(E | H_healthy) is the conditional probability of E (the evidence) given the hypothesis (H) that a sound is from a healthy child, and p(E | H_asthmatic) is the probability of the evidence given the hypothesis that the sound is from an asthmatic child. These conditional probabilities are calculated by modelling the audio features using the GMM-UBM described above. From Eq. (1), it is clear that LR > 1 supports the healthy hypothesis, whereas LR < 1 supports the asthmatic hypothesis; LR ≈ 1 provides little support for either hypothesis. Here, we further compute the log-likelihood-ratio (LLR) from the LR, as LLR = log10(LR). The sign of the LLR indicates whether our model predicts the sound to be healthy (positive LLR) or asthmatic (negative LLR), and its magnitude indicates the strength of that support (Morrison, 2009; Rose, 2003).
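The scoring step can then be sketched as follows, assuming the adapted models from the previous sketch; averaging the frame log-likelihoods before taking the ratio is one common choice, not necessarily the paper's exact recipe.

```python
# Sketch of the LLR of Eq. (1) for one sound, given the two adapted GMMs.
# score_samples returns natural-log densities per frame, so we average over
# frames and convert to base 10, matching LLR = log10(LR).
import numpy as np

def llr(features, healthy_gmm, asthmatic_gmm):
    ll_healthy = healthy_gmm.score_samples(features).mean()
    ll_asthmatic = asthmatic_gmm.score_samples(features).mean()
    return (ll_healthy - ll_asthmatic) / np.log(10)  # >0: healthy, <0: asthmatic
```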

The datasets for both vocalised /ɑ:/ and cough sounds were split into two non-overlapping parts: a training set and a test set. We ensured that sounds belonging to the same child were either in the test set or the training set, but not both. A standard split ratio of 70:30 for the training and test sets was used. The resulting dataset sizes are shown in Table 1 for the cough and vocalised /ɑ:/ sounds.
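A child-disjoint split of this kind can be sketched with scikit-learn's GroupShuffleSplit; the variable names (`sounds`, `labels`, `child_ids`) are hypothetical.

```python
# Sketch of the 70:30 child-disjoint split; no child's sounds appear in both
# partitions because splitting is done over the group labels (child IDs).
from sklearn.model_selection import GroupShuffleSplit

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(sounds, labels, groups=child_ids))
```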

We first trained the models using labeled sounds, then tested them on a separate test set. During the training phase, each cough and /ɑ:/ sound was first segmented into frames of 12.5 ms (for CQCCs) or 100 ms (for MFCCs). From each frame, the CQCCs and MFCCs were extracted and used to train the GMM-UBM classifier.

During the testing phase, CQCC and MFCC features of new cough sounds and vocalised /ɑ:/ sounds were extracted and used to calculate the LLR. In cases where multiple sounds were available per child, the LLRs corresponding to sounds from that child were averaged to produce an overall score for that particular sound feature. This score was then used to predict whether a child belongs to the healthy or asthmatic class.
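A sketch of this per-child decision rule, assuming a hypothetical mapping `llrs_by_child` from child ID to that child's per-sound LLRs:

```python
# Average the LLRs of all sounds from one child, then threshold at LLR = 0.
import numpy as np

def classify_child(llrs):
    return "healthy" if np.mean(llrs) > 0 else "asthmatic"

predictions = {cid: classify_child(scores) for cid, scores in llrs_by_child.items()}
```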

4.2.1 Tippett plots

The cumulative proportions of the LLR values for healthy and asthmatic cough sounds are shown using Tippett plots (Meuwly and Drygajlo, 2001). The dotted and solid curves represent the LLRs for healthy and asthmatic cough sounds, respectively. Since a positive LLR results in a healthy prediction and a negative LLR in an asthmatic prediction, these two curves should ideally be far apart, i.e., the dotted curve towards the right and the solid curve to the left.
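A hedged matplotlib sketch of such a plot follows; Tippett-plot conventions vary, and the one assumed here plots the proportion of healthy LLRs at or below each threshold against the proportion of asthmatic LLRs at or above it, so that the crossover height tracks the misclassification rate. `llr_healthy` and `llr_asthmatic` are hypothetical NumPy arrays of per-sound scores.

```python
# Hedged Tippett-plot sketch over a sweep of LLR thresholds.
import numpy as np
import matplotlib.pyplot as plt

thresholds = np.linspace(-4.0, 4.0, 400)
cum_healthy = [(llr_healthy <= t).mean() for t in thresholds]
cum_asthmatic = [(llr_asthmatic >= t).mean() for t in thresholds]

plt.plot(thresholds, cum_healthy, linestyle=":", label="healthy")
plt.plot(thresholds, cum_asthmatic, linestyle="-", label="asthmatic")
plt.axvline(0.0, color="grey", linewidth=0.8)  # LLR = 0 decision threshold
plt.xlabel("log10 likelihood ratio")
plt.ylabel("cumulative proportion")
plt.legend()
plt.show()
```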

4.2.2 Accuracy

The performance accuracy of the model is calculated per class, so as to diminish the influence of class imbalances, and is estimated by comparing the predicted outputs with the actual outputs for each class (GoogleDevelopers, 2018; Zhang et al., 2019).

\mathrm{Accuracy} = \frac{\text{Number of correct predictions for a class}}{\text{Total number of predictions in the class}}. \qquad (2)
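A minimal sketch of Eq. (2), with the overall figure taken as the mean across classes to diminish class imbalance; `y_true` and `y_pred` are hypothetical NumPy arrays of class labels.

```python
# Per-class accuracy per Eq. (2), plus the class-balanced overall accuracy.
import numpy as np

def per_class_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    per_class = {c: float(np.mean(y_pred[y_true == c] == c)) for c in classes}
    overall = float(np.mean(list(per_class.values())))
    return per_class, overall
```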

In order to understand whether the vocalised /ɑ:/ sound helps to differentiate healthy from asthmatic children, formants were extracted from the vocalised /ɑ:/ sounds of some of these children across various age groups (see Fig. 1). The number of children is the same in each age group for both healthy and asthmatic cases (varying from 6 to 14 children). Interestingly, with the exception of the second formant in age groups 7 and 10, the median frequency of the asthmatic formants (F1, F2, and F3) was found to be lower than in the healthy cases. However, this is not observed for age group 11, thus cautioning against any generalization; moreover, the healthy and asthmatic sounds are not taken from the same children.
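Formant extraction itself is not detailed above; the following hedged sketch uses the Praat wrapper parselmouth (an assumed tool choice, not necessarily the one used for Fig. 1) to obtain median F1 to F3 from a recorded /ɑ:/ sound.

```python
# Hedged formant-extraction sketch using parselmouth (Praat).
import numpy as np
import parselmouth

def median_formants(wav_path, n_formants=3):
    snd = parselmouth.Sound(wav_path)
    formant = snd.to_formant_burg()  # Burg-method LPC formant tracking
    times = np.arange(0.05, snd.duration - 0.05, 0.01)  # skip edge effects
    return [float(np.nanmedian([formant.get_value_at_time(i, t) for t in times]))
            for i in range(1, n_formants + 1)]  # median F1..Fn in Hz
```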

Fig. 1. (Color online) Distribution of speech formants (F1, F2, F3) for children from various age groups.

Table 2 shows the results in terms of class-specific prediction accuracy and overall (averaged) accuracy for the cough samples and vocalised /ɑ:/ sounds, respectively. Rows 1 and 4 show results for the models built using MFCC features, while rows 2 and 5 show results for the models trained on CQCCs. The third and sixth rows show the fused models, obtained by averaging the LLRs from the MFCC- and CQCC-derived models.
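Score-level fusion here reduces to averaging scores; a one-line sketch with hypothetical per-sound score arrays:

```python
# Fused LLR per sound = mean of the MFCC-based and CQCC-based LLRs.
llr_fused = 0.5 * (llr_mfcc + llr_cqcc)
```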

Table 2.

Experimental results for cough and /ɑ:/ sound models.

Model                            Healthy children           Asthmatic children         Overall
                                 correctly classified (%)   correctly classified (%)   accuracy (%)
MFCCs (cough)                    95.6                       80.0                       87.7
CQCCs (cough)                    82.2                       90.0                       86.1
Fused model (cough)              95.6                       95.0                       95.3
MFCCs (/ɑ:/)                     62.2                       85.0                       73.6
CQCCs (/ɑ:/)                     73.3                       65.0                       69.2
Fused model (/ɑ:/)               64.4                       80.0                       72.2
Fused model (cough and /ɑ:/)     88.9                       95.0                       91.9

Our cough sound models achieve high accuracy when classifying between asthmatic and healthy children. The fused cough model performs best, with accuracy surpassing 95% (having a sensitivity of 95.6% and specificity of 95.0%). This is impressive, considering the ecological recording circumstances of the data collection (i.e., “dirty” background clinical noise present and recording audio on a smartphone).

It can also be observed from Table 2 that, overall, the classification accuracy of the models trained on cough sounds is better than that of the models trained on vocalised /ɑ:/ sounds alone and of the fused model combining cough and /ɑ:/ sounds, the latter two achieving fused accuracies of 72.2% and 91.9%, respectively. The fused model trained on vocalised /ɑ:/ sounds resulted in a sensitivity of 82.2% and a specificity of 70.0%, whereas the fused model with both cough and /ɑ:/ sounds has a sensitivity of 91.1% and a specificity of 95.0%. The better performance of the cough models can be attributed to the availability of a larger number of cough sounds per child, versus only one or two /ɑ:/ sounds in the dataset; the number of available cough samples is almost ten times higher than that of /ɑ:/ sounds for asthmatic cases and five times higher for healthy cases (Table 1). This suggests that more samples might have improved the performance of the /ɑ:/ sound models, which needs further investigation. The slightly inferior performance of the /ɑ:/ sound model marginally affects the performance of the fused model with both cough and /ɑ:/ sounds, and this is especially visible in the accuracy for correctly classified healthy children.

The Tippett plots corresponding to the fused results for the models based on cough and vocalised /ɑ:/ sounds are shown in Fig. 2. The Tippett plot corresponding to the fused model with both cough and /ɑ:/ sounds appears very similar to that of the fused cough model and is not shown. The relative symmetry of the top plot around the LLR = 0 line indicates that the cough sound model performs better (here, LLR = 0 forms the threshold when predicting a class). Further, the crossover point between the healthy and asthmatic curves was found to be slightly lower when using cough sounds versus vocalised /ɑ:/ sounds, reflecting a lower misclassification rate.

Fig. 2. Tippett plots of fused models, trained using the cough sounds (top) and the vocalised /ɑ:/ sounds (bottom).

We gathered a unique dataset of cough sounds and vocalised /ɑ:/ sounds from both healthy and asthmatic children. A fused GMM-UBM model with MFCC and CQCC features reached a classification accuracy of over 95% when using cough sounds to screen between healthy and asthmatic children. This accuracy is particularly impressive given the ecological recording setting during data collection. The resulting model is useful to assist diagnostic screening and inform clinicians when making decisions; any indication of a potentially missed asthmatic diagnosis would be a useful flag. The models based on vocalised /ɑ:/ sounds had a lower accuracy (72.2%), although it should be noted that roughly one-tenth the data was available for this category. This strategy of collecting both cough sounds and vocalised sounds is novel and can have profound implications when examining symptomatic cough sounds associated with other diseases, such as COVID-19 (whereby cough is a primary symptom, alongside fever and fatigue).

In the future, it would therefore be worthwhile to examine whether the accuracy of the /ɑ:/ sound models could be further improved by training with more data, as well as by training on other target vocalised sounds. We also hope to collect more data so that we can further improve performance using more data-intensive deep learning approaches such as convolutional neural networks and long short-term memory networks.

The study was conducted under Singhealth IRB No. 2016/2416 and ClinicalTrials.gov No. NCT03169699, and funded by SMART No. ING000091-ICT. We thank Simon Lui for the initial idea to collect vocalised sounds, Ariv K. for helping with audio segmentation, and Teng S. S., Dianna Sri Dewi, and Foo Chuan Ping for coordinating the recruitment of patients and research project administration.

1. Abaza, A. A., Day, J. B., Reynolds, J. S., Mahmoud, A. M., Goldsmith, W. T., McKinney, W. G., Petsonk, E. L., and Frazer, D. G. (2009). "Classification of voluntary cough sound and airflow patterns for detecting abnormal pulmonary function," Cough 5(1), 8.
2. Abeyratne, U. R., Swarnkar, V., Setyati, A., and Triasih, R. (2013). "Cough sound analysis can rapidly diagnose childhood pneumonia," Ann. Biomed. Eng. 41(11), 2448–2462.
3. Aitken, C. G., and Taroni, F. (2004). Statistics and the Evaluation of Evidence for Forensic Scientists (Wiley Online Library, New York), Vol. 16.
4. Amrulloh, Y., Abeyratne, U., Swarnkar, V., and Triasih, R. (2015). "Cough sound analysis for pneumonia and asthma classification in pediatric population," in 2015 6th International Conference on Intelligent Systems, Modelling and Simulation, IEEE, pp. 127–131.
5. Balamurali, B., Alzqhoul, E. A., and Guillemin, B. J. (2014). "Comparison between mel-frequency and complex cepstral coefficients for forensic voice comparison using a likelihood ratio framework," in Proceedings of the World Congress on Engineering and Computer Science, San Francisco, USA.
6. Balamurali, B. T., Lin, K. E., Lui, S., Chen, J., and Herremans, D. (2019). "Toward robust audio spoofing detection: A detailed comparison of traditional and learned features," IEEE Access 7, 84229–84241.
7. Chang, A. (2005). "Cough: Are children really different to adults?," Cough 1(1), 7.
8. Chang, A. B. (1999). "Cough, cough receptors, and asthma in children," Pediatr. Pulm. 28(1), 59–70.
9. Fernandez-Bustamante, A., Frendl, G., Sprung, J., Kor, D. J., Subramaniam, B., Ruiz, R. M., Lee, J.-W., Henderson, W. G., Moss, A., Mehdiratta, N., Colwell, M. M., Bartels, K., Kolodzie, K., Giquel, J., and Melo, M. F. V. (2017). "Postoperative pulmonary complications, early mortality, and hospital stay following noncardiothoracic surgery: A multicenter study by the perioperative research network investigators," JAMA Surg. 152(2), 157–166.
10. GoogleDevelopers (2018). "A self-study guide for aspiring machine learning practitioners," https://developers.google.com/machine-learning/crash-course/ (Last viewed 6/15/2018).
11. Hanna, N., Smith, J., and Wolfe, J. (2018). "How the acoustic resonances of the subglottal tract affect the impedance spectrum measured through the lips," J. Acoust. Soc. Am. 143(5), 2639–2650.
12. Hee, H. I., Balamurali, B., Karunakaran, A., Herremans, D., Teoh, O. H., Lee, K. P., Teng, S. S., Lui, S., and Chen, J. M. (2019). "Development of machine learning for asthmatic and healthy voluntary cough sounds: A proof of concept study," Appl. Sci. 9(14), 2833.
13. Infante, C., Chamberlain, D. B., Kodgule, R., and Fletcher, R. R. (2017). "Classification of voluntary coughs applied to the screening of respiratory disease," in 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, pp. 1413–1416.
14. Jesenak, M., Babusikova, E., Petrikova, M., Turcan, T., Rennerova, Z., Michnova, Z., Havlicekova, Z., Villa, M., and Banovcin, P. (2009). "Cough reflex sensitivity in various phenotypes of childhood asthma," J. Physiol. Pharmacol. 60, 61–65.
15. Meuwly, D., and Drygajlo, A. (2001). "Forensic speaker recognition based on a Bayesian framework and Gaussian mixture modelling (GMM)," in 2001: A Speaker Odyssey—The Speaker Recognition Workshop.
16. Morrison, G. S. (2009). "Forensic voice comparison and the paradigm shift," Sci. Justice 49(4), 298–308.
17. Muda, L., Begam, M., and Elamvazuthi, I. (2010). "Voice recognition algorithms using mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques," arXiv:1003.4083.
18. Murata, A., Taniguchi, Y., Hashimoto, Y., Kaneko, Y., Takasaki, Y., and Kudoh, S. (1998). "Discrimination of productive and non-productive cough by sound analysis," Intern. Med. 37(9), 732–735.
19. Numata, T., Nakayama, K., Fujii, S., Yumino, Y., Saito, N., Yoshida, M., Kurita, Y., Kobayashi, K., Ito, S., Utsumi, H., Yanagisawa, H., Hashimoto, M., Wakui, H., Minagawa, S., Ishikawa, T., Hara, H., Araya, J., Kaneko, Y., and Kuwano, K. (2018). "Risk factors of postoperative pulmonary complications in patients with asthma and COPD," BMC Pulm. Med. 18(1), 4.
20. Rabiner, L. R., and Juang, B.-H. (1993). Fundamentals of Speech Recognition (PTR Prentice Hall, Englewood Cliffs, NJ), Vol. 14.
21. Rabiner, L. R., and Schafer, R. W. (2011). Theory and Applications of Digital Speech Processing (Pearson, Upper Saddle River, NJ), Vol. 64.
22. Reynolds, D. A., Quatieri, T. F., and Dunn, R. B. (2000). "Speaker verification using adapted Gaussian mixture models," Dig. Sign. Process. 10(1–3), 19–41.
23. Rose, P. (2003). Forensic Speaker Identification (CRC Press, Boca Raton, FL).
24. Todisco, M., Delgado, H., and Evans, N. (2017). "Constant Q cepstral coefficients: A spoofing countermeasure for automatic speaker verification," Comput. Speech Lang. 45, 516–535.
25. Todokoro, M., Mochizuki, H., Tokuyama, K., and Morikawa, A. (2003). "Childhood cough variant asthma and its relationship to classic asthma," Ann. Allergy Asthma Immunol. 90(6), 652–659.
26. Zhang, A., Lipton, Z. C., Li, M., and Smola, A. J. (2019). "Dive into deep learning," https://d2l.ai/ (Last viewed June 19, 2020).