TY - JOUR
T1 - Robust Estimation of Hypernasality in Dysarthria with Acoustic Model Likelihood Features
AU - Saxon, Michael
AU - Tripathi, Ayush
AU - Jiao, Yishan
AU - Liss, Julie M.
AU - Berisha, Visar
N1 - Funding Information:
Manuscript received January 29, 2020; revised June 14, 2020, and August 2, 2020; accepted August 4, 2020. Date of publication August 7, 2020; date of current version September 3, 2020. This work was supported by the National Institutes of Health under Grant 5R01DC006859. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Hsin-min Wang. (Corresponding author: Michael Saxon.) The authors are with the School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ 85281 USA (e-mail: mssaxon@asu.edu; ayushtripathi1811@gmail.com; jiaoyishan@gmail.com; julie.liss@asu.edu; visar@asu.edu). Digital Object Identifier 10.1109/TASLP.2020.3015035
Publisher Copyright:
© 2020 IEEE.
PY - 2020
Y1 - 2020
N2 - Hypernasality is a common characteristic symptom across many motor-speech disorders. For voiced sounds, hypernasality introduces an additional resonance in the lower frequencies and, for unvoiced sounds, there is reduced articulatory precision due to air escaping through the nasal cavity. However, the acoustic manifestation of these symptoms is highly variable, making hypernasality estimation very challenging, both for human specialists and automated systems. Previous work in this area relies on either engineered features based on statistical signal processing or machine learning models trained on clinical ratings. Engineered features often fail to capture the complex acoustic patterns associated with hypernasality, whereas metrics based on machine learning are prone to overfitting to the small disease-specific speech datasets on which they are trained. Here we propose a new set of acoustic features that capture these complementary dimensions. The features are based on two acoustic models trained on a large corpus of healthy speech. The first acoustic model aims to measure nasal resonance from voiced sounds, whereas the second acoustic model aims to measure articulatory imprecision from unvoiced sounds. To demonstrate that the features derived from these acoustic models are specific to hypernasal speech, we evaluate them across different dysarthria corpora. Our results show that the features generalize even when training on hypernasal speech from one disease and evaluating on hypernasal speech from another disease (e.g., training on Parkinson's disease, evaluation on Huntington's disease), and when training on neurologically disordered speech but evaluating on cleft palate speech.
AB - Hypernasality is a common characteristic symptom across many motor-speech disorders. For voiced sounds, hypernasality introduces an additional resonance in the lower frequencies and, for unvoiced sounds, there is reduced articulatory precision due to air escaping through the nasal cavity. However, the acoustic manifestation of these symptoms is highly variable, making hypernasality estimation very challenging, both for human specialists and automated systems. Previous work in this area relies on either engineered features based on statistical signal processing or machine learning models trained on clinical ratings. Engineered features often fail to capture the complex acoustic patterns associated with hypernasality, whereas metrics based on machine learning are prone to overfitting to the small disease-specific speech datasets on which they are trained. Here we propose a new set of acoustic features that capture these complementary dimensions. The features are based on two acoustic models trained on a large corpus of healthy speech. The first acoustic model aims to measure nasal resonance from voiced sounds, whereas the second acoustic model aims to measure articulatory imprecision from unvoiced sounds. To demonstrate that the features derived from these acoustic models are specific to hypernasal speech, we evaluate them across different dysarthria corpora. Our results show that the features generalize even when training on hypernasal speech from one disease and evaluating on hypernasal speech from another disease (e.g., training on Parkinson's disease, evaluation on Huntington's disease), and when training on neurologically disordered speech but evaluating on cleft palate speech.
KW - Clinical speech analytics
KW - dysarthria
KW - hypernasality
KW - speech features
KW - velopharyngeal dysfunction
UR - http://www.scopus.com/inward/record.url?scp=85091082839&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091082839&partnerID=8YFLogxK
U2 - 10.1109/TASLP.2020.3015035
DO - 10.1109/TASLP.2020.3015035
M3 - Article
AN - SCOPUS:85091082839
SN - 2329-9290
VL - 28
SP - 2511
EP - 2522
JO - IEEE/ACM Transactions on Audio, Speech, and Language Processing
JF - IEEE/ACM Transactions on Audio, Speech, and Language Processing
M1 - 9162481
ER -