Dysarthria detection based on a deep learning model with a clinically-interpretable layer

Research output: Contribution to journal › Article › peer-review


Abstract

Studies have shown that deep neural networks (DNNs) are a potential tool for classifying dysarthric speakers and controls. However, the representations used to train DNNs are largely not clinically interpretable, which limits their clinical value. Here, a model with a bottleneck layer is trained to jointly learn a classification label and four clinically-interpretable features. Evaluation on two dysarthria subtypes shows that the proposed method can flexibly trade off improved classification accuracy against the discovery of clinically-interpretable deficit patterns. An analysis using Shapley additive explanations (SHAP) shows that the model learns a representation consistent with the disturbances that define the two dysarthria subtypes considered in this work.
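The abstract describes a multi-task architecture in which a bottleneck layer predicts clinically-interpretable features that also feed the classifier. The sketch below is a minimal illustration of that idea, not the authors' implementation: the layer sizes, feature dimensions, and the trade-off weight `alpha` are assumptions chosen only to make the joint objective concrete.

```python
# Minimal sketch (assumed architecture, not the published model): a DNN whose
# bottleneck predicts four clinically-interpretable features and feeds the
# binary dysarthria/control classifier. Trained with a weighted joint loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterpretableBottleneckNet(nn.Module):
    def __init__(self, input_dim=128, hidden_dim=64, n_clinical=4, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Bottleneck: each unit is tied to one clinically-interpretable feature.
        self.bottleneck = nn.Linear(hidden_dim, n_clinical)
        # Classification is made from the interpretable features only.
        self.classifier = nn.Linear(n_clinical, n_classes)

    def forward(self, x):
        clinical_pred = self.bottleneck(self.encoder(x))
        logits = self.classifier(clinical_pred)
        return logits, clinical_pred


def joint_loss(logits, clinical_pred, label, clinical_true, alpha=0.5):
    # alpha trades off classification accuracy against fidelity of the
    # bottleneck to the measured clinical features (an assumed weighting).
    ce = F.cross_entropy(logits, label)
    mse = F.mse_loss(clinical_pred, clinical_true)
    return alpha * ce + (1.0 - alpha) * mse
```

Under this reading, sweeping `alpha` toward 1 favors classification accuracy, while values near 0 push the bottleneck toward the measured clinical features; SHAP values can then be computed on the bottleneck units to inspect which interpretable features drive each prediction.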

Original language: English (US)
Article number: 015201
Journal: JASA Express Letters
Volume: 3
Issue number: 1
DOIs
State: Published - Jan 1 2023

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Music
  • Arts and Humanities (miscellaneous)
