Learn2Sign: Explainable AI for sign language learning

Prajwal Paudyal, Junghyo Lee, Azamat Kamzin, Mohamad Soudki, Ayan Banerjee, Sandeep Gupta

Research output: Contribution to journal › Conference article › peer-review

17 Scopus citations


Languages are best learned in immersive environments with rich feedback. This is especially true for signed languages due to their visual and poly-componential nature. Computer Aided Language Learning (CALL) solutions successfully incorporate feedback for spoken languages, but no such solution exists for signed languages. Current Sign Language Recognition (SLR) systems are not interpretable and hence cannot provide feedback to learners. In this work, we propose a modular and explainable machine learning system that provides fine-grained feedback on location, movement, and hand-shape to learners of ASL. In addition, we propose a waterfall architecture for combining the sub-modules, which prevents cognitive overload for learners and reduces computation time for feedback. The system achieves an overall test accuracy of 87.9% on real-world data consisting of 25 signs with 3 repetitions each from 100 learners.
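The waterfall architecture described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the module names, the dict-based attempt format, and the check functions are all assumptions; the sketch only shows the control flow implied by the abstract, where sub-modules run in sequence and feedback stops at the first failing aspect, so the learner sees one correction at a time and later modules are skipped.

```python
# Hedged sketch of a waterfall feedback pipeline: run the location,
# movement, and hand-shape checks in order, returning feedback on the
# first aspect that fails. Stopping early limits cognitive load and
# avoids running the remaining (potentially costlier) modules.

def check_location(attempt):
    # Placeholder verifier; a real module would compare pose data.
    return attempt.get("location_ok", False)

def check_movement(attempt):
    return attempt.get("movement_ok", False)

def check_handshape(attempt):
    return attempt.get("handshape_ok", False)

# Order matters: earlier stages gate the later ones.
WATERFALL = [
    ("location", check_location),
    ("movement", check_movement),
    ("hand-shape", check_handshape),
]

def give_feedback(attempt):
    """Return feedback on the first failing aspect, or approval."""
    for aspect, check in WATERFALL:
        if not check(attempt):
            return f"Review the {aspect} of this sign."
    return "All aspects look correct."
```

For example, an attempt with correct location but incorrect movement would receive only movement feedback; the hand-shape module never runs.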

Original language: English (US)
Journal: CEUR Workshop Proceedings
State: Published - Jan 1 2019
Event: 2019 Joint ACM IUI Workshops, ACMIUI-WS 2019 - Los Angeles, United States
Duration: Mar 20 2019 → …


Keywords

  • Computer-aided learning
  • Explainable AI
  • Sign language learning

ASJC Scopus subject areas

  • General Computer Science


