Role of Mixup in Topological Persistence-Based Knowledge Distillation for Wearable Sensor Data

Eun Som Jeon, Hongjun Choi, Matthew P. Buman, Pavan Turaga

Research output: Contribution to journal › Article › peer-review

Abstract

The analysis of wearable sensor data has enabled successes in several applications. To represent high-sampling-rate time series with sufficient detail, topological data analysis (TDA) has been considered, and its features have been found to complement other time-series features. Nonetheless, extracting topological features through TDA is time-consuming and computationally expensive, which makes it difficult to deploy topological knowledge in machine learning and various applications. To tackle this problem, knowledge distillation (KD) can be adopted: a model-compression and transfer-learning technique that produces a smaller model by transferring knowledge from a larger network. By leveraging multiple teachers in KD, both time-series and topological features can be transferred, yielding a superior student that uses only time-series data. Meanwhile, mixup is widely used as a robust data augmentation technique to enhance model performance during training. Mixup and KD employ similar learning strategies: in KD, the student learns from the smoothed distribution produced by the teacher, while mixup creates smoothed labels by blending pairs of labels. This shared smoothness links the two methods. Although the interplay between mixup and KD has been widely studied, most work focuses on image-based analysis, and it remains unclear how mixup behaves in KD when incorporating multimodal knowledge, such as both time-series and topological knowledge from wearable sensor data. In this article, we analyze the role of mixup in KD with time series as well as topological persistence, employing multiple teachers. We present a comprehensive analysis of various KD and mixup methods, supported by empirical results on wearable sensor data. We observe that applying mixup when training the student in KD improves performance, and we suggest a general set of recommendations for obtaining an enhanced student.
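
To make the shared "smoothness" concrete, the sketch below illustrates how mixup of sensor windows and labels can be combined with a softened distillation loss from a teacher. This is a minimal illustration under assumed conventions (a PyTorch-style student and teacher mapping batches of sensor windows to class logits), not the authors' implementation; the function name mixup_kd_step and the weights alpha, T, and lam_kd are illustrative.

    # Minimal sketch (assumption: PyTorch models `student` and `teacher`
    # that map a batch of sensor windows of shape (B, C, T_len) to logits).
    import torch
    import torch.nn.functional as F

    def mixup_kd_step(student, teacher, x, y, num_classes,
                      alpha=0.2, T=4.0, lam_kd=0.5):
        """One training step combining mixup with a KD soft-target loss."""
        # Sample the mixup coefficient and a random pairing of the batch.
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        perm = torch.randperm(x.size(0))
        x_mix = lam * x + (1.0 - lam) * x[perm]

        # Smoothed (mixed) labels, analogous to the teacher's soft targets.
        y_onehot = F.one_hot(y, num_classes).float()
        y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]

        # Teacher provides a softened distribution on the mixed input.
        with torch.no_grad():
            t_logits = teacher(x_mix)
        s_logits = student(x_mix)

        # Cross-entropy against mixed labels + KL to the teacher's soft targets.
        ce = -(y_mix * F.log_softmax(s_logits, dim=1)).sum(dim=1).mean()
        kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                      F.softmax(t_logits / T, dim=1),
                      reduction="batchmean") * (T * T)
        return (1.0 - lam_kd) * ce + lam_kd * kd

In the multiple-teacher setting described above, the KL term would be repeated for each teacher (e.g., one trained on time-series features and one on topological-persistence features) and the resulting terms weighted accordingly.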

Original language: English (US)
Pages (from-to): 5853-5865
Number of pages: 13
Journal: IEEE Sensors Journal
Volume: 25
Issue number: 3
State: Published - 2025

Keywords

  • Knowledge distillation (KD)
  • time series
  • topological persistence
  • wearable sensor data

ASJC Scopus subject areas

  • Instrumentation
  • Electrical and Electronic Engineering
