TY - GEN
T1 - Plug-n-learn
T2 - 53rd Annual ACM/IEEE Design Automation Conference, DAC 2016
AU - Rokni, Seyed Ali
AU - Ghasemzadeh, Hassan
N1 - Publisher Copyright:
© 2016 ACM.
PY - 2016/6/5
Y1 - 2016/6/5
N2 - Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage computational and machine learning algorithms to detect events of interest such as physical activities and medical complications. A major obstacle to large-scale utilization of current wearables is that their computational algorithms need to be rebuilt from scratch upon any change in the configuration of the network. Retraining these algorithms requires a significant amount of labeled training data, a process that is labor-intensive, time-consuming, and often infeasible. We propose an approach for automatic retraining of the machine learning algorithms in real time without the need for any labeled training data. We measure the inherent correlation between observations made by an old sensor view, for which trained algorithms exist, and a new sensor view, for which an algorithm needs to be developed. By applying our real-time multi-view autonomous learning approach, we achieve an accuracy of 80.66% in activity recognition, a 15.96% improvement in accuracy due to the automatic labeling of the data in the new sensor node. This performance is only 7.96% lower than the experimental upper bound, where labeled training data are collected with the new sensor.
AB - Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage computational and machine learning algorithms to detect events of interest such as physical activities and medical complications. A major obstacle to large-scale utilization of current wearables is that their computational algorithms need to be rebuilt from scratch upon any change in the configuration of the network. Retraining these algorithms requires a significant amount of labeled training data, a process that is labor-intensive, time-consuming, and often infeasible. We propose an approach for automatic retraining of the machine learning algorithms in real time without the need for any labeled training data. We measure the inherent correlation between observations made by an old sensor view, for which trained algorithms exist, and a new sensor view, for which an algorithm needs to be developed. By applying our real-time multi-view autonomous learning approach, we achieve an accuracy of 80.66% in activity recognition, a 15.96% improvement in accuracy due to the automatic labeling of the data in the new sensor node. This performance is only 7.96% lower than the experimental upper bound, where labeled training data are collected with the new sensor.
UR - http://www.scopus.com/inward/record.url?scp=84977085563&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84977085563&partnerID=8YFLogxK
U2 - 10.1145/2897937.2898066
DO - 10.1145/2897937.2898066
M3 - Conference contribution
AN - SCOPUS:84977085563
T3 - Proceedings - Design Automation Conference
BT - Proceedings of the 53rd Annual Design Automation Conference, DAC 2016
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 5 June 2016 through 9 June 2016
ER -