TY - GEN
T1 - Generating Active Explicable Plans in Human-Robot Teaming
AU - Hanni, Akkamahadevi
AU - Zhang, Yu
N1 - Funding Information:
ACKNOWLEDGMENT This research is supported in part by the NSF grants 1844524, 2047186, the NASA grant NNX17AD06G, and the
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
AB - Intelligent robots are redefining a multitude of critical domains but are still far from being fully capable of assisting human peers in day-to-day tasks. An important requirement of collaboration is for each teammate to maintain and respect an understanding of the others' expectations of itself. A lack of such understanding may lead to serious issues such as loose coordination between teammates, reduced situation awareness, and ultimately teaming failures. Hence, it is important for robots to behave explicably by meeting the human's expectations. One of the challenges here is that the expectations of the human are often hidden and can change dynamically as the human interacts with the robot. However, existing approaches to generating explicable plans often assume that the human's expectations are known and static. In this paper, we propose the idea of active explicable planning to relax this assumption. We apply a Bayesian approach to model and predict dynamic human belief and expectations to make explicable planning more anticipatory. We hypothesize that active explicable plans can be more efficient and explicable at the same time when compared to explicable plans generated by the existing methods. In our experimental evaluation, we verify that our approach generates more efficient explicable plans while successfully capturing the dynamic belief change of the human teammate.
UR - http://www.scopus.com/inward/record.url?scp=85124366328&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124366328&partnerID=8YFLogxK
U2 - 10.1109/IROS51168.2021.9636643
DO - 10.1109/IROS51168.2021.9636643
M3 - Conference contribution
AN - SCOPUS:85124366328
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 2993
EP - 2998
BT - IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021
Y2 - 27 September 2021 through 1 October 2021
ER -