TY - GEN
T1 - What is it you really want of me? Generalized reward learning with biased beliefs about domain dynamics
AU - Gong, Ze
AU - Zhang, Yu
N1 - Funding Information:
This research is supported in part by the NSF grant IIS-1844524, the NASA grant NNX17AD06G, and the AFOSR grant FA9550-18-1-0067.
Publisher Copyright:
Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2020
Y1 - 2020
N2 - Reward learning as a method for inferring human intent and preferences has been studied extensively. Prior approaches make an implicit assumption that the human maintains a correct belief about the robot’s domain dynamics. However, this may not always hold: the human’s belief may be biased, which can ultimately lead to a misguided estimate of the human’s intent and preferences, since such estimates are often derived from human feedback on the robot’s behaviors. In this paper, we remove this restrictive assumption by considering that the human may have an inaccurate understanding of the robot. We propose a method called Generalized Reward Learning with biased beliefs about domain dynamics (GeReL) to infer both the reward function and the human’s belief about the robot in a Bayesian setting based on human ratings. Because the posteriors take complex forms, we formulate the problem as variational inference, simultaneously inferring the posteriors of the parameters that govern the reward function and the human’s belief about the robot. We evaluate our method in a simulated domain and in a user study where the user forms a bias based on the robot’s appearance. The results show that our method can recover the true human preferences even when the human holds such biased beliefs, in contrast to prior approaches that could have misinterpreted them completely.
AB - Reward learning as a method for inferring human intent and preferences has been studied extensively. Prior approaches make an implicit assumption that the human maintains a correct belief about the robot’s domain dynamics. However, this may not always hold: the human’s belief may be biased, which can ultimately lead to a misguided estimate of the human’s intent and preferences, since such estimates are often derived from human feedback on the robot’s behaviors. In this paper, we remove this restrictive assumption by considering that the human may have an inaccurate understanding of the robot. We propose a method called Generalized Reward Learning with biased beliefs about domain dynamics (GeReL) to infer both the reward function and the human’s belief about the robot in a Bayesian setting based on human ratings. Because the posteriors take complex forms, we formulate the problem as variational inference, simultaneously inferring the posteriors of the parameters that govern the reward function and the human’s belief about the robot. We evaluate our method in a simulated domain and in a user study where the user forms a bias based on the robot’s appearance. The results show that our method can recover the true human preferences even when the human holds such biased beliefs, in contrast to prior approaches that could have misinterpreted them completely.
UR - http://www.scopus.com/inward/record.url?scp=85104925954&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85104925954&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85104925954
T3 - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
SP - 2485
EP - 2492
BT - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
PB - AAAI Press
T2 - 34th AAAI Conference on Artificial Intelligence, AAAI 2020
Y2 - 7 February 2020 through 12 February 2020
ER -