TY - GEN
T1 - Assistive relative pose estimation for on-orbit assembly using convolutional neural networks
AU - Sonawani, Shubham
AU - Alimo, Ryan
AU - Detry, Renaud
AU - Jeong, Daniel
AU - Hess, Andrew
AU - Amor, Heni Ben
N1 - Funding Information:
The authors gratefully acknowledge funding from the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (NASA) in support of this work. The authors thank Vincenzo Capuano, Kyunam Kim, Kasra Yazdani, Kingson Man, and Jonathan Chu for the fruitful discussions, and Amir Rahmani, David Hanks, Adrian Stoica, Navid Dehghani, Leon Alkalai, and Fred Hadaegh for their support.
Publisher Copyright:
© 2020, American Institute of Aeronautics and Astronautics Inc, AIAA. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Accurate real-time pose estimation of spacecraft or objects in space is a key capability for on-orbit spacecraft servicing and assembly tasks. Pose estimation of objects in space is more challenging than for objects on Earth because space images contain widely varying illumination conditions, high contrast, and poor resolution, in addition to power and mass constraints. In this paper, a convolutional neural network is leveraged to uniquely determine the translation and rotation of an object of interest relative to the camera. The main idea of using a CNN model is to assist the object tracker used in in-space assembly tasks, where a feature-based method alone is not always sufficient. The simulation framework designed for the assembly task is used to generate a dataset for training the modified CNN models, and the results of the different models are then compared by measuring how accurately each model predicts the pose. Unlike many current approaches to pose estimation of spacecraft or objects in space, the model does not rely on hand-crafted, object-specific features, which makes it more robust and easier to apply to other types of spacecraft. It is shown that the model performs comparably to current feature-selection methods and can therefore be used in conjunction with them to provide more reliable estimates.
AB - Accurate real-time pose estimation of spacecraft or objects in space is a key capability for on-orbit spacecraft servicing and assembly tasks. Pose estimation of objects in space is more challenging than for objects on Earth because space images contain widely varying illumination conditions, high contrast, and poor resolution, in addition to power and mass constraints. In this paper, a convolutional neural network is leveraged to uniquely determine the translation and rotation of an object of interest relative to the camera. The main idea of using a CNN model is to assist the object tracker used in in-space assembly tasks, where a feature-based method alone is not always sufficient. The simulation framework designed for the assembly task is used to generate a dataset for training the modified CNN models, and the results of the different models are then compared by measuring how accurately each model predicts the pose. Unlike many current approaches to pose estimation of spacecraft or objects in space, the model does not rely on hand-crafted, object-specific features, which makes it more robust and easier to apply to other types of spacecraft. It is shown that the model performs comparably to current feature-selection methods and can therefore be used in conjunction with them to provide more reliable estimates.
UR - http://www.scopus.com/inward/record.url?scp=85092349332&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092349332&partnerID=8YFLogxK
U2 - 10.2514/6.2020-2096
DO - 10.2514/6.2020-2096
M3 - Conference contribution
AN - SCOPUS:85092349332
SN - 9781624105951
T3 - AIAA Scitech 2020 Forum
SP - 1
EP - 11
BT - AIAA Scitech 2020 Forum
PB - American Institute of Aeronautics and Astronautics Inc, AIAA
T2 - AIAA Scitech Forum, 2020
Y2 - 6 January 2020 through 10 January 2020
ER -