TY - GEN
T1 - Weakly Supervised Relative Spatial Reasoning for Visual Question Answering
AU - Banerjee, Pratyay
AU - Gokhale, Tejas
AU - Yang, Yezhou
AU - Baral, Chitta
N1 - Funding Information:
The authors acknowledge support from the NSF via grants #1750082 and #1816039, DARPA SAIL-ON program #W911NF2020006, and ONR award #N00014-20-1-2332.
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Vision-and-language (V&L) reasoning necessitates perception of visual concepts such as objects and actions, understanding semantics and language grounding, and reasoning about the interplay between the two modalities. One crucial aspect of visual reasoning is spatial understanding, which involves understanding the relative locations of objects, i.e., implicitly learning the geometry of the scene. In this work, we evaluate the faithfulness of V&L models to such geometric understanding by formulating the prediction of pairwise relative locations of objects as both a classification and a regression task. Our findings suggest that state-of-the-art transformer-based V&L models lack the abilities needed to excel at this task. Motivated by this, we design two objectives as proxies for 3D spatial reasoning (SR): object centroid estimation and relative position estimation, and train V&L models with weak supervision from off-the-shelf depth estimators. This leads to considerable improvements in accuracy on the “GQA” visual question answering challenge (in fully supervised, few-shot, and out-of-distribution settings), as well as improvements in relative spatial reasoning. Code and data will be released here.
AB - Vision-and-language (V&L) reasoning necessitates perception of visual concepts such as objects and actions, understanding semantics and language grounding, and reasoning about the interplay between the two modalities. One crucial aspect of visual reasoning is spatial understanding, which involves understanding the relative locations of objects, i.e., implicitly learning the geometry of the scene. In this work, we evaluate the faithfulness of V&L models to such geometric understanding by formulating the prediction of pairwise relative locations of objects as both a classification and a regression task. Our findings suggest that state-of-the-art transformer-based V&L models lack the abilities needed to excel at this task. Motivated by this, we design two objectives as proxies for 3D spatial reasoning (SR): object centroid estimation and relative position estimation, and train V&L models with weak supervision from off-the-shelf depth estimators. This leads to considerable improvements in accuracy on the “GQA” visual question answering challenge (in fully supervised, few-shot, and out-of-distribution settings), as well as improvements in relative spatial reasoning. Code and data will be released here.
UR - http://www.scopus.com/inward/record.url?scp=85127796180&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127796180&partnerID=8YFLogxK
U2 - 10.1109/ICCV48922.2021.00192
DO - 10.1109/ICCV48922.2021.00192
M3 - Conference contribution
AN - SCOPUS:85127796180
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 1888
EP - 1898
BT - Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021
Y2 - 11 October 2021 through 17 October 2021
ER -