Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/138398
Type: Conference paper
Title: Look no deeper: Recognizing places from opposing viewpoints under varying scene appearance using single-view depth estimation
Author: Garg, S.
Babu, M.V.
Dharmasiri, T.
Hausler, S.
Suenderhauf, N.
Kumar, S.
Drummond, T.
Milford, M.
Citation: IEEE International Conference on Robotics and Automation, 2019 / Howard, A., Althoefer, K., Arai, F., Arrichiello, F., Caputo, B., Castellanos, J., Hauser, K., Isler, V., Kim, J., Liu, H., Oh, P., Santos, V., Scaramuzza, D., Ude, A., Voyles, R., Yamane, K., Okamura, A. (ed./s), vol.2019-May, pp.4916-4923
Publisher: IEEE
Publisher Place: Piscataway, NJ, USA
Issue Date: 2019
Series/Report no.: IEEE International Conference on Robotics and Automation (ICRA)
ISBN: 9781538660263
ISSN: 1050-4729
2577-087X
Conference Name: International Conference on Robotics and Automation (ICRA) (20 May 2019 - 24 May 2019 : Montreal, Canada)
Editor: Howard, A.
Althoefer, K.
Arai, F.
Arrichiello, F.
Caputo, B.
Castellanos, J.
Hauser, K.
Isler, V.
Kim, J.
Liu, H.
Oh, P.
Santos, V.
Scaramuzza, D.
Ude, A.
Voyles, R.
Yamane, K.
Okamura, A.
Statement of Responsibility: Sourav Garg, Madhu Babu V, Thanuja Dharmasiri, Stephen Hausler, Niko Suenderhauf, Swagat Kumar, Tom Drummond, Michael Milford
Abstract: Visual place recognition (VPR) - the act of recognizing a familiar visual place - becomes difficult when there is extreme environmental appearance change or viewpoint change. Particularly challenging is the scenario where both phenomena occur simultaneously, such as when returning for the first time along a road at night that was previously traversed during the day in the opposite direction. While such problems can be solved with panoramic sensors, humans solve this problem regularly with limited field-of-view vision and without needing to constantly turn around. In this paper, we present a new depth- and temporal-aware visual place recognition system that solves the opposing-viewpoint, extreme appearance-change visual place recognition problem. Our system performs sequence-to-single-frame matching by extracting depth-filtered keypoints using a state-of-the-art depth estimation pipeline, constructing a keypoint sequence over multiple frames from the reference dataset, and comparing these keypoints to the keypoints extracted from a single query image. We evaluate the system on a challenging benchmark dataset and show that it consistently outperforms state-of-the-art techniques. We also develop a range of diagnostic simulation experiments that characterize the contribution of depth-filtered keypoint sequences with respect to key domain parameters, including the degree of appearance change and camera motion.
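The sketch below is not the authors' implementation; it is a minimal illustration of the idea stated in the abstract (discard keypoints whose estimated single-view depth is too large, pool the surviving descriptors over a short reference sequence, and match a single query frame against that pool). The depth estimator, keypoint extractor, depth threshold, and descriptor dimensionality used here are hypothetical stand-ins, not details taken from the paper.

```python
# Illustrative sketch of depth-filtered, sequence-to-single-frame matching.
# estimate_depth() and extract_keypoints() are random placeholders standing in
# for a real single-view depth network and a learned local feature extractor.
import numpy as np


def estimate_depth(image: np.ndarray) -> np.ndarray:
    """Placeholder for a single-view depth estimation network (hypothetical)."""
    return np.random.uniform(1.0, 50.0, size=image.shape[:2])


def extract_keypoints(image: np.ndarray, n: int = 200):
    """Placeholder feature extractor: (x, y) locations and unit-norm descriptors."""
    h, w = image.shape[:2]
    pts = np.column_stack([np.random.randint(0, w, n), np.random.randint(0, h, n)])
    desc = np.random.randn(n, 128)
    return pts, desc / np.linalg.norm(desc, axis=1, keepdims=True)


def depth_filtered_descriptors(image: np.ndarray, max_depth: float = 20.0) -> np.ndarray:
    """Keep only descriptors of keypoints closer than max_depth (assumed threshold)."""
    depth = estimate_depth(image)
    pts, desc = extract_keypoints(image)
    keep = depth[pts[:, 1], pts[:, 0]] < max_depth
    return desc[keep]


def sequence_to_single_frame_score(reference_seq, query_image) -> float:
    """Pool depth-filtered descriptors over a reference sequence and score one query frame."""
    ref_desc = np.vstack([depth_filtered_descriptors(img) for img in reference_seq])
    q_desc = depth_filtered_descriptors(query_image)
    # Cosine similarity of each query descriptor to its best match in the pooled set.
    sims = q_desc @ ref_desc.T
    return float(sims.max(axis=1).mean())


if __name__ == "__main__":
    seq = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(5)]
    query = np.zeros((240, 320, 3), dtype=np.uint8)
    print("match score:", sequence_to_single_frame_score(seq, query))
```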
Rights: ©2019 IEEE
DOI: 10.1109/ICRA.2019.8794178
Grant ID: http://purl.org/au-research/grants/arc/FT140101229
Published version: https://ieeexplore.ieee.org/xpl/conhome/8780387/proceeding
Appears in Collections: Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.
