Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/138395
Type: Conference paper
Title: Don't look back: Robustifying place categorization for viewpoint- and condition-invariant place recognition
Author: Garg, S.
Suenderhauf, N.
Milford, M.
Citation: IEEE International Conference on Robotics and Automation, 2018, pp.3645-3652
Publisher: IEEE
Publisher Place: Piscataway, NJ.
Issue Date: 2018
Series/Report no.: IEEE International Conference on Robotics and Automation ICRA
ISBN: 1538630818
9781538630815
ISSN: 1050-4729
2577-087X
Conference Name: IEEE International Conference on Robotics and Automation (ICRA) (21 May 2018 - 25 May 2018 : Brisbane, Australia)
Statement of Responsibility: Sourav Garg, Niko Suenderhauf and Michael Milford
Abstract: When a human drives a car along a road for the first time, they can later recognize where they are on the return journey, typically without needing to look in their rear-view mirror or turn around to look back, despite significant viewpoint and appearance change. Such navigation capabilities are typically attributed to our semantic visual understanding of the environment [1] beyond geometry: recognizing the types of places we are passing through, such as "passing a shop on the left" or "moving through a forested area". Humans are in effect using place categorization [2] to perform specific place recognition even when the viewpoint is reversed by 180 degrees. Recent advances in deep neural networks have enabled high-performance semantic understanding of visual places and scenes, opening up the possibility of emulating what humans do. In this work, we develop a novel methodology for using the semantics-aware higher-order layers of deep neural networks to recognize specific places from within a reference database. To further improve robustness to appearance change, we develop a descriptor normalization scheme that builds on the success of normalization schemes for purely appearance-based techniques such as SeqSLAM [3]. Using two different datasets, one road-based and one pedestrian-based, we evaluate the performance of the system at place recognition on reverse traversals of a route with a limited field-of-view camera and no turn-back-and-look behaviours, and compare it against existing state-of-the-art techniques and vanilla off-the-shelf features. The results demonstrate significant improvements over the existing state of the art, especially for extreme perceptual challenges that involve both large viewpoint change and environmental appearance change.
We also provide experimental analyses of the contributions of the various system components: the use of spatio-temporal sequences, place categorization, and place-centric (as opposed to object-centric) semantics.
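The abstract mentions a descriptor normalization scheme building on SeqSLAM-style normalization. As a rough illustration only (the paper's exact scheme is not reproduced here), a minimal sketch of the general idea is to standardize each descriptor to zero mean and unit variance before matching; the function name and the `eps` stabilizer are illustrative assumptions, not the authors' API:

```python
import numpy as np

def normalize_descriptor(desc, eps=1e-8):
    """Standardize a feature descriptor to zero mean and unit variance.

    This is an illustrative sketch of SeqSLAM-style normalization,
    which makes descriptor comparisons less sensitive to global
    appearance shifts (e.g. lighting changes between traversals).
    `eps` guards against division by zero for constant descriptors.
    """
    desc = np.asarray(desc, dtype=np.float64)
    return (desc - desc.mean()) / (desc.std() + eps)
```

After normalization, descriptors from the two traversals can be compared with a simple distance (e.g. Euclidean or cosine), with the normalization absorbing uniform appearance change.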
Rights: ©2018 IEEE
DOI: 10.1109/ICRA.2018.8461051
Grant ID: http://purl.org/au-research/grants/arc/CE140100016
http://purl.org/au-research/grants/arc/FT140101229
Published version: https://ieeexplore.ieee.org/xpl/conhome/8449910/proceeding
Appears in Collections: Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.