Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/124481
Type: Conference paper
Title: Just-in-time reconstruction: Inpainting sparse maps using single view depth predictors as priors
Author: Weerasekera, C.S.
Dharmasiri, T.
Garg, R.
Drummond, T.
Reid, I.
Citation: IEEE International Conference on Robotics and Automation, 2018, pp.4977-4984
Publisher: IEEE
Publisher Place: Piscataway, NJ.
Issue Date: 2018
Series/Report no.: IEEE International Conference on Robotics and Automation ICRA
ISBN: 1538630818; 9781538630815
ISSN: 1050-4729; 2577-087X
Conference Name: IEEE International Conference on Robotics and Automation (ICRA) (21 May 2018 - 25 May 2018 : Brisbane, Australia)
Statement of Responsibility: Chamara Saroj Weerasekera, Thanuja Dharmasiri, Ravi Garg, Tom Drummond and Ian Reid
Abstract: We present “just-in-time reconstruction” as real-time image-guided inpainting of a map with arbitrary scale and sparsity to generate a fully dense depth map for the image. In particular, our goal is to inpaint a sparse map — obtained from either a monocular visual SLAM system or a sparse sensor — using a single-view depth prediction network as a virtual depth sensor. We adopt a fairly standard approach to data fusion, producing a fused depth map by performing inference over a novel fully-connected Conditional Random Field (CRF) which is parameterized by the input depth maps and their pixel-wise confidence weights. Crucially, we obtain the confidence weights that parameterize the CRF model in a data-dependent manner via Convolutional Neural Networks (CNNs) which are trained to model the conditional depth error distributions given each source of input depth map and the associated RGB image. Our CRF model penalises absolute depth error in its nodes and pairwise scale-invariant depth error in its edges, and the confidence-based fusion minimizes the impact of outlier input depth values on the fused result. We demonstrate the flexibility of our method by real-time inpainting of ORB-SLAM, Kinect, and LIDAR depth maps acquired both indoors and outdoors at arbitrary scale and varying amounts of irregular sparsity.
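The core idea of the abstract — fusing a sparse measured depth map with a dense CNN depth prior, weighted by per-pixel confidences — can be illustrated with a minimal sketch. Note this toy implements only the node (unary) terms of a confidence-weighted fusion; the paper's full method also uses pairwise scale-invariant edge terms in a fully-connected CRF, and its confidence weights come from trained CNNs rather than the constant maps assumed here. All function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def fuse_depths(depth_sparse, depth_prior, w_sparse, w_prior, eps=1e-8):
    """Confidence-weighted per-pixel fusion of two depth maps (unary terms only).

    Pixels where the sparse map is invalid (depth == 0) get zero weight,
    so the dense prior "inpaints" them; where both sources are valid, the
    result is their confidence-weighted average.
    """
    w1 = np.where(depth_sparse > 0, w_sparse, 0.0)
    w2 = np.where(depth_prior > 0, w_prior, 0.0)
    return (w1 * depth_sparse + w2 * depth_prior) / (w1 + w2 + eps)

# Toy example: a 2x2 sparse map (zeros = missing) fused with a dense prior.
sparse = np.array([[2.0, 0.0],
                   [0.0, 4.0]])
dense  = np.array([[2.2, 3.0],
                   [1.5, 3.8]])
conf_s = np.full((2, 2), 0.9)   # assumed high confidence for measured depth
conf_d = np.full((2, 2), 0.3)   # assumed lower confidence for the CNN prior
fused = fuse_depths(sparse, dense, conf_s, conf_d)
```

At missing pixels the fused depth equals the prior (e.g. 3.0 at the top-right), while at measured pixels it lands between the two sources, pulled toward the higher-confidence measurement.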
Rights: ©2018 IEEE
DOI: 10.1109/ICRA.2018.8460549
Grant ID: http://purl.org/au-research/grants/arc/FL130100102
http://purl.org/au-research/grants/arc/CE140100016
Published version: https://ieeexplore.ieee.org/xpl/conhome/8449910/proceeding
Appears in Collections:Aurora harvest 4
Computer Science publications

Files in This Item:
There are no files associated with this item.
