Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/87284
Type: Conference paper
Title: On combining visual SLAM and visual odometry
Author: Williams, B.
Reid, I.
Citation: IEEE International Conference on Robotics and Automation, 2010, pp.3494-3500
Publisher: IEEE
Publisher Place: USA
Issue Date: 2010
Series/Report no.: IEEE International Conference on Robotics and Automation ICRA
ISBN: 9781424450404
ISSN: 1050-4729; 2577-087X
Conference Name: IEEE International Conference on Robotics and Automation (ICRA) (3 May 2010 - 7 May 2010 : Anchorage, AK)
Statement of Responsibility: Brian Williams and Ian Reid
Abstract: Sequential monocular SLAM systems perform drift free tracking of the pose of a camera relative to a jointly estimated map of landmarks. To allow real-time operation in moderately sized environments, the map is kept quite sparse, with usually only tens of landmarks visible in each frame. In contrast, visual odometry techniques track hundreds of visual features per frame. This leads to a very accurate estimate of the relative camera motion, but without a persistent map, the estimate tends to drift over time. We demonstrate a new monocular SLAM system which combines the benefits of these two techniques. In addition to maintaining a sparse map of landmarks in the world, our system finds as many inter-frame point matches as possible. These point matches provide additional constraints on the inter-frame motion of the camera, leading to a more accurate pose estimate, and, since they are not maintained as full map landmarks, they do not cause a large increase in the computational cost. Our results in both a simulated environment and in real video demonstrate the improvement in estimation accuracy gained by the inclusion of visual odometry style observations. The constraints available from pairwise point matches are most naturally cast in the context of a camera-centric rather than world-centric frame. To that end we recast the usual world-centric EKF implementation of visual SLAM in a robo-centric frame. We show that this robo-centric visual SLAM, as expected, leads to the estimated uncertainty more closely matching the ideal uncertainty; i.e., that robo-centric visual SLAM yields a more consistent estimate than the traditional world-centric EKF algorithm.
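
The following is a minimal sketch, not the authors' implementation, of the two ideas described in the abstract: keeping the sparse map in a robo-centric (camera-centred) frame that is re-composed after each camera motion, and using visual-odometry-style inter-frame point matches as extra reprojection constraints on the relative motion without adding them to the persistent map. Function names, the assumed pinhole model, and the use of known per-match depths are illustrative assumptions only.

import numpy as np

def compose_robocentric(landmarks_cam, R_rel, t_rel):
    """Re-express sparse map landmarks in the new camera frame (robo-centric update).

    landmarks_cam : (N, 3) landmark positions in the previous camera frame
    R_rel, t_rel  : rotation (3x3) and translation (3,) of the new camera pose,
                    expressed in the previous camera frame
    """
    # Rigid-body change of frame: p_new = R_rel^T (p_old - t_rel)
    return (landmarks_cam - t_rel) @ R_rel

def interframe_residuals(pts_prev, pts_curr, depths_prev, R_rel, t_rel, K):
    """Reprojection residuals from visual-odometry-style inter-frame matches.

    pts_prev, pts_curr : (M, 2) matched pixel coordinates in the two frames
    depths_prev        : (M,) assumed depths of the matches in the previous frame
                         (hypothetical; the paper does not prescribe this parameterisation)
    K                  : 3x3 pinhole intrinsic matrix

    These matches only constrain (R_rel, t_rel); they are never inserted into the map,
    so the filter state stays small.
    """
    # Back-project previous-frame pixels to 3D using the assumed depths
    ones = np.ones((pts_prev.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_prev, ones]).T).T
    pts3d_prev = rays * depths_prev[:, None]
    # Transform into the current camera frame and project back to pixels
    pts3d_curr = (pts3d_prev - t_rel) @ R_rel
    proj = (K @ pts3d_curr.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    # Residuals that an EKF-style update (or a relative-pose optimiser) could minimise
    return (proj - pts_curr).ravel()

In a filter of this style, compose_robocentric keeps the map expressed relative to the current camera after each prediction step, while interframe_residuals supplies the additional per-frame measurement constraints that the abstract credits for the improved pose accuracy.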
Rights: ©2010 IEEE
DOI: 10.1109/ROBOT.2010.5509248
Published version: http://dx.doi.org/10.1109/robot.2010.5509248
Appears in Collections: Aurora harvest 7
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.