Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/108877
Type: Conference paper
Title: Hand parsing for fine-grained recognition of human grasps in monocular images
Author: Saran, A.; Teney, D.; Kitani, K.
Citation: Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, 2015, vol. 2015-December, pp. 5052-5058
Publisher: IEEE
Issue Date: 2015
Series/Report no.: IEEE International Conference on Intelligent Robots and Systems
ISBN: 9781479999941
ISSN: 2153-0858; 2153-0866
Conference Name: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (28 Sep 2015 - 2 Oct 2015 : Hamburg, Germany)
Statement of Responsibility: Akanksha Saran, Damien Teney and Kris M. Kitani
Abstract: We propose a novel method for performing fine-grained recognition of human hand grasp types using a single monocular image, to allow computational systems to better understand human hand use. In particular, we focus on recognizing challenging grasp categories which differ only by subtle variations in finger configurations. While much of the prior work on understanding human hand grasps has been based on manual detection of grasps in video, this is the first work to automate the analysis process for fine-grained grasp classification. Instead of attempting to utilize a parametric model of the hand, we propose a hand parsing framework which leverages data-driven learning to generate a pixel-wise segmentation of a hand into finger and palm regions. The proposed approach makes use of appearance-based cues such as finger texture and hand shape to accurately determine hand parts. We then build on the hand parsing result to compute high-level grasp features and learn a supervised fine-grained grasp classifier. To validate our approach, we introduce a grasp dataset recorded with a wearable camera, where the hand and its parts have been manually segmented with pixel-wise accuracy. Our results show that our proposed automatic hand parsing technique can improve grasp classification accuracy by over 30 percentage points over a state-of-the-art grasp recognition technique.
Rights: © 2015 IEEE
DOI: 10.1109/IROS.2015.7354088
Published version: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7347169
Appears in Collections: Aurora harvest 3; Computer Science publications
Files in This Item: There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.