Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/70428
Type: Conference paper
Title: Superpixel-Based Object Class Segmentation Using Conditional Random Fields
Author: Li, Xi
Sahbi, Hichem
Citation: 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing : Proceedings, 2011: pp.1101-1104
Publisher: IEEE
Issue Date: 2011
ISBN: 9781457705397
Conference Name: IEEE International Conference on Acoustics, Speech and Signal Processing (36th : 2011 : Prague, Czech Republic)
ICASSP 2011
School/Discipline: School of Computer Science
Statement of Responsibility: Xi Li and Hichem Sahbi
Abstract: Object class segmentation (OCS) is a key issue in semantic scene labeling and understanding. Its general principle consists of assigning labels to object entities in scenes according to their intrinsic visual features as well as their dependencies. In this paper, we propose a novel superpixel-based framework for object class segmentation using conditional random fields (CRFs). The framework proceeds in two steps: (i) superpixel label estimation; and (ii) CRF label propagation. Step (i) is achieved using multi-scale boosted classifiers over superpixels and makes it possible to find coarse estimates of initial labels. Fine labeling is afterward achieved in Step (ii), using an anisotropic contrast-sensitive pairwise function designed to characterize the intrinsic interaction potentials between objects over 4-neighborhoods. Finally, a higher-order criterion is applied to enforce region-level label consistency. Experimental results demonstrate the effectiveness of the proposed framework.
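The abstract's Step (ii) combines unary classifier costs with an anisotropic, contrast-sensitive pairwise term over 4-neighborhoods. The paper itself gives no code; the following is a minimal sketch of the standard energy this description corresponds to (a Potts penalty down-weighted at high image contrast, with separate horizontal/vertical weights for anisotropy). The function name `crf_energy` and the parameters `beta`, `w_h`, `w_v` are illustrative assumptions, not the authors' notation, and the unary costs are taken as given rather than produced by the paper's multi-scale boosted classifiers.

```python
import numpy as np

def crf_energy(unary, labels, img, beta=2.0, w_h=1.0, w_v=1.0):
    """Energy of a labeling under a contrast-sensitive Potts CRF (illustrative).

    unary  : (H, W, K) array of per-label costs (e.g. negative classifier scores)
    labels : (H, W) integer label map being evaluated
    img    : (H, W) grayscale intensities used for contrast weighting
    beta   : contrast sensitivity; w_h, w_v give the anisotropic weights
    """
    H, W, K = unary.shape
    # Unary term: cost of the chosen label at every pixel.
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()

    # Horizontal 4-neighborhood pairs: penalize label changes, but less so
    # where the image contrast is high (likely object boundaries).
    diff = labels[:, 1:] != labels[:, :-1]
    contrast = np.exp(-beta * (img[:, 1:] - img[:, :-1]) ** 2)
    e += w_h * (contrast * diff).sum()

    # Vertical pairs, with an independent weight (the anisotropic part).
    diff = labels[1:, :] != labels[:-1, :]
    contrast = np.exp(-beta * (img[1:, :] - img[:-1, :]) ** 2)
    e += w_v * (contrast * diff).sum()
    return float(e)
```

In practice this energy would be minimized by a standard CRF inference routine (e.g. graph cuts) rather than evaluated directly; the sketch only shows how the unary and pairwise terms described in the abstract fit together.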
Rights: Copyright ©2011 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
DOI: 10.1109/ICASSP.2011.5946600
Description (link): http://www.icassp2011.com/en/welcome
Appears in Collections:Computer Science publications

Files in This Item:
There are no files associated with this item.