Learning complementary saliency priors for foreground object segmentation in complex scenes

Tian, Yonghong, Li, Jia, Yu, Shui and Huang, Tiejun 2015, Learning complementary saliency priors for foreground object segmentation in complex scenes, International journal of computer vision, vol. 111, no. 2, pp. 153-170, doi: 10.1007/s11263-014-0737-1.


Title Learning complementary saliency priors for foreground object segmentation in complex scenes
Author(s) Tian, Yonghong
Li, Jia
Yu, Shui (ORCID: orcid.org/0000-0003-4485-6743)
Huang, Tiejun
Journal name International journal of computer vision
Volume number 111
Issue number 2
Start page 153
End page 170
Total pages 18
Publisher Springer
Place of publication Berlin, Germany
Publication date 2015-01
ISSN 0920-5691 (print), 1573-1405 (online)
Keyword(s) Complementary saliency map
Foreground object segmentation
Graph cuts
Visual saliency
Science & Technology
Technology
Computer Science, Artificial Intelligence
Computer Science
REGION DETECTION
ENERGY MINIMIZATION
EXTRACTION
ATTENTION
MODEL
Summary Object segmentation is widely recognized as one of the most challenging problems in computer vision. One major problem of existing methods is that most of them are vulnerable to cluttered backgrounds. Moreover, human intervention is often required to specify foreground/background priors, which restricts the use of object segmentation in real-world scenarios. To address these problems, we propose a novel approach that learns complementary saliency priors for foreground object segmentation in complex scenes. Different from existing saliency-based segmentation approaches, we propose to learn two complementary saliency maps that reveal the most reliable foreground and background regions. Given such priors, foreground object segmentation is formulated as a binary pixel labelling problem that can be solved efficiently using graph cuts. In this way, the confident saliency priors can be used to extract the most salient objects and reduce distraction from cluttered backgrounds. Extensive experiments show that our approach markedly outperforms 16 state-of-the-art methods on three public image benchmarks.
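The graph-cut formulation the abstract describes, where two complementary saliency maps supply the foreground and background unary costs and a min-cut yields the binary labelling, can be sketched as follows. This is not the authors' implementation; it is a minimal illustration of the general technique, assuming integer-scaled saliency maps as unary terms, a 4-connected grid with a constant Potts smoothness penalty `lam`, and SciPy's `maximum_flow` as the min-cut solver.

```python
import numpy as np
from collections import deque
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

def segment(fg_sal, bg_sal, lam=1):
    """Binary pixel labelling via min-cut on a 4-connected grid.

    fg_sal, bg_sal : 2-D integer arrays (saliency scaled to ints, as
        scipy's maximum_flow requires integer capacities).
    Returns a boolean mask, True = foreground.
    """
    h, w = fg_sal.shape
    n = h * w
    src, snk = n, n + 1              # virtual source/sink terminals
    rows, cols, caps = [], [], []

    def add(u, v, c):
        rows.append(u); cols.append(v); caps.append(int(c))

    for y in range(h):
        for x in range(w):
            p = y * w + x
            add(src, p, fg_sal[y, x])   # cutting this = label p background
            add(p, snk, bg_sal[y, x])   # cutting this = label p foreground
            if x + 1 < w:               # horizontal smoothness edges
                add(p, p + 1, lam); add(p + 1, p, lam)
            if y + 1 < h:               # vertical smoothness edges
                add(p, p + w, lam); add(p + w, p, lam)

    g = csr_matrix((np.array(caps, dtype=np.int32), (rows, cols)),
                   shape=(n + 2, n + 2))
    res = maximum_flow(g, src, snk)
    residual = (g - res.flow).tocsr()   # leftover capacity on each arc

    # Pixels still reachable from the source in the residual graph lie
    # on the source side of the min cut, i.e. are labelled foreground.
    seen = np.zeros(n + 2, dtype=bool)
    seen[src] = True
    q = deque([src])
    while q:
        u = q.popleft()
        lo, hi = residual.indptr[u], residual.indptr[u + 1]
        for v, c in zip(residual.indices[lo:hi], residual.data[lo:hi]):
            if c > 0 and not seen[v]:
                seen[v] = True
                q.append(v)
    return seen[:n].reshape(h, w)
```

A pixel with a high foreground-saliency value makes its source arc expensive to cut, so it tends to stay on the source (foreground) side; the background map plays the symmetric role for the sink, which is how two complementary priors jointly constrain the cut.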
Language eng
DOI 10.1007/s11263-014-0737-1
Field of Research 080106 Image Processing
Socio Economic Objective 890106 Videoconference Services
HERDC Research category C1 Refereed article in a scholarly journal
ERA Research output type C Journal article
HERDC collection year 2014
Copyright notice ©2015, Springer
Persistent URL http://hdl.handle.net/10536/DRO/DU:30072508

Document type: Journal Article
Collection: School of Information Technology
Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Citation counts: Cited 19 times in TR Web of Science
Cited 23 times in Scopus
Access Statistics: 212 Abstract Views, 2 File Downloads
Created: Wed, 22 Apr 2015, 15:58:41 EST

Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.