
Scene image representation by foreground, background and hybrid features

journal contribution
posted on 2021-11-01 authored by Chiranjibi Sitaula, Yong Xiang, Sunil Aryal, Xuequan Lu
Previous deep learning-based methods for representing scene images primarily consider either foreground or background information as the discriminating clue for the classification task. However, scene images also require additional (hybrid) information to cope with inter-class similarity and intra-class variation. In this paper, we propose to use hybrid features, in addition to foreground and background features, to represent scene images, on the premise that these three types of information jointly yield a more accurate representation. To this end, we adopt three VGG-16 architectures pre-trained on the ImageNet, Places and Hybrid (both ImageNet and Places) datasets to extract foreground, background and hybrid information, respectively. These three types of deep features are then aggregated to form the final representation of each scene image. Extensive experiments on two large benchmark scene datasets (MIT-67 and SUN-397) show that our method achieves state-of-the-art classification performance.
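The sketch below illustrates the general idea described in the abstract, not the authors' released code: three VGG-16 backbones (one per pre-training source) each produce a deep feature vector for an image, and the three vectors are aggregated into one scene descriptor. It assumes the torchvision VGG-16 with ImageNet weights; the Places and Hybrid checkpoint paths are hypothetical placeholders, and simple concatenation stands in for whatever aggregation scheme the paper actually uses.

```python
# Minimal sketch (assumptions noted above): foreground (ImageNet), background
# (Places) and hybrid (ImageNet + Places) features from three VGG-16 backbones,
# concatenated into a single scene descriptor.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def vgg16_feature_extractor(state_dict_path=None):
    """VGG-16 truncated after fc7, yielding a 4096-D feature vector per image."""
    net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    # Drop the final class-score layer; keep everything up to fc7 (4096-D output).
    net.classifier = torch.nn.Sequential(*list(net.classifier.children())[:-1])
    if state_dict_path is not None:
        # Hypothetical checkpoint pre-trained on Places or Hybrid;
        # strict=False ignores the checkpoint's own classification head.
        net.load_state_dict(torch.load(state_dict_path, map_location="cpu"),
                            strict=False)
    net.eval()
    return net

# Three backbones: foreground (ImageNet), background (Places), hybrid (both).
foreground_net = vgg16_feature_extractor()
background_net = vgg16_feature_extractor("vgg16_places.pth")   # placeholder path
hybrid_net     = vgg16_feature_extractor("vgg16_hybrid.pth")   # placeholder path

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def scene_descriptor(image_path):
    """Concatenate the three 4096-D deep features into one scene representation."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = [net(x).squeeze(0)
                 for net in (foreground_net, background_net, hybrid_net)]
    return torch.cat(feats)  # 3 x 4096 = 12288-D descriptor

# Usage: descriptor = scene_descriptor("some_scene.jpg"), then feed the
# descriptor to any classifier (e.g. a linear SVM) for scene recognition.
```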

History

Journal

Expert Systems with Applications

Volume

182

Article number

115285

Pagination

1-10

Publisher

Elsevier

Location

Amsterdam, The Netherlands

ISSN

0957-4174

Language

eng

Publication classification

C1 Refereed article in a scholarly journal