Indoor image representation by high-level semantic features

Sitaula, Chiranjibi, Xiang, Yong, Zhang, Yushu, Lu, Xuequan and Aryal, Sunil 2019, Indoor image representation by high-level semantic features, IEEE Access, vol. 7, pp. 84967-84979, doi: 10.1109/ACCESS.2019.2925002.


Title Indoor image representation by high-level semantic features
Author(s) Sitaula, Chiranjibi
Xiang, Yong (ORCID: orcid.org/0000-0003-3545-7863)
Zhang, Yushu (ORCID: orcid.org/0000-0001-8183-8435)
Lu, Xuequan (ORCID: orcid.org/0000-0003-0959-408X)
Aryal, Sunil (ORCID: orcid.org/0000-0002-6639-6824)
Journal name IEEE Access
Volume number 7
Start page 84967
End page 84979
Total pages 13
Publisher Institute of Electrical and Electronics Engineers
Place of publication Piscataway, N.J.
Publication date 2019
ISSN 2169-3536
Keyword(s) Image classification
Feature extraction
Image representation
Objects pattern dictionary
Semantic objects
Summary Indoor image feature extraction is a fundamental problem in multiple fields such as image processing, pattern recognition, and robotics. Nevertheless, most existing feature extraction methods, which extract features based on pixels, color, shape/object parts, or objects in images, have limited capability to describe semantic information (e.g., object association). These techniques therefore yield unsatisfactory classification performance. To tackle this issue, we propose the notion of high-level semantic features and design four steps to extract them. Specifically, we first construct the objects pattern dictionary by extracting raw objects from the images, and then retrieve and extract semantic objects from the objects pattern dictionary. We finally extract our high-level semantic features based on the calculated probability and the delta parameter. Experiments on three publicly available datasets (MIT-67, Scene15, and NYU V1) show that our feature extraction approach outperforms state-of-the-art feature extraction methods for indoor image classification, despite our features having a lower dimension than those methods.
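The four-step pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual method: the function names, the use of raw frequency counts for the objects pattern dictionary, the top-k selection of semantic objects, and the interpretation of the delta parameter as a probability threshold are all assumptions made for illustration.

```python
from collections import Counter

def build_pattern_dictionary(images_objects):
    """Step 1 (sketch): pool raw object labels detected across the
    training images into an objects pattern dictionary
    (label -> occurrence count)."""
    dictionary = Counter()
    for labels in images_objects:
        dictionary.update(labels)
    return dictionary

def extract_semantic_objects(dictionary, top_k):
    """Steps 2-3 (sketch): retrieve the top_k most frequent labels
    from the dictionary as the semantic objects."""
    return [label for label, _ in dictionary.most_common(top_k)]

def semantic_feature(labels, semantic_objects, delta=0.0):
    """Step 4 (sketch): represent one image as a vector of label
    probabilities over the semantic objects, zeroing entries whose
    probability falls below the (assumed) delta threshold."""
    counts = Counter(labels)
    total = sum(counts.values()) or 1  # avoid division by zero
    feature = []
    for obj in semantic_objects:
        p = counts[obj] / total
        feature.append(p if p >= delta else 0.0)
    return feature

# Toy example: object labels "detected" in three indoor images.
images = [
    ["chair", "table", "lamp", "chair"],
    ["bed", "lamp", "chair"],
    ["table", "chair", "sofa"],
]
pattern_dict = build_pattern_dictionary(images)
sem_objs = extract_semantic_objects(pattern_dict, top_k=3)
feat = semantic_feature(images[0], sem_objs, delta=0.1)
```

The resulting vector's dimension equals the number of retained semantic objects, which is consistent with the abstract's claim that the features are low-dimensional compared with competing representations.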
Language eng
DOI 10.1109/ACCESS.2019.2925002
Indigenous content off
HERDC Research category C1 Refereed article in a scholarly journal
Copyright notice ©2019, IEEE
Persistent URL http://hdl.handle.net/10536/DRO/DU:30128209

Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Created: Mon, 29 Jul 2019, 13:15:57 EST

Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.