Making the most of the self-quotient image in face recognition

Arandjelovic, Ognjen 2013, Making the most of the self-quotient image in face recognition, in FG 2013 : Proceedings of the 10th IEEE International Conference on Automatic Face and Gesture Recognition, IEEE, Piscataway, N.J., pp. 1-7, doi: 10.1109/FG.2013.6553708.

Title Making the most of the self-quotient image in face recognition
Author(s) Arandjelovic, Ognjen
Conference name Automatic Face and Gesture Recognition. IEEE International Conference (10th : 2013 : Shanghai, China)
Conference location Shanghai, China
Conference dates 22-26 Apr. 2013
Title of proceedings FG 2013 : Proceedings of the 10th IEEE International Conference on Automatic Face and Gesture Recognition
Editor(s) [Unknown]
Publication date 2013
Conference series IEEE International Conference on Automatic Face and Gesture Recognition
Start page 1
End page 7
Total pages 7
Publisher IEEE
Place of publication Piscataway, N.J.
Summary The self-quotient image is a biologically inspired representation which has been proposed as an illumination invariant feature for automatic face recognition. Owing to the lack of strong domain specific assumptions underlying this representation, it can be readily extracted from raw images irrespective of the person's pose, facial expression, etc. What makes the self-quotient image additionally attractive is that it can be computed quickly and in a closed form using simple low-level image operations. However, it is generally accepted that the self-quotient is insufficiently robust to large illumination changes, which is why it is mainly used in applications in which low precision is an acceptable compromise for high recall (e.g. retrieval systems). Yet, in this paper we demonstrate that the performance of this representation in challenging illuminations has been greatly underestimated. We show that its error rate can be reduced by over an order of magnitude, without any changes to the representation itself. Rather, we focus on the manner in which the dissimilarity between two self-quotient images is computed. By modelling the dominant sources of noise affecting the representation, we propose and evaluate a series of different dissimilarity measures, the best of which reduces the initial error rate of 63.0% down to only 5.7% on the notoriously challenging YaleB data set.
ISBN 1467355453
9781467355452
Language eng
DOI 10.1109/FG.2013.6553708
Field of Research 080104 Computer Vision
080106 Image Processing
Socio Economic Objective 970108 Expanding Knowledge in the Information and Computing Sciences
HERDC Research category E1 Full written paper - refereed
HERDC collection year 2013
Copyright notice ©2013, IEEE
Persistent URL http://hdl.handle.net/10536/DRO/DU:30057145

Document type: Conference Paper
Collection: Centre for Pattern Recognition and Data Analytics
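For illustration, the self-quotient image described in the summary above is computed with simple low-level operations: the input image is divided, pixel-wise, by a smoothed version of itself. Below is a minimal Python sketch, assuming an isotropic Gaussian filter in place of the weighted smoothing kernel of the original formulation; the function name and the sigma and eps parameters are illustrative only and do not come from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def self_quotient_image(image, sigma=2.0, eps=1e-6):
        # Work in floating point so the division is well behaved.
        image = np.asarray(image, dtype=np.float64)
        # Smooth the image; a plain isotropic Gaussian is used here as a
        # stand-in for the weighted smoothing kernel of the original method.
        smoothed = gaussian_filter(image, sigma=sigma)
        # The self-quotient image is the ratio of the image to its smoothed
        # version; eps guards against division by zero in dark regions.
        return image / (smoothed + eps)

Note that the paper's contribution concerns how the dissimilarity between two such images is computed, not the representation itself, so no specific dissimilarity measure is sketched here.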
Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Citation counts: Web of Science 0; Scopus 14
Access statistics: 86 abstract views, 3 file downloads
Created: Wed, 23 Oct 2013, 09:59:21 EST

Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.