Posted on 2008-01-01, 00:00. Authored by X. Geng, L. Wang, Ming Li, Q. Wu, K. Smith-Miles
Most work on multi-biometric fusion is based on static fusion rules, which cannot respond to changes in the environment or in the individual users. This paper proposes adaptive multi-biometric fusion, which dynamically adjusts the fusion rules to suit the real-time external conditions. As a typical example, the adaptive fusion of gait and face in video is studied. Two factors that may affect the relationship between gait and face in the fusion are considered, namely the view angle and the subject-to-camera distance. Together they determine how gait and face are fused at any given time. Experimental results show that the adaptive fusion performs significantly better than not only the single biometric traits but also the widely adopted static fusion rules, including SUM, PRODUCT, MIN, and MAX.
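To make the contrast between static and adaptive fusion concrete, the sketch below shows score-level fusion of normalized gait and face match scores. The static rules (SUM, PRODUCT, MIN, MAX) are those named in the abstract; the adaptive weighting function, its functional form, and all parameter choices are hypothetical illustrations, not the mapping used in the paper.

```python
import numpy as np

# Static score-level fusion rules named in the abstract, applied to
# normalized match scores from the gait and face matchers.
STATIC_RULES = {
    "SUM": lambda g, f: g + f,
    "PRODUCT": lambda g, f: g * f,
    "MIN": np.minimum,
    "MAX": np.maximum,
}


def adaptive_fusion(gait_score, face_score, view_angle_deg, distance_m):
    """Illustrative adaptive weighted-sum fusion (a sketch, not the paper's rule).

    Assumption: a view angle of 0 degrees is frontal (favoring face) and
    90 degrees is a side view (favoring gait), and face reliability decays
    with subject-to-camera distance. The actual mapping from (view angle,
    distance) to fusion parameters in the paper is not reproduced here.
    """
    angle = np.clip(view_angle_deg, 0.0, 90.0)
    # Hypothetical reliability estimates in [0, 1].
    face_reliability = np.cos(np.radians(angle)) * np.exp(-distance_m / 10.0)
    gait_reliability = np.sin(np.radians(angle))
    w_face = face_reliability / (face_reliability + gait_reliability + 1e-9)
    # Weighted sum: face dominates for frontal, close-range views;
    # gait dominates for side views and larger distances.
    return w_face * face_score + (1.0 - w_face) * gait_score


# Example: frontal view at close range, so the face score dominates.
fused = adaptive_fusion(gait_score=0.4, face_score=0.9,
                        view_angle_deg=10, distance_m=3)
```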
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Publication classification
E1 Full written paper - refereed
Copyright notice
2008, IEEE
Title of proceedings
WACV 2008: Proceedings of the IEEE 2008 Workshop on Applications of Computer Vision